



Original Article
Pak J Ophthalmol 2017, Vol. 33, No. 1
See end of article for authors' affiliations
Correspondence to:
Anam Arshad
Postgraduate Trainee, Postgraduate Medical Institute, Lahore.
Email: anam_1038@hotmail.com
Purpose: To study the reliability of rubrics in the mini clinical evaluation exercise (mini-CEX) in ophthalmic examination.
Study Design: Observational cross-sectional study.
Place and Duration of Study: The study was conducted at the Ophthalmological Society of Pakistan, Lahore branch, on September 17, 2015.
Material and Methods: Sixteen raters were recruited from the candidates eligible for the fellowship exit exam. All raters were provided with a rubric to evaluate the clinical performance of the cover/uncover (squint assessment) test. Every rater gave scores (2-5) for 12 steps of the clinical examination. All scores were entered into SPSS version 20, and Cronbach's alpha coefficient of inter-rater reliability and internal consistency of scores was determined.
Results: Sixteen raters with an age range of 26-35 years (mean 29, SD ± 1.99) took part in this study; 7 were male and 9 were female. After analysis of the sixteen raters' scores in SPSS, Cronbach's alpha was found to be very high (0.972), and the intra-class correlation coefficient was 0.967. Descriptive statistics showed that the sixteen raters gave mean ratings between 3.3 and 4.0 for each step of the rubric.
Conclusion: Rubrics are effective in achieving high inter-rater reliability in the mini-CEX, making it a very useful tool in the assessment of clinical skills.
Keywords: Rubrics, mini-CEX, inter-rater reliability, variability.
Clinical skills of residents in many specialty training programs have been assessed using the mini-clinical evaluation exercise (mini-CEX). This tool provides both assessment and education for residents in training^1 and its validity has been established^2. The mini-CEX is also a feasible and reliable evaluation tool for postgraduate residency training^3. The number of feedback comments makes the
mini-CEX a useful assessment tool^4. To some extent, such a tool may predict the future performance of medical students^5. The mini-CEX has been well received by both learners and supervisors^6.
All program directors require valid assessment of resident performance in order to certify the competence of trainees completing their residency^7,8. However, valid assessment of clinical skills can be challenging^9. The long case clinical evaluation exercise (CEX) was shown to be unreliable in research conducted by the American Board of Internal Medicine (ABIM), because its inter-rater and inter-case variability is quite high^10,11,12. The validity of mini-CEX scores could be improved by raising inter-rater reliability, which would also reduce the number of resident-patient encounters required^13. Consistency of examiner ratings is necessary to improve the reliability of assessment^14.
Use of topic-specific analytical rubrics can improve the reliability of performance scoring, especially when examples and/or training of raters are provided^15. Introducing rubrics into assessment makes the criteria and expectations very clear and also facilitates self-assessment and feedback; this is why learning is promoted and instruction is enhanced by the use of rubrics^15. We undertook this study to evaluate the reliability of a rubric-based mini-CEX as an assessment tool.
Our study was conducted at the Ophthalmological Society of Pakistan, Lahore branch, on September 17, 2015. It was an observational cross-sectional study using a non-probability consecutive convenience sampling technique. Sixteen raters were recruited from the candidates eligible for the fellowship exit exam who were attending a pre-examination preparatory course on clinical ophthalmology. Consent was signed by the raters, and their names and all other details were kept confidential. A demonstration of how to fill in the rubric was given to all participants before the actual test. All raters were provided with a rubric to evaluate the clinical performance of the cover/uncover (squint assessment) test (Figure 1). Each rater scored a single clinical performance by a junior resident, giving a score (2-5) for each of the 12 steps of the examination. Raters with incorrectly filled forms were excluded from the study. All scores were entered into SPSS version 20, and Cronbach's alpha coefficient of inter-rater reliability and internal consistency of scores was determined.
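The computation itself was done in SPSS; purely as an illustration of the statistic involved, the minimal Python sketch below computes Cronbach's alpha on a hypothetical 12-step x 16-rater score matrix, treating each rater as an "item" and each examination step as an observation. The data are randomly generated, not the study's.

```python
import numpy as np

# Hypothetical stand-in for the study data: 12 examination steps (rows)
# scored by 16 raters (columns) on the rubric's 2-5 scale.
rng = np.random.default_rng(seed=42)
scores = rng.integers(2, 6, size=(12, 16)).astype(float)

def cronbach_alpha(matrix):
    """Cronbach's alpha with raters treated as 'items' (columns) and
    examination steps as observations (rows)."""
    k = matrix.shape[1]                          # number of raters
    item_vars = matrix.var(axis=0, ddof=1)       # each rater's score variance
    total_var = matrix.sum(axis=1).var(ddof=1)   # variance of per-step totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")
```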
Figure 1: Resident Assessment Form (cover/uncover test); the first five of the 12 rubric steps are shown, each with a blank "Total Score" cell for the rater to fill in.

| Skill | Novice (Score 2) | Beginner (Score 3) | Advanced Beginner (Score 4) | Competent (Score 5) | Total Score |
| Introduction | Not introduced | Introduced as doctor; didn't ask patient's name | Introduced as doctor; asked patient's name | Inquired patient's name and well-being | |
| Informed consent | No consent | Didn't explain procedure | Didn't insist on fixation; didn't ask about refractive error | Fully explained the procedure | |
| Examination level | Didn't adjust | Inaccurate adjustment | Awkward adjustment | Accurate, proper adjustment | |
| Visual acuity | Not assessed | Assessed for near only | Assessed for far and near | Asked for Snellen's chart; assessed unaided and aided VA; recorded VA | |
| Hirschberg | Didn't perform | Didn't ask patient to look at spotlight | Asked to fixate at light, but light not held properly | Asked to fixate; light held centrally and stable | |
The intra-class correlation coefficient was computed as average measures under a one-way random effects model.
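A minimal sketch of that statistic, the average-measures, one-way random effects ICC (ICC(1,k) in Shrout-Fleiss terms), again assuming hypothetical scores rather than the study's data:

```python
import numpy as np

# Hypothetical scores: 12 steps (targets) x 16 raters, rubric scale 2-5.
rng = np.random.default_rng(seed=7)
scores = rng.integers(2, 6, size=(12, 16)).astype(float)

def icc_average_oneway(matrix):
    """ICC(1,k): average-measures intra-class correlation under a
    one-way random effects model."""
    n, k = matrix.shape
    step_means = matrix.mean(axis=1)
    grand_mean = matrix.mean()
    # Between-step and within-step mean squares from a one-way ANOVA.
    msb = k * ((step_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((matrix - step_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / msb

print(f"ICC(1,k): {icc_average_oneway(scores):.3f}")
```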
Table 4: Inter-rater reliability: mean and standard deviation of scores for each rater.

| Rater | Mean | Standard Deviation | Number |
| 1 | 3.3 | ± 0.77 | 12 |
| 2 | 4.0 | ± 1.1 | 12 |
| 3 | 4.2 | ± 1.1 | 12 |
| 4 | 3.4 | ± 0.90 | 12 |
| 5 | 3.7 | ± 1.1 | 12 |
| 6 | 3.5 | ± 1.0 | 12 |
| 7 | 3.5 | ± 1.0 | 12 |
| 8 | 3.2 | ± 0.75 | 12 |
| 9 | 3.8 | ± 0.93 | 12 |
| 10 | 3.3 | ± 0.88 | 12 |
| 11 | 3.4 | ± 0.79 | 12 |
| 12 | 3.4 | ± 0.90 | 12 |
| 13 | 4.0 | ± 1.2 | 12 |
| 14 | 3.5 | ± 1.0 | 12 |
| 15 | 3.6 | ± 1.1 | 12 |
| 16 | 3.7 | ± 1.2 | 12 |
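The per-rater descriptives in Table 4 reduce to a column-wise mean and standard deviation over each rater's 12 step scores; a small sketch in the same hypothetical setting as above:

```python
import numpy as np

# Hypothetical 12-step x 16-rater score matrix on the 2-5 rubric scale.
rng = np.random.default_rng(seed=3)
scores = rng.integers(2, 6, size=(12, 16)).astype(float)

# One row of output per rater, mirroring Table 4's layout.
for rater, column in enumerate(scores.T, start=1):
    mean, sd = column.mean(), column.std(ddof=1)
    print(f"Rater {rater:2d}: {mean:.1f} +/- {sd:.2f} (n = {column.size})")
```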
High reliability of assessment by medical examiners has been shown by several researchers when a rubric is introduced^15,16; conversely, reliability has never been found to decrease when rubrics are used. Many teachers therefore use rubrics on the assumption that grading objectivity is enhanced, especially regarding the performance of students. This leads to the postulation that when rubrics are not used in assessment there is more subjectivity, because the examiner has only a subjective judgment of the student's performance. Consequently, teachers usually prefer to incorporate a rubric into all their assessments^17.

There are, however, cases where inconsistent scores are produced even when rubrics are used. Inter-rater reliability can be affected by many factors, including "the objectivity of the task/item/scoring, the difficulty of the task/item, the group homogeneity of the examinees/raters, speediness, number of tasks/items/raters, and the domain coverage". Poor rater reliability has been observed when there is poor training of raters, insufficient detail in the rubric, or "failure of the examiners to internalize the rubrics"^18. Raters with diverse levels of scoring ability do not look at different results or performance features, but their understanding of the scoring criteria varies^19.

Injustice and bias are reduced in assessments by using rubrics, because the criteria for scoring a student's performance are clearly defined. The details given in the various score levels of a rubric act as a guide in the process of evaluation, and a well-designed scoring rubric can eliminate discrepancies between different raters^20. Rubrics enhance the reliability of scoring across students, along with the consistency between different raters. Another advantage of using a rubric is that a valid decision on performance assessment is achieved, which is not possible with conventional rating; complex competencies can be assessed with the desired validity by using rubrics^21.

In our study, the Cronbach's alpha coefficient for 16 raters was 0.972, showing relatively high internal consistency among the raters. A reliability coefficient of 0.70 or higher is considered "acceptable" in most research situations according to the Institute for Digital Research and Education, UCLA. D'Antoni et al. calculated the inter-rater reliability of 3 examiners who judged 66 first-year medical students using the MMAR (mind mapping assessment rubric) and obtained a Cronbach's alpha coefficient of 0.38^22. Fallatah et al. assessed the reliability and validity of assessment of sixth-year medical students at King Abdulaziz University by four examiners (2 seniors and 2 juniors); internal-consistency reliabilities for the total assessment scores were calculated, and Cronbach's alpha for the four parts of the total assessment score on long and short cases (2012) or OSCE (2013) was 0. and 0.83 for 2012 and 2013, respectively^23. Daniel et al. studied inter-rater reliability in evaluating the microsurgical skills of ophthalmology residents; Cronbach's alpha was found to be 0.72^24. Golnik et al. observed that the Ophthalmic Clinical Evaluation Exercise (OCEX) is a reliable tool for faculty to assess the clinical competency of residents, with a Cronbach's alpha reliability coefficient of 0.81^25.
Rubrics are effective in achieving high inter-rater reliability in the mini-CEX, making it a very useful tool in the assessment of clinical skills.
Authors' Affiliations
Dr. Anam Arshad: Postgraduate Trainee, Postgraduate Medical Institute, Lahore.
Prof. Muhammad Moin: Professor of Ophthalmology, Postgraduate Medical Institute, Lahore.
Dr. Lubna Siddiq: Senior Registrar, Department of Ophthalmology, Postgraduate Medical Institute, Lahore.
Role of Authors
Dr. Anam Arshad: Data collection and manuscript writing.
Prof. Muhammad Moin: Study design, manuscript review.
Dr. Lubna Siddiq: Statistical analysis.