Journal of Postgraduate Medicine
 Open access journal indexed with Index Medicus & EMBASE  

LETTER
Year : 2021  |  Volume : 67  |  Issue : 1  |  Page : 59-60  

Reply to Letter to Editor regarding the article, "The power of subjectivity in competency-based assessment"

A Virk1, A Joshi2, R Mahajan3, T Singh4,  
1 Adesh Medical College & Hospital, Shahabad (M), Haryana, India
2 Pramukhswami Medical College, Karamsad, Gujarat, India
3 Adesh Institute of Medical Sciences & Research, Bathinda, Punjab, India
4 SGRD Institute of Medical Sciences and Research, Amritsar, Punjab, India

Correspondence Address:
T Singh
SGRD Institute of Medical Sciences and Research, Amritsar, Punjab
India




How to cite this article:
Virk A, Joshi A, Mahajan R, Singh T. Reply to Letter to Editor regarding the article, "The power of subjectivity in competency-based assessment". J Postgrad Med 2021;67:59-60.
Available from: https://www.jpgmonline.com/text.asp?2021/67/1/59/308712





We thank the learned responders for showing interest in our article.[1] It appears to us that the responders have totally missed the point that, throughout the paper, the issue of expert subjective judgment is discussed in the context of performance assessments in competency-based curricula, not knowledge assessment. Once the paper is read from that perspective, without isolated statements being quoted out of context, the criticisms leveled become redundant. We will nevertheless try to answer them.

By quoting an MCQ example, the responders have tried to project our statement "Objective assessments generally use 'norm-referenced' approach without any specified criteria, while in subjective assessment the performance of students is generally assessed against a pre-determined criterion and thus follows 'criterion-referenced' approach" as misleading.

Most of this discussion pertains to performance assessments, and examples using MCQs are therefore inappropriate. It is reiterated here that every test item is developed by someone, often a subject expert, and represents a value judgment about what content should be tested and what the best possible answer is.[2] While one can argue that either criterion or norm referencing can be subjective or objective, what we have asserted is that subjective assessments follow a criterion-referenced approach, as the performance of students is generally assessed against a predetermined criterion; this statement is based on the accepted position in the literature.[3] Moreover, the point of contention here is not the use of norm or criterion referencing for the computation of results, but the application of these concepts at the time of decision-making. Once the assessment is over, the result can be computed in any way depending on the requirement; it is the "reference" at the point of decision-making that matters.

The responders have also objected to the statement "in a sense, objective assessment is comparable to 'cross-sectional' study while subjective assessment is 'longitudinal' in nature" and called it misleading. We have clarified in the paper itself that in subjective assessment, opinions are built over a longitudinal period of supervision and observation, in the light of complex patterns of data collected, to reach a coherent collective decision. Subjective assessment is continuous in nature and has elements of formative assessment, thus making it comparable to a longitudinal study.

On our statement, "Standard setting itself is a purely subjective decision," the responders have cited a reference that misleads readers into believing that item response theory (IRT) is a standard-setting framework, which it is not.

IRT refers to a class of psychometric measurement models used to estimate examinee ability on the trait being measured and the difficulty of the examination items on the same scale, thereby eliminating the confounding of test difficulty with student proficiency.[4] This measurement model is used for accurate test scoring and for the development and evaluation of test items. Even when IRT-based standards are used to accommodate item difficulty, it is again the panelists whose ratings set the standards.[5] The Hofstee, modified Angoff, Nedelsky, and borderline-group methods, which are commonly used for standard setting in medical education, are largely based on the subjective judgment of expert panelists.[6] Although standards are subjective, they are not arbitrary and have to be defensible.
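For readers less familiar with these methods, the quantitative core of both can be sketched in a few lines. This is purely illustrative and not part of the original paper: the panelist ratings below are hypothetical, and the sketch uses the simplest (Rasch, one-parameter) IRT model and the basic modified Angoff arithmetic.

```python
import math

def rasch_p(theta, b):
    """Rasch (1-parameter IRT) probability that an examinee of ability
    theta answers correctly an item of difficulty b; theta and b sit on
    the same logit scale, which is what removes the confounding of test
    difficulty and student proficiency."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def angoff_cut_score(ratings):
    """Modified Angoff: each panelist estimates, per item, the probability
    that a 'borderline' candidate answers correctly; the cut score is the
    mean over panelists of their summed estimates."""
    sums = [sum(r) for r in ratings.values()]
    return sum(sums) / len(sums)

# Hypothetical ratings from three panelists on a four-item test.
ratings = {
    "panelist_1": [0.6, 0.7, 0.5, 0.8],
    "panelist_2": [0.5, 0.6, 0.6, 0.7],
    "panelist_3": [0.7, 0.8, 0.4, 0.9],
}

print(round(angoff_cut_score(ratings), 2))  # cut score in raw marks out of 4
print(round(rasch_p(0.0, 0.0), 2))          # matched ability and difficulty
```

Note that even in this mechanical computation, every input probability is an expert's subjective estimate of how a borderline candidate would perform, which is the point the paragraph above makes.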

Our paper does not present any conflict or debate between subjectivity and objectivity. Rather, we are trying to give subjectivity its due, especially in competency-based assessments. The paper offers the possibility of accepting and acknowledging subjectivity as an inherent component of measuring performance, one that has often been ignored in the pursuit of objectivity. Competence is a complex phenomenon, and regardless of whether it is defined in terms of traits (knowledge, skills, problem-solving skills, and attitudes), competencies, or competency domains, human judgment is crucial to interpreting assessment results. Expert subjective judgment is also needed to collate information across individual data points; when those data points are information-rich and contain qualitative information, simple quantitative aggregation is out of the question and resorting to expert judgment becomes imperative.[7] Even within conventional education and conventional assessment, the value of expert subjective decisions is being recognized, as evidenced by the increasing use of the mini-Clinical Evaluation Exercise (mini-CEX), Direct Observation of Procedural Skills (DOPS), Professionalism Mini-Evaluation Exercise (P-MEX), and Multisource Feedback (MSF), among others, all of which are largely subjective tools. Though not used in the MCI model of curriculum, awarding an Entrustable Professional Activity (EPA) to a trainee is again a highly subjective decision, and EPAs are widely used wherever competency-based medical education is in place.

Programmatic assessment (PA) is well grounded in theoretical notions about assessment and is based on sound empirical research. Nevertheless, PA needs to be explored further, as medical educators first need to identify ways to compile subjective data for comparison across, or against, some standard for high-stakes decision-making. The most important prerequisite for this is the high validity and credibility of these assessments, so that they actually support learners in their learning process and in the acquisition of competencies.[8] PA uses expert subjective judgment as one of its tenets, and we as educators need to be clear about the role of subjectivity in performance assessment. The decision to promote a student, based on multiple data points (including those from highly objective tests) accumulated over a period, is made by a committee and again depends heavily on subjective judgments. We feel that this could be one of the reasons for the slow acceptance of PA, despite its being theoretically sound. Understanding this process may open up opportunities to design and shape assessment practices in a meaningfully positive way, harnessing the power of expert subjectivity. Only then is PA likely to be considered by medical educationists for general use.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

References

1Virk A, Joshi A, Mahajan R, Singh T. The power of subjectivity in competency-based assessment. J Postgrad Med 2020;66:200-5.
2ten Cate O, Regehr G. The power of subjectivity in the assessment of medical trainees. Acad Med 2019;94:333-7.
3Keynan A, Friedman M, Benbassat J. Reliability of global rating scales in the assessment of clinical competence of medical students. Med Educ 1987;21:477-81.
4Downing SM. Item response theory: Applications of modern test theory in medical education. Med Educ 2003;37:739-45.
5DeChamplain AF. Standard setting methods in medical education: High-stakes assessment. In: Swanwick T, Forrest K, O'Brien BC, editors. Understanding Medical Education: Evidence, Theory and Practice. 3rd ed. Oxford: Wiley; 2019. p. 356.
6Ben-David MF. AMEE Guide No. 18: Standard setting in student assessment. Med Teach 2000;22:120-30.
7van der Vleuten CPM, Schuwirth LWT, Driessen EW, Dijkstra J, Tigelaar D, Baartman LKJ, et al. A model for programmatic assessment fit for purpose. Med Teach 2012;34:205-14.
8Rotthoff T. Standing up for subjectivity in the assessment of competencies. GMS J Med Educ 2018;35:Doc29. doi: 10.3205/zma001175.
