The power of subjectivity in competency-based assessment
A Virk1, A Joshi2, R Mahajan3, T Singh4
1 Adesh Medical College & Hospital, Shahabad (M), Haryana, India
2 Pramukhswami Medical College, Karamsad, Gujarat, India
3 Adesh Institute of Medical Sciences & Research, Bathinda, Punjab, India
4 SGRD Institute of Medical Sciences and Research, Amritsar, Punjab, India
Source of Support: None. Conflict of Interest: None.
DOI: 10.4103/jpgm.JPGM_591_20
Keywords: Competency-Based Medical Education, objectivity, reliability, rubrics, validity
It is generally acknowledged that assessment drives learning; however, assessment can have both intended and unintended consequences.[1] What and how students learn depends largely on how they think they will be assessed; hence, assessment should become a strategic tool for enhancing teaching and learning in higher education.[2] Competency-Based Medical Education (CBME) entails the attainment of observable abilities by students in a learner-centered manner, with emphasis on outcomes considered relevant to the daily practice of medicine. The much-awaited implementation of the competency-based Indian undergraduate medical curriculum commenced with the 2019 session. This paradigm change has shifted the focus towards the application of knowledge in real-life situations. The adoption of CBME speaks of a trend in medical education towards framing assessment around general competencies, thus reflecting a larger, more comprehensive idea of physician competence as seen through the eyes of society.[3] This shift towards CBME has challenged medical educators to develop new methods of teaching and of assessing clinical and professional competence.[4] CBME has been implemented in India at a stage when medical educationists had just begun to understand the use of objectivity and standardized assessment methods. This pursuit of objectivity was also accompanied by a notion that every assessment must be objective for it to be of any value. The tilt towards objectivity in medical education was largely supported by the notion that professional roles can be broken down and operationalized into individual elements of defined knowledge or skills, which, even when acquired and assessed independently, can be aggregated to yield an assessment of overall professional competence.[5] In CBME, the learning process is the most crucial focus. A focused approach of reflection on and exploration of the learning process, supported by feedback, can help develop ability in the sense of performance. Competency-based assessment should also aim to make summative decisions on competence or the lack of it for assessment of learning, thus mandating that the full potential of “expert subjective judgement” be tapped for learners' longitudinal and monitored development. This article attempts to purposefully and critically discuss the role of subjectivity and expert judgement as an indispensable component of assessment in the context of the recent implementation of CBME in Indian undergraduate medical education.
Presently, it does make one wonder whether standardized assessment methods will guarantee similar performance in a real-life situation, inclusive of the variability of actual clinical practice. Medical care demands that medical graduates possess the ability to integrate different competencies for optimal patient care.[6] Can entire competence be considered a stacking of individually completed tasks with respect to knowledge, skills, attitudes, and communication? While many clinical tasks can be broken down into a sequence of standardized steps to follow, it is not always possible to break down complex skills such as team collaboration, professionalism, and communication, which instead undergo a process of longitudinal development.[7] These skills are best assessed through direct observation in real-life settings, under non-standardized (or actual) conditions, in which professional, expert subjective judgement becomes imperative.[8] Moreover, once these skills are internalized and the process becomes automated, graduates actually learn to skip many steps during diagnosis or treatment, rendering checklist-based, objective assessments redundant. There is nothing completely objective in any assessment; all assessments are guided by the attitudes, values, orientation, and prior experience of the assessor. The checklists drafted for objectively oriented assessments are largely influenced by subjective opinions. While marking checklists for steps performed or tasks accomplished, nothing stops an assessor from marking wrongly if the assessor has decided beforehand to be “biased”. Lastly, many steps mentioned in checklists often involve forming expert subjective opinions before marking. The process of blueprinting, though introduced to improve the validity of an assessment, again involves expert subjective judgements, right from assigning a value of clinical relevance, impact, or frequency to drafting items, keys, and distractors.[9] Standard setting itself is purely an expert subjective decision. Lowering the criteria when enough candidates have not been able to meet the cut-off, as in postgraduate entrance examinations, is not based on any objective method and is a purely subjective decision. What we actually do is make subjective decisions but try to measure them objectively, a process termed objectification by van der Vleuten, which does not give better results compared to expert subjective opinion.[10] Performers' variability and heterogeneity need to be given their due space while assessing clinical competence; as such, an inflexible, objective assessment is bound to lose its utility in competency-based assessment. Accordingly, medical educationists have justifiably cautioned against objectifying competency-based assessment.[9],[10],[11] This leaves us with the option of exploring the application of expert subjective judgement in a competency-based curriculum. Before building our case for the importance and power of subjectivity in competency-based assessment, let us briefly discuss the points differentiating objective from subjective assessment.
Objective assessment implies the use of information that is collected through measuring, observing, and examining facts, while subjective assessment implies information based primarily on personal views, opinions, or value judgements. Expert subjective opinion can rate performance at higher levels of simulation and is often flexible, involving less time, effort, and cost. Such expert subjective judgement can add a consequential flavor to student assessment and is indispensable for effective assessment. The most important point in favor of subjectivity remains that it offers immense opportunity for feedback to students.[12] Objective assessments generally use a “norm-referenced” approach without any specified criteria, while in subjective assessment the performance of students is generally assessed against a pre-determined criterion, thus following a “criterion-referenced” approach. Objective assessment provides an opportunity for wide sampling of the curriculum in a single assessment within a small timeframe. In contrast, expert subjective assessment is often based upon rating overall performance at a higher level of simulation over an extended period of observation.[12] In a sense, objective assessment is comparable to a “cross-sectional” study while subjective assessment is “longitudinal” in nature. Objective assessment is often overvalued. The notion of objectivity, even if not entirely wrong, is not flawless. Dividing a whole activity into smaller steps, assessing each step objectively as it is performed, and then marking the assessed as competent may be a workable approach for some tasks, but not for all. Consider disassembling a cycle into its various components and then assembling it again: you can do it even after disassembling the cycle to the last nut and bolt, but you cannot disassemble a frog into component parts and then assemble it back into a frog![13] Clinical competence has many tasks which are more than the mere stacking of knowledge and skills. On the other hand, subjective assessment is often undervalued by suggesting that it is based upon the whims and fancies of the assessor. This is not true. For any given task, the internal consistency of scores from subjective ratings may be low, but they always show higher consistency across tasks compared to objective assessment.[14] An additional advantage is its continuous, longitudinal nature, providing immense feedback opportunities. The utility of observation-based, considered expert subjective judgement in competency-based assessment can neither be ignored nor challenged.
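To make the norm-referenced versus criterion-referenced distinction concrete, the following minimal Python sketch contrasts ranking a student within the cohort with judging the same score against a fixed criterion; the scores, names, and cut-off are hypothetical illustrations, not values drawn from any examination described in this article.

# Illustrative sketch: norm-referenced vs criterion-referenced decisions.
# The scores, names, and cut-off below are hypothetical examples only.

def norm_referenced_rank(scores, student):
    """Rank a student against the cohort: the result depends on peers' scores."""
    ordered = sorted(scores.values(), reverse=True)
    return ordered.index(scores[student]) + 1   # 1 = top of the cohort

def criterion_referenced_pass(score, cut_off=70):
    """Judge a score against a pre-determined criterion, independent of the cohort."""
    return score >= cut_off

scores = {"A": 82, "B": 68, "C": 74}
print(norm_referenced_rank(scores, "B"))       # 3, because of how peers performed
print(criterion_referenced_pass(scores["B"]))  # False, because the criterion is not met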
Some of the inherent characteristics of competency-based assessment are its continuous nature, its basis in direct observation over a period of time, its focus on a criterion-referenced approach, and its plentiful feedback opportunities for midcourse correction. Let us take a relook at some of the characteristics of expert subjective assessment, as delineated in the sections above. Expert subjective assessment is criterion-referenced, and opinions are built over a period of supervision and observation. That also gives us many formative assessment opportunities [Figure 1]. Both “subjective assessment” and “competency-based assessment” thus synchronize well in terms of their approach.
Competency-based assessment implies that the domains which have not been assessed till now in medical graduates for want of objective methods, such as empathy, professionalism, ethics, and other soft skills, should also be assessed. Expert subjective judgement with global ratings provides an opportunity to assess them too. In fact, subjective assessment is the key to assessing professional competence, particularly in areas such as teamwork, professionalism, communication, criticality, reflexivity, and ethics, and cannot simply be discarded by tagging it as “biased”. These values underpin the intrinsic nature of the medical profession and demand subjective judgement that goes beyond technical proficiency.[12] The use of multiple assessors in subjective assessment provides judgement as well as feedback from multiple experts. This not only makes the feedback richer but also increases the reliability of the subjective assessment.[9] Subjectivity also provides enough room for workplace-based assessment, which in turn empowers competency-based assessment. The areas warranting the use of subjective assessment and its potential utilization in students' assessment and in monitoring longitudinal professional development are detailed in Box 1 (the list is not exhaustive).[4] Box 1: Potential utilization of subjective assessment
Subjective assessment ratings are easy to build, are not resource intensive, and the effective cost involved is much less than that of objective assessment. Medical education, at both the undergraduate and postgraduate levels, can be enriched by the use of subjective assessment, particularly after the introduction of the competency-based curriculum. Though the power of subjectivity in competency-based assessment cannot be denied, the real challenge lies in improving its rigor and acceptability.
Miller's pyramid provides a practical framework for understanding assessment at different levels of professional competence. Assessment at the lower levels (predominantly directed at knowledge, application of knowledge, and demonstration of skills) is more or less accepted as “established”, while assessment at the highest or “does” level (predominantly directed towards direct observation in the workplace) is still “evolving”. Assessment of competence is based, to a large extent, upon expert observation and subjective judgement. The value of subjective assessment using expert judgement can be improved by including multiple contexts and assessors, because many subjective judgements help in drawing a firm inference from the aggregated results; by triangulation and saturation of information, thus providing direction for collective decision making; and by using bias-reduction techniques that throw light on the process of decision making.[15] Some other modalities that can help build rigor in subjective assessment are listed in Box 2. Box 2: Modalities to build rigor in subjective assessment
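As a purely illustrative sketch of how ratings from multiple assessors and contexts might be aggregated and screened for divergence before a collective decision, one could tabulate each expert's rating per competency and flag those where the spread of opinions warrants discussion. The competencies, rating scale, and divergence threshold below are hypothetical and are not taken from the article or any published tool.

# Hypothetical sketch: aggregating multiple expert ratings per competency and
# flagging items where assessors diverge. All names and values are made up.
from statistics import mean, stdev

ratings = {  # competency -> ratings from different assessors/encounters (1-5 scale)
    "communication":   [4, 4, 5, 3],
    "professionalism": [2, 5, 3, 4],
    "history_taking":  [4, 4, 4, 5],
}

DIVERGENCE_THRESHOLD = 1.0  # arbitrary cut-off marking "experts disagree, review together"

for competency, scores in ratings.items():
    avg, spread = mean(scores), stdev(scores)
    flag = "review collectively" if spread > DIVERGENCE_THRESHOLD else "consistent"
    print(f"{competency:15s} mean={avg:.1f} spread={spread:.2f} -> {flag}")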
Workplace-Based Assessment (WPBA) refers to a group of assessment techniques that assess students' performance in clinical settings.[16] The sheer strength of WPBA is its observation-based formative potential, allowing the learner to navigate her learning towards the desired learning objectives.[17] WPBA tools involve subjective assessment in one form or another, are more reliable, and, owing to these features, are highly acceptable. Interaction with other members of the health team is also best assessed at the workplace. This interaction fosters development in the difficult-to-define, domain-independent competencies, such as professionalism and communication skills. These competencies can gain much from direct observation and enriched feedback. The workplace also offers opportunities to observe and assess the student on a day-to-day basis, with fewer chances for students to mask their behavior. In comparison to traditional assessments, which are often opportunistic with non-representative sampling, direct observation allows for better sampling of the work that physicians actually do, rather than assessing them on patients whom they are unlikely to handle in practice. Trust and confidentiality are vital to the success of multi-source feedback. It can be said that feedback is based on assessment, and student-teacher interaction promotes effective feedback. Studies have established that narrative, descriptive, and linguistic information is often much richer and more appreciated by learners.[18] Where numerical scores may fail to divulge what the learner actually did and what she should do to improve, qualitatively meaningful feedback can strengthen the process of assessment manifold. We need to contemplate carefully the ways in which subjective judgements should be secured, analyzed, and aggregated. Educators must also search for ways to put together subjective data that are comparable between experts or against a standard for the purpose of high-stakes decision making. One way is by designing rubrics for such expert judgement-based assessments.
In graduate medical education, global rating of resident physicians by faculty is one of the most widely used methods of assessment. Yet research on such rating forms shows wide variability in validity and reliability. Likert-type rating-scale assessments consisting of numeric ratings, when accompanied by qualitative labels such as competent or not competent, often yield scores that are subjectively derived and of limited value in formative assessment, because they lack detailed performance expectations and behavioral descriptions for each domain.[19],[20] A rubric, with its detailed qualitative description of each rating, solves this problem. A rubric refers to a “performance standard” for a student population. It is a type of scoring tool that contains criteria for performance, with descriptions of levels of performance, that can be used for performance assessments. Performance tests are generally used to determine whether a learner has mastered specific skills, and the instructor typically makes inferences about the level to which the skill has been mastered. Such learner-centric assessment tools are meant to augment performance while enriching education through experience, providing a constant review of the results achieved vis-a-vis the outcome desired. The rubric serves as a tool to measure the equivalence, educational effect, catalytic effect, validity evidence, and acceptability of an assessment, along with importance weightings for each item.
Multi-faceted qualitative rubric assessments work best for assessing various professional competencies. Competency gaps have been documented at every transition from undergraduate to graduate to postgraduate environments. As a result, assessment based on day-to-day activity cannot be ignored. Assessment needs to be part and parcel of instruction rather than an appendage to the process.[21] The main issue is whether the standardization and objectivity of evaluations can be reliably maintained in a complex, simulated, clinically relevant, and contextually appropriate setting.[22] Using rubrics in day-to-day assessments can help extract pointed information about student understanding at the workplace. Rubrics emphasize the use of experts to evaluate, in performance assessment, the complex, multi-faceted characteristics of the tasks undertaken.[22] Demonstration of good clinical reasoning and reflection, communication and consulting skills, good team working, and continuing development are some of the important attributes of professional competence which cannot be adequately captured through objective tools or checklists alone. The evidence set appropriate for the demonstration of competence therefore needs to be broader and qualitatively much more developed than might be supposed.[23] An outcome-based assessment rubric is a novel, systematic instrument for documenting improvement in clinical learning.[24] Clinical evaluation remains challenging even for the most seasoned faculty, and the rubric is one tool that can provide a learner-centred assessment approach focused on encouraging behavioural change in learners, besides improving the value and power of subjective assessment. Evidence suggests that focusing more on learner-centric assessments tends to specifically help low-achieving students, thereby improving overall learning. The literature suggests that rubrics are being used widely in the field of medicine, if not by a large number of instructors.[25] The rubric is an effective method for formalizing subjective assessment and making it more reliable across different assessors.[22],[24] Educators from different parts of the world are supporting the use of rubrics as a tool for subjective assessment of clinical skills, clinical reasoning, performance-based assessment, surgical procedures, alignment of entrustable professional activities, surgical competency, technical skills for uterine compression sutures, reflective writing, critical thinking, communication skills, and interprofessional skills. An elaboration on how one can use a rubric in a clinical setting can be retrieved from https://medschool.ucla.edu/workfiles/site-current/policies/ClinicalGradingRubric201819.pdf. Herein, the rubric grid covers a range of professional competencies such as history-taking skills, physical examination, communication skills, professionalism, and ethics. These competencies are described in the form of individual criteria. Each criterion has a specific significance and a description for each level of achievement, thus helping learners to be more pragmatic about their professional skills.
For example, in evaluating competency in history-taking skills, level 1 indicates that the trainee often misses key information, level 2 that the learner is able to gather a complete medical history, level 3 that the learner consistently gathers a complete and accurate history, and level 4 that the learner excels in gathering a complete, accurate, and relevant history. A description is provided under each level for easy comprehension. Rubrics can be used as vital tools in CBME for improving assessment, feedback, and learning, and for addressing the problems attributed to subjectivity in assessment. In addition, they bring coherence and consistency to assessment. A properly devised rubric can help students recognize and assess their own strengths and weaknesses. The characteristics of a good, effective rubric recommended for subjective judgement in medical education are detailed in Box 3. Box 3: Characteristics of a good, effective rubric
Prior to developing a rubric, one needs to write a clear description of the procedure or skill to be assessed. This description may be incorporated into the rubric itself or exist as a separate document. Beyond this, a rubric generally consists of three main levels, which compose a rubric grid[26],[27] [Figure 2].
This description defines the behaviours that differentiate the “excellent” level from the “needs improvement” and “critical error” levels. When developing the rubric, it is imperative to decide how many performance levels are adequate for each skill in the scale.
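As a minimal sketch of how such a rubric grid could be represented for day-to-day use, each criterion can map its performance levels to short behavioural descriptors, and an observed level can then be turned into descriptive feedback rather than a bare number. The criteria and four-level descriptors below are hypothetical illustrations loosely modeled on the history-taking example above, not the UCLA rubric linked earlier.

# Hypothetical rubric grid: criteria with level descriptors (illustrative only).
RUBRIC = {
    "History taking": {
        1: "Often misses key information",
        2: "Gathers a complete medical history",
        3: "Consistently gathers a complete and accurate history",
        4: "Excels in gathering a complete, accurate and relevant history",
    },
    "Communication": {
        1: "Frequently unclear; ignores patient cues",
        2: "Generally clear; responds to most patient cues",
        3: "Consistently clear and empathetic",
        4: "Adapts communication expertly to patient and context",
    },
}

def feedback(observed_levels):
    """Translate observed levels into descriptive feedback lines for the learner."""
    lines = []
    for criterion, level in observed_levels.items():
        lines.append(f"{criterion}: level {level} - {RUBRIC[criterion][level]}")
    return "\n".join(lines)

print(feedback({"History taking": 3, "Communication": 2}))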
With the introduction of CBME, medical education has entered a 'post-psychometric' phase of assessment, with very diverse domains of expertise. The renewed interest in reviving subjective judgement in assessment puts forth some interesting questions about what judgement is and how multiple judgements can be rationally aggregated without compromising the abundance of expert perspectives. Studies have shown that subjective assessment can be used with some degree of objectivity for continuous assessment.[28] The Medical Council of India (MCI) module on assessment for undergraduate medical education lays stress on ongoing developmental feedback, direct observation, multiple assessors, and the use of multiple tools for students' assessment under the competency-based curriculum, while advocating the need to implement a low-stakes assessment system at the institutional level.[29] The same has been emphasized in the literature.[4],[12],[30] The big challenge that lies ahead is to understand how subjectivity in assessment can be reintroduced while retaining 'rigour' in assessment. As argued by Hodges, clinical assessment of students can be compared to clinical judgement: “With experience, expert clinicians become more rapid and more accurate in their recognition of patterns. There is no reason to believe that this process does not operate in education”.[31]
Rotthoff highlighted the role of subjectivity in the assessment of competencies by stating, “we run the risk of investing our resources in the best possible standardization of exams or perfecting checklists and scales in context to outcome-based CBME rather than focussing on the importance of the 'learning process' which is crucial for Competency-Based Education”.[32] What we need to do is look for ways and means of improving the acceptability of expert subjective assessment methods that are valid in the given context, feasible and quick, based upon observation, and of high educational impact.
Financial support and sponsorship: Nil.
Conflicts of interest: There are no conflicts of interest.