Assessment in competency-based medical education: A paradigm shift

N. N. Rege, Professor Emeritus, Department of Pharmacology and Therapeutics, Seth GS Medical College and KEM Hospital, Mumbai, Maharashtra, India

Source of Support: None. Conflict of Interest: None. DOI: 10.4103/jpgm.JPGM_1182_20
There will be no two opinions if I say that a health care practitioner has to be competent to provide health care and should also provide it with compassion. In all societies, the general expectations from physicians have remained universal. Physicians should be trustworthy, respect patients, give appropriate advice, and communicate effectively with patients, their families, and members of the community. They should keep abreast of newer developments in the field and use them judiciously to improve the quality of care. They should be ethical, responsible, and accountable professionals.[1],[2] These and many more attributes have emerged since ancient times and are mentioned in various systems of medicine. For example, the ancient granthas of Ayurveda, the Charaksamhita and the Sushrutsamhita, contain a detailed elaboration of "vaidyagunas",[3],[4] which are not different from what we expect today. In any medical curriculum, these attributes are specified upfront, and course elements are selected in such a way that students acquire these attributes as they progress through the course. The question is whether they are assessed at the time of graduation to ensure that the graduating students are ready to serve society. Are the graduates fit for the purpose? Unfortunately, not all of them. If we look at past assessments, we realize that it is the knowledge component that gets assessed the most; next come skills, while the domain of attitudes is often overlooked,[5] the main reason being that knowledge is easy to assess. The many assessment tools available to test knowledge, viz. long essay questions, short notes, and multiple-choice questions (MCQs), are familiar to all, well standardized, and analyzed using psychometrics. The skills related to clinical history taking, examination of patients, or, to some extent, communication are broken down into elements that can be assessed using the objective structured clinical examination (OSCE). But the remaining complex skills and attitudes, viz.
teamwork skills, humanistic qualities affecting patient care, etc., are not assessed at all. Thus, there is a mismatch between the assessment and the expected outcomes. The graduates may have the requisite knowledge but not the other attributes expected of a doctor. Why are these qualities not assessed? One of the reasons is that no tools are available to assess them objectively. Our focus has always remained on bringing as much objectivity as possible into the assessment, mainly to avoid bias in decision making while assessing students. Expert judgement is viewed negatively because it can carry an element of bias.[6] However, over the last two decades, one can observe that changes are slowly taking place. There is a realization that competence is contextual, develops with experience, and varies over time.[6] Assessment tools have been developed, viz. the mini-clinical evaluation exercise (mini-CEX), direct observation of procedural skills (DOPS), case-based discussions (CBD), multisource feedback (MSF), and the mini-peer assessment tool (mini-PAT), which are based on expert subjective judgement utilizing global rating scales[7],[8] and are widely used in clinical settings for workplace-based assessment. It is also being recognized that, in the 21st century, a systems-based approach needs to be adopted in education to develop physicians who have leadership attributes, skills to work in interprofessional teams, the ability to use information technology, and values with social accountability at the core.[9] These attributes require the integration of multiple competencies, which vary depending on the diversity and complexity of clinical scenarios. It has been proposed that, rather than using a reductionist approach of assessing individual competencies, gathering evidence about the level of performance of a specific professional task, using multiple methods and adequate sampling, is a better approach.
Narrative descriptions given by the assessors about the same serve as effective feedback to the learner for improving his or her level. Such collective perspectives increase the reliability of assessment of these integrated competencies.[10] So far, we, the Indian faculty members, were following the traditional system of assessment. But now, with the introduction of the new competency-based medical education (CBME), there is a major change in our curriculum. The Medical Council of India (MCI) has specified roles for the graduate, which include clinician, communicator, leader and member of a team, life-long learner, and professional, as well as the competencies related to these roles.[11] As all the competencies cannot be mastered overnight or over a short period, the progress of the student throughout the course needs to be monitored to assess the attainment of competencies. Thus, assessment has to be continuous.[12] Formative assessment with feedback is an integral part of CBME.[13] In short, we can no longer rely on the present structure of term-ending examinations, prelims, and one final University examination. We need a plan of assessment, perhaps as a matrix in which each assessment activity can track a given competency.[13] As the competencies develop over a period, assessment of the process of learning is vital[12] and can provide an idea of how the student is accepting responsibility for his or her own learning. Self-assessment, peer assessment, direct observation, and feedback from the teacher all serve as valuable methods that can shed light on the learning process.
Thus, rather than relying on an individual assessor, the assessment should rely on the wisdom of multiple assessors of different categories.[12] All these assessments are assessments for learning, or formative assessments, which have been emphasized by the MCI in its assessment module.[14] The MCI has made a special effort to list what needs to be assessed apart from knowledge competencies, and that includes various skills like clinical reasoning, procedural skills, communication, self-directed learning, reflections by the students, individual and group activities of the foundation course, early clinical exposure, electives, and the Attitude, Ethics and Communication (AETCOM) module. In addition, various learning opportunities in the class, clinical facility (wards or out-patient departments), or community should be looked at as potential sources of assessment.[14] We need to assess teamwork and leadership skills in the context of different educational settings. This means we need more assessment tools in our tool kit. These tools should be easy to apply considering the number of assessment opportunities, should provide quicker decisions so that remedial action can be taken, should be feasible, and should also have enough rigor to be acceptable. So, we need to adopt non-traditional options as well.[15] In this endeavor, the article by Virk et al., entitled "The power of subjectivity in competency-based assessment,"[16] published under the Educational Forum of the current issue, will be of tremendous help to the faculty. The article[16] begins by clearing certain misconceptions regarding core principles of assessment. It then unfolds before us how expert subjective opinions form the foundation of many elements, like checklist marking, blueprint preparation, and standard setting, which are used to build objectivity into the assessment.
It makes the case for expert subjective opinions stronger by explaining their strengths and by comparing their characteristics with those of competency-based assessment. It provides, in a nutshell, the areas wherein we can use subjective assessment. Though these points start convincing the reader about the value of subjective assessment, the authors have not forgotten to warn that the challenge lies in rigor and acceptability. They focus on strategies that can build rigor. These words of caution reassure readers who believe in objectivity. The authors have discussed one more tool that can overcome the fallacies of rating scales or global descriptions, and that is the rubric. As stated by the authors in the article,[16] "it is a type of scoring tool that contains criteria for performance with descriptions of levels of performance that can be used for performance assessments." After clarifying the concept of the rubric, the authors give various examples wherein rubrics have been used successfully, viz. clinical skills, surgical procedures, critical thinking, reflective writing, communication skills, and so on. Sufficient references are provided for readers who desire further in-depth reading. But the best part of the article lies in the insight it provides into constructing a rubric. There is a proverb: "Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime." Thus, beyond merely describing rubrics, the article gives readers a direction for developing them. The development of tools for subjective assessment can help in making reliable decisions about the progress of the learner during training or about the graduates' fitness to practice.[11] Remember, this is not an easy task and needs validation, but it opens up an opportunity for readers to make an attempt for those competencies for which validated rubrics are not available.
This article, apart from building the theoretical base, provides ready-to-use information to motivate one to apply it in practice. It is now the turn of the readers of this article to use subjective assessment judiciously while assessing students, taking due care of rigor, and then to share their experiences for the benefit of others.