Assessment toolbox for Indian medical graduate competencies

T Singh1, S Saiyad2, A Virk3, J Kalra4, R Mahajan5

1 Medical Education Unit, Sri Guru Ram Das University of Health Sciences, Amritsar, Punjab, India
2 Department of Physiology, Smt. NHL Municipal Medical College, Ahmedabad, Gujarat, India
3 Department of Community Medicine, Adesh Medical College and Hospital, Kurukshetra, Haryana, India
4 Department of Pharmacology, Himalayan Institute of Medical Sciences, Dehradun, Uttarakhand, India
5 Department of Pharmacology, Adesh Institute of Medical Sciences and Research, Bathinda, Punjab, India
Source of Support: None. Conflict of Interest: None.
DOI: 10.4103/jpgm.JPGM_1260_20
Keywords: Assessment, assessment toolbox, competency, competency-based medical education, feedback, global competencies, Indian medical graduate
The Medical Council of India (MCI) adopted competency-based medical education (CBME) for the training of medical undergraduates in India from the academic session 2019.[1] This adoption was achieved after an extensive exercise of faculty development and capacity building through training of medical faculty in the basic course, the advance course, and the curriculum implementation support program (CISP), framing of draft guidelines, and refining those guidelines after placing them in the public domain. The Indian Medical Graduate (IMG) has been defined as "a graduate possessing requisite knowledge, skills, attitudes, values and responsiveness, so that she or he may function appropriately and effectively as a physician of first contact of the community while being globally relevant", and the new curriculum is an effort to ensure that every graduate passing out of medical colleges is competent to perform these roles. A set of global competencies, besides subject-specific competencies, has been documented for the IMG to fulfill five roles, viz., clinician, leader, communicator, life-long learner, and professional. For a better contextual understanding, it may be worthwhile to look at the definition of competency: "habitual, consistent and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflections in daily practice for the benefit of the individual being served".[2] A perusal of this definition makes it clear that competency-based assessment has to consider many attributes other than knowledge and skills, as well as their application in a consistent and habitual manner. Continuous mentoring, feedback, and self-reflection in a learning-oriented educational environment, besides many other factors, will eventually lead to the professional development of a student into an IMG [Figure 1].
What is going to change for undergraduate training under CBME? Assessment is going to see a paradigm shift, not only in the nature of the tools used but also in the ways assessment will be used and the inferences that will be drawn from it. Not only knowledge and skills but also attributes like clinical reasoning, emotions, values, and reflections need to be assessed. Assessment will have to be designed to create more learning opportunities and to contribute towards the professional development of the student. It needs to be fortified by continuous feedback to the student. Though the MCI has delineated certain specifications for assessing subject-specific competencies, no such framework has been provided for global competencies. In this article, we have tried to develop a toolbox for the assessment of the 35 global competencies of the IMG.
The assessment toolbox borrows largely from the example of a mechanic who always carries a full toolbox, even though all the tools are not needed during every repair job. Essentially, a toolbox has the following characteristics:
Let us illustrate the concept of the toolbox with an example. Suppose the bulb in your room is not working and you call an electrician. He will come to your house with his toolbox, which will not only have a spare bulb but also wires, a screwdriver, a hammer, pliers, and even a drill machine. Chances are that he will simply replace the bulb. However, if he finds that the wires are also loose, he will use the screwdriver to tighten them. In case the holder is coming off the wall, he may also use the hammer and pliers to fix it. In some situations, he may even need to drill a hole in the wall to make the fixtures sturdy. The assessment toolbox works the same way. In most situations, the first-mentioned tool can be used but, if needed, even a second or third may have to be used. For example, if after using a long case it is felt that the student has issues with eliciting physical signs, he can be given an objective structured clinical examination (OSCE) for identification and remediation of specific issues.[3] Similarly, if communication is identified as a problem, the mini peer assessment tool (mPAT) can be used for more specific information and feedback. This provides flexibility in using tools depending on needs and requirements, by encouraging the use of multiple tools for one competency and the assessment of multiple competencies by one tool. It is thus a move away from the "one tool-one competency" model commonly prevalent in our setting. In the assessment toolbox explained in the next sections, tools have been prioritized in the context of different global competencies. These "rankings" represent the "expert subjective judgment" of the authors, reached after thorough discussion and literature review. The "ranking" of the tools does not indicate the superiority or inferiority of any tool. It only indicates desirability and utility, and provides flexibility in case a tool is not usable for any reason. From that perspective, the "rankings" are more nominal than ordinal. A fallout of this approach is the need to build the capacity of assessors to use all the tools. Actual use will, of course, be decided by the context, assessment literacy, experience, and expertise of the assessor. The world over, competency curricula generally suggest a toolbox from which assessors can pick and choose tools.[4]
Data were collected by using "assessment of core competencies," "assessment of professionalism," "assessment of communication skills," "assessment of competencies," "workplace based assessment," and "direct observation of procedural skills" as search keywords in the English language on PubMed, Google Scholar, and ScienceDirect, and by searching the reference lists of extracted articles (references of references). The limits applied were: articles published after 1990, English language, and studies in medical education. Full papers relevant to our objectives were used for this review. This resulted in the selection of 3,637 articles. The list was scaled down to 232 after removing duplicates and screening for assessment of global competencies and their attributes. Finally, 31 articles were used for compiling the proposed toolbox, after ascertaining the scientific value of the papers and excluding articles targeting the same assessment tool/method [Figure 2].
The Regulations on Graduate Medical Education (2019) contemplate that the IMG should exhibit 35 core competencies at the time of graduation to fulfill the five roles. Clinical competence is a complex construct requiring multiple tools for its assessment. Common assessment tools that can be used for the assessment of various competencies have been listed in [Table 1]. A variety of assessment tools are available, each having its strengths and weaknesses. Hence, multiple assessment tools are recommended for the assessment of clinical competence, not only to compensate for the weaknesses but also to complement and supplement the strengths of the various tools. The concept of the utility of assessment described by van der Vleuten (validity, reliability, feasibility, educational impact, and acceptability)[5] can be used to suggest appropriate tools.
The tools suggested for assessment range from commonly used ones like multiple choice questions (MCQs), to relatively uncommon ones like the m-CEX, to virtually unknown ones like the script concordance test (SCT). Though no single tool can be labeled as complete, a combination can address most global competencies. For example, the OSCE, BPE, SP, DOPS, and tools based on rating scales like the m-CEX test clinical skills very well, along with communication skills, professionalism, and the student's leadership qualities, as and when applicable. Tools stimulating reasoning, such as the SCT and case-based discussions (CBDs), may stimulate higher-order thinking and self-directed learning, thus motivating students towards becoming life-long learners in a very subtle manner. Multisource feedback (MSF), the mini peer assessment tool (mPAT), and patient surveys (PS) can be used for receiving feedback regarding professional behavior, leadership, and communication. Long and short cases can assess most global competencies very well as they are rather holistic: they require a strong knowledge base and involve assessment of clinical as well as communication skills, but in case of specific issues they may need to be supplemented, as explained above. It may be mentioned that the psychometric characteristics of most tools depend on the way they are used, rather than being an innate property.[6] Many of the common tools like MCQs, short answer questions (SAQs), essay-type questions (EAQs), OSCE, MSF, mPAT, standardized patients (SP), global ratings, check-lists, and the key feature test (KFT) have been described earlier, along with their strengths and limitations.[7],[8] Some other tools are described briefly here. The long case has been purposely included in the description because of certain issues around its use.

Script Concordance Test (SCT)

The script concordance test (SCT) is a relatively newer assessment tool designed to reflect students' competence in interpreting clinical data under ambiguous or uncertain conditions, and aims to simulate the authentic conditions of medical practice. SCTs comprise a sequence of short clinical scenarios/cases/vignettes, each followed by a set of questions having three parts. For each question, the first part ("If you were thinking of…") provides a hypothesis in the form of a diagnostic, therapeutic, prognostic, or bioethical consideration; the second part ("…and then you find…") presents additional information, such as a physical examination finding, a pre-existing condition, an imaging study, or a laboratory test result, that may (or may not) have an effect on the given option. The question is answered in the third part ("…this hypothesis becomes:"), which contains a 5-point Likert-type response scale (ranging from −2 to +2). Students are required to indicate on the Likert scale what they think would be the possible effect of the new information (part 2) on the proposed hypothesis (part 1).[9] The SCT format is designed to test clinical reasoning in uncertain situations and is based on "the principle that the multiple judgments made in these clinical reasoning processes can be probed and their concordance with those of a panel of reference experts can be measured".[10]
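Concordance with the expert panel is usually quantified by aggregate scoring: each response option earns credit in proportion to the number of panelists who chose it, with the modal answer earning full credit. Below is a minimal sketch of that scoring logic, assuming aggregate scoring as the method in use; the function names and panel data are illustrative, not taken from any published implementation.

```python
from collections import Counter

def sct_item_credits(panel_responses):
    """Partial credits for one SCT item under aggregate scoring.

    panel_responses: Likert answers (-2..+2) given by the reference
    expert panel for this item. Each answer earns credit proportional
    to how many experts chose it; the modal answer earns 1.0.
    """
    counts = Counter(panel_responses)
    modal = max(counts.values())
    return {answer: n / modal for answer, n in counts.items()}

def score_student(student_answers, panel_answer_sets):
    """Sum a student's credits across all SCT items."""
    total = 0.0
    for answer, panel in zip(student_answers, panel_answer_sets):
        credits = sct_item_credits(panel)
        total += credits.get(answer, 0.0)  # answers no expert chose earn 0
    return total

# Illustrative 2-item test scored against a 10-expert panel:
panel = [
    [+1, +1, +1, +2, +2, 0, +1, +1, +2, +1],  # item 1: modal answer +1
    [-2, -1, -2, -2, -1, -2, -2, 0, -2, -2],  # item 2: modal answer -2
]
print(score_student([+2, -1], panel))  # 3/6 + 2/7 = 0.79 (approx.)
```

The design choice worth noting is that credit is continuous rather than right/wrong: a defensible minority answer still earns partial credit, which is precisely how the SCT rewards reasoning under uncertainty.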
Short case

Short cases are used to assess clinical competence. Students are asked to perform a supervised, specific physical examination of a real patient, and are then assessed on the examination technique, the ability to elicit physical signs, and the ability to interpret the findings correctly. In order to increase the sample size and reliability, several cases may be used in any one assessment.[11]

Long case

Traditionally, students are allotted a long case in which they elicit the history and examine a real patient for an uninterrupted and unobserved time, ranging from 30 to 45 minutes. The students then summarize their findings and plan to the examiners, who follow it up with an unstructured oral examination about the patient's problem and relevant topics. The long case has been in use in both summative and formative examinations on account of its perceived educational impact.[12] Various modifications have been introduced to build validity and reliability into the long case format, such as observing the students during history taking and physical examination, training the examiners to structure the oral examination process, and increasing the number of cases seen by the student.[13] A more structured presentation of an unobserved long case was developed by Gleeson: the objective structured long examination record (OSLER), which includes a structured format of the long case with direct observation of the student while interacting with the patient. The OSLER is a powerful tool for providing feedback and thus has tremendous potential to increase clinical competence.[14] It may be pertinent to elaborate on the long case—which forms the mainstay of clinical assessment in India—as an assessment tool. For many years now, the long case has not found a mention in contemporary assessment literature and seems to have been written off,[15] for various reasons. Even add-ons like the OSLER did not improve its acceptance. To be fair, the long case resembles the actual clinical encounter much better than, say, the OSCE[4]; however, to achieve acceptable reliability for summative decisions, a minimum of 10 cases and 20 examiners may be needed[16] (the formula sketched after this section illustrates why sampling more cases helps). We, therefore, need to make a deliberate effort to overcome some of its limitations and supplement it with other methods like the OSCE and m-CEX. Using some time to observe the student taking the history or performing the physical examination—something like an "extended" OSCE—can add to the assessment information. The MCI module on assessment has also made this suggestion.[17]
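The gain from sampling more cases can be illustrated with the standard Spearman-Brown prophecy formula, a general psychometric result (the numerical values below are illustrative, not taken from reference [16]): if a single case yields reliability ρ₁, a composite of k cases is expected to yield

```latex
% Spearman-Brown prophecy formula: expected reliability of a
% composite of k cases, given single-case reliability \rho_1.
\rho_k = \frac{k\,\rho_1}{1 + (k - 1)\,\rho_1}
% e.g., with \rho_1 = 0.3: one case gives 0.30, five cases 0.68,
% and ten cases 0.81 -- hence the call for sampling multiple cases.
```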
Chart Stimulated Recall Oral Examination (CSR)/Case-Based Discussion (CBD)

In a chart-stimulated recall (CSR) examination, students are assessed on clinical cases through a structured and standardized oral examination. The examiner asks the student questions relating to the care provided, probing for the justification behind the case work-up, diagnosis, interpretation of clinical findings, and case management plans. Each examination encounter is expected to last about 20 minutes, including 5 minutes of feedback. In some settings, CSRs are also called case-based discussions (CBDs). CSRs promote autonomy and self-directed learning as they follow a "one to one" discussion and reflection format.[18]

Direct Observation of Procedural Skills (DOPS)

DOPS is a structured rating scale that ensures that students are given specific feedback based on direct observation, so as to improve their procedural skills. Commonly performed procedures for which students are expected to demonstrate competence, such as endotracheal intubation, nasogastric tube insertion, administration of intravenous medication, venepuncture, peripheral venous cannulation, and arterial blood sampling, can be assessed using DOPS. These are best assessed by multiple clinicians on multiple occasions throughout the training period.[19]

Mini Clinical Evaluation Exercise (m-CEX)

The mini clinical evaluation exercise (m-CEX) requires students to interact with patients in authentic workplace-based encounters while being observed by faculty members. Students perform clinical activities, such as eliciting a focused history or performing a physical examination, after which they summarize the patient encounter along with the next steps (e.g., a clinical diagnosis and a management plan). Each aspect of the clinical encounter is scored by a faculty member using a 9-point rating scale.[20] The m-CEX is mostly used for formative purposes.[21]

Professionalism Mini Clinical Evaluation Exercise (P-MEX)

The P-MEX is a modified version of the m-CEX, specifically assessing professionalism. It is a structured observation tool containing 24 items, which are rated on a 4-point scale. Multiple observations are carried out and the results are discussed with the student. Its reliability and validity have been reported as acceptable.[22]

Clinical Encounter Cards (CEC)

The CEC system is quite similar to the mini-CEX. It assesses and scores dimensions of observed clinical practice such as history-taking, physical examination, professional behavior, technical skill, case presentation, problem formulation (diagnosis), and problem-solving (management). Each dimension is scored using a 6-point rating scale. This tool has been shown to be a feasible, valid, and reliable measure of clinical competence, provided enough encounters (approximately 8) are assessed.[23]

Clinical Work Sampling (CWS)

This assessment tool is also based on direct observation of clinical performance in the workplace and requires the collection of data relating to specific patient encounters across a number of different domains, either at the time of admission (admission rating form) or during the hospital stay (ward rating form). These forms are filled in by faculty members who directly observe student performance. Students are also assessed by the nursing staff and the patients in their care. All rating forms use a 5-point rating scale ranging from unsatisfactory to excellent performance.[24]

Blinded Patient Encounters (BPE)

This formative assessment tool, so called because the patient is unknown (blinded) to the student, forms part of undergraduate bedside teaching sessions. Students, in small groups of 4–5, participate in a directly observed bedside teaching session in which one of them performs a focused interview or physical examination as instructed by the tutor. Thereafter, the student is asked to provide a diagnosis and a differential diagnosis based on the clinical findings. Once the presentation is over, the tutor lays emphasis on demonstrating the relevant clinical features of the case and discussing the management of the patient's presenting condition. The session concludes with feedback on the student's performance.[25]

Portfolios

A portfolio is a collection of student work that provides evidence of the achievement of learning over time. It includes documentation of learning and progression, and its key feature is reflection on these learning experiences.
Portfolio documentation may include case reports; records of practical procedures undertaken; audio/video recordings of consultations; project reports; ethical dilemmas encountered and their handling; learning plans; and written reflections on what has been learnt from the evidence provided.[26] However, portfolios may not always be considered very practical, owing to the time and intense effort involved in their compilation and evaluation.[27]

Patient Surveys

Patient surveys are used to assess satisfaction among patients with regard to hospital or clinic visits and mostly include questions about the care provided by the physician. The questions typically pertain to the time given to the patient, the knowledge- and skill-based competence of the physician, the physician's professional attributes, and the overall quality of care.[28] Expert subjective judgment is gradually making a comeback for the assessment of clinical competence, especially for CBME,[29] and many of the tools mentioned above use the power of expert subjectivity. As argued earlier, subjectivity does not mean arbitrariness and can give results as reliable as any other method, with the advantage that it allows the assessment of many traits that were earlier excluded from assessment due to the non-availability of suitable objective tools.
van der Vleuten described the utility of an assessment as a notional product of five attributes, viz., reliability, validity, feasibility, acceptability, and educational impact (rendered notationally below), implying that the limitation of one attribute can be compensated by another, thereby improving the overall utility of an assessment tool.[5] As such, assessment requires many compromises regarding validity and reliability, many of which happen in an unplanned manner. Using the concept of utility, a meaningful compromise can be made.[30] This also makes it possible to use tools with supposedly lower reliability (e.g., subjective assessment of, say, professionalism) but high educational impact. No assessment tool is inherently good or bad; it is the way the tool is utilized that makes all the difference. The teacher's proficiency in using the right tool, in the right context, and in deriving the right inferences from it determines the validity, and simultaneously affects the reliability, of an assessment tool.[31] Some assessment tools like CSR/CBD, DOPS, CEC, CWS, m-CEX, portfolios, and mPAT have high educational impact. This may be attributed to variable proportions of some inherent and useful properties such as specificity; standard criteria of assessment based on validated performance standards; the advantage of testing multiple competencies; the opportunity to seek feedback from several experts/assessors; direct observation; multiple feedback opportunities; built-in feedback; reflections; documentation; and real patient encounters (as in CWS, CEC, mini-CEX, and CSR/CBD). Many of these tools with high educational impact purposively fragment a single comprehensive task/competency, which enables stepwise identification of the nadir, i.e., the weakest point. Whether it is the 9-point rating scale in the BPE and m-CEX or the 6-point rating scale of the CEC, multiple encounters act as essential checkpoints in an apparently continuous process of learning and assessment.
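As a notational sketch of this "notional product" (our rendering, with feasibility standing in for the cost-efficiency of the original formulation), the utility index can be written so that a near-zero value on any attribute drags the whole product down, while strength on one attribute can partly offset weakness on another:

```latex
% Utility of an assessment (after van der Vleuten, 1996).
% Each attribute may additionally carry a context-dependent weight.
U = R \times V \times E \times A \times F
% R: reliability, V: validity, E: educational impact,
% A: acceptability, F: feasibility (cost-efficiency in the original)
```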
As stated earlier, the toolbox provides general guidance about the availability and possible use of a tool—it is not a compendium on assessment tools. The validity of interpretations will largely depend on the way a tool is used. It may also be noted that many of these tools are easier to use for formative and internal assessments (e.g., MSF, mPAT, portfolios, and reflections) than for summative ones. For ease of presentation, the toolbox table for global competencies has been bifurcated in the following sections as per the roles assigned to the IMG. Though the ratings that can be assigned to each attribute of an assessment tool are relative and contextual, we have tried to align the assessment tools detailed above to Miller's levels of assessment, as well as to assign a generalized rating to the assessment attributes for each tool [Table 2], based upon the available literature.[30],[32],[33] However, we hasten to add that the ratings in [Table 3], [Table 4], [Table 5], [Table 6], and [Table 7] are in effect prioritizations and reflect a nominal rather than an ordinal perspective.
Role as Clinician—Assessment of related global competencies

As documented, the IMG must be a "Clinician, who understands and provides preventive, promotive, curative, palliative and holistic care with compassion."[5] The suggested tools for assessment of the global competencies related to the role as clinician have been listed in [Table 3].

Role as Leader—Assessment of related global competencies

The IMG must be a "Leader and member of the health care team and system," as documented in the amended MCI Graduate Medical Education Regulations.[5] The suggested tools for assessment of the global competencies related to the role as leader have been listed in [Table 4].

Role as Communicator—Assessment of related global competencies

Communication skills, both verbal and non-verbal, are very important for a medical professional for developing rapport with the patient. Communication skills have been documented to improve patient compliance with treatment.[34] Studies have shown that effective doctor-patient communication results in better health outcomes.[35] Good communication skills make the patient an informed partner in the management plan and decrease the chances of subsequent litigation against doctors.[36] As per the MCI document, an IMG should be a "Communicator with patients, families, colleagues and community".[5] For assessment of the global competencies related to the IMG's role as a communicator, the suggested tools have been documented in [Table 5].

Role as Life-long Learner—Assessment of related global competencies

Life-long learning refers to learning practiced by an individual throughout life; it is flexible and accessible at all times.[37] In the medical profession, new evidence is added to the existing literature every now and then, new diagnostic tests are explored, and new treatment guidelines are published, and a medical graduate needs to stay updated on all of these. The MCI has documented that the IMG should be a "Life-long learner committed to continuous improvement of skills and knowledge".[5] For assessment of the global competencies related to the role as life-long learner, the suggested tools have been documented in [Table 6].

Role as Professional—Assessment of related global competencies

Professionalism is a habitual construct that includes key beliefs and virtues that build the trust of the public in doctors.[38] The American Board of Medical Specialties asserts that "Medical professionalism is a (normative) belief system about how best to organize and deliver health care, which calls on group members to jointly declare (profess) what the public and individual patients can expect regarding shared competency standards and ethical values and to implement trustworthy means to ensure that all medical professionals live up to these promises".[39] The IMG should be a "Professional who is committed to excellence, is ethical, responsive and accountable to patients, community and the profession."[5] For assessment of the global competencies related to the role as professional, the suggested tools have been documented in [Table 7].
The value of educational feedback to students in their professional development is unequivocal.[40] Feedback is a commitment towards helping the professional development of the trainee through direct observation of academic performance, and it can be a useful aid in picking up borderline performers. Hence, the tool alone is neither the sole culprit for poor performance standards nor the sole contender for good performance. Feedback, the less noticeable accomplice of assessment, also needs standardization before the educational impact of assessment tools can be compared in its true, undiluted essence. Feedback that is time-appropriate, specific, and synchronized with the learning cycle helps correct fluid cognitive processes while they are being consolidated in the mind.[40] However, it will be most effective when embedded into the blueprint of formative assessments, where assessments are systematically and intentionally followed by feedback. Since specificity, direct observation, multiple assessors, and multiple opportunities are built into assessment tools like CWS, CEC, m-CEX, and CSR/CBD, these tools are more compatible with the ideology of feedback, and the two go hand in hand. Since learning difficulties can be student-specific, tools with multiple encounters or checkpoints help identify specific learning difficulties and tailor feedback accordingly. The potential of tools providing opportunities for formative assessment and feedback needs to be tapped for the acquisition of competencies by students.
Any curriculum is a live document, amenable to development with time. The new CBME curriculum adopted by the regulatory body for undergraduate training in India is still in its infancy. All kinds of theoretical, empirical, pragmatic, and experiential evidence will be needed, in a collaborative way, to develop it in the right direction. Many modules are being released by the regulatory body to enrich the existing knowledge about the implementation of such a program. The tools detailed in this paper are one step further in supplementing the existing guidelines on assessment in this new competency-based curriculum. We have developed this theory-driven toolbox on a best-match basis. The actual use of these tools, and the benefits from them, will depend on multiple factors. The adequacy and representativeness of samples will be a major influence, as will the assessment literacy, experience, and expertise of the assessor. And this is not unexpected!

Financial support and sponsorship: Nil.

Conflicts of interest: There are no conflicts of interest.