Journal of Postgraduate Medicine
 Open access journal indexed with Index Medicus & ISI's SCI  
EDUCATION FORUM
Year : 2021  |  Volume : 67  |  Issue : 2  |  Page : 80-90

Assessment toolbox for Indian medical graduate competencies


1 Medical Education Unit, Sri Guru Ram Das University of Health Sciences, Amritsar, Punjab, India
2 Department of Physiology, Smt. NHL Municipal Medical College, Ahmedabad, Gujarat, India
3 Department of Community Medicine, Adesh Medical College and Hospital, Kurukshetra, Haryana, India
4 Department of Pharmacology, Himalayan Institute of Medical Sciences, Dehradun, Uttarakhand, India
5 Department of Pharmacology, Adesh Institute of Medical Sciences and Research, Bathinda, Punjab, India

Date of Submission: 21-Nov-2020
Date of Decision: 14-Feb-2021
Date of Acceptance: 25-Mar-2021
Date of Web Publication: 30-Apr-2021

Correspondence Address:
R Mahajan
Department of Pharmacology, Adesh Institute of Medical Sciences and Research, Bathinda, Punjab
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpgm.JPGM_1260_20



 :: Abstract 


The new competency-based medical education curriculum for the Bachelor of Medicine and Bachelor of Surgery has been implemented in a phased manner in medical colleges across India since 2019. The Graduate Medical Education Regulations enlist a total of 35 global competencies for the five roles expected of an Indian medical graduate: clinician, communicator, leader, professional, and life-long learner. Along with effective implementation of the new curriculum, both in spirit and in action, it is imperative to assess the listed competencies. The new curriculum demands a more careful and mature selection of assessment tools, based on the competency and its expected level of achievement. It is these two variables that make choosing the right assessment method not just a matter of choice, but also of expertise. An array of tools in our armamentarium can sometimes confuse the teachers. Using the right tool, in the right context, at the right juncture, supplemented by other tools and backed by constructive feedback, can help nurture the good intent ingrained in the competency-based curriculum. Hence, an attempt was made to compile an assessment toolbox for the various global competencies. A PubMed, Science Direct, and Google Scholar search with relevant keywords was carried out. To the initially extracted 90,121 articles, limitations were applied, duplicates were removed, and screening for assessment of global competencies and their attributes was done to select 232 articles. Finally, 31 articles were used for designing the proposed toolbox. Prioritization of the tools for the global competencies was based on a thorough literature review and extensive discussion. The evolved assessment toolbox is presented in this article, and should help teachers pick the most useful methods of assessment for global competencies.


Keywords: Assessment, assessment toolbox, competency, competency-based medical education, feedback, global competencies, Indian medical graduate


How to cite this article:
Singh T, Saiyad S, Virk A, Kalra J, Mahajan R. Assessment toolbox for Indian medical graduate competencies. J Postgrad Med 2021;67:80-90

How to cite this URL:
Singh T, Saiyad S, Virk A, Kalra J, Mahajan R. Assessment toolbox for Indian medical graduate competencies. J Postgrad Med [serial online] 2021 [cited 2021 May 16];67:80-90. Available from: https://www.jpgmonline.com/text.asp?2021/67/2/80/315367





 :: Introduction


The Medical Council of India (MCI) adopted competency-based medical education (CBME) for the training of medical undergraduates in India from the academic session 2019.[1] This adoption was achieved after an extensive exercise of faculty development and capacity building through training of the medical faculty in the basic course, advance course, and curriculum implementation support program (CISP), framing of draft guidelines, and rectifying those guidelines after placing them in the public domain. The Indian Medical Graduate (IMG) has been defined as "a graduate possessing requisite knowledge, skills, attitudes, values and responsiveness, so that she or he may function appropriately and effectively as a physician of first contact of the community while being globally relevant", and the new curriculum is an effort to ensure that every graduate passing out of medical colleges is competent to perform these roles.

A set of global competencies, besides subject-specific competencies, has been documented for the IMG, for fulfilling the five roles of the IMG, viz. clinician, leader, communicator, life-long learner, and professional. For a better contextual understanding, it may be worthwhile to look at the definition of competency, which means the "habitual, consistent and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflections in daily practice for the benefit of the individual being served".[2] A perusal of this definition makes it clear that competency-based assessment has to consider many attributes other than knowledge and skills, and also their application in a consistent and habitual manner. Continuous mentoring, feedback, and self-reflection in a learning-oriented educational environment, besides many other factors, will eventually lead to the professional development of a student into an IMG [Figure 1].
Figure 1: Professional development of an Indian Medical Graduate



What is going to change for undergraduate training under CBME? Assessment is going to see a paradigm shift, not only in the nature of the tools used, but also in the ways assessment will be used and the inferences that will be drawn from it. Not only knowledge and skills but also attributes like clinical reasoning, emotions, values, and reflections need to be assessed. Assessment will have to be designed in a way that creates more learning opportunities and contributes towards the professional development of the student. It needs to be fortified by continuous feedback to the student. Though MCI has delineated certain specifications for assessing subject-specific competencies, no such framework has been provided for global competencies. In this article, we have tried to develop a toolbox for the assessment of the 35 global competencies of the IMG.


 :: Concept of Assessment Toolbox


The assessment toolbox borrows largely from the example of a mechanic who always carries his full toolbox, though not all the tools are needed during every repair job. Essentially, a toolbox has the following characteristics:

  1. It will have multiple tools for various types of tasks.
  2. Not all tools will be used on all occasions.
  3. If the main tool is not usable for any reason, an alternate can be used. The alternate tool can also be used to supplement the effect of the main tool.
  4. Each task can be performed by multiple alternative tools.
  5. Each tool can perform more than one task.
  6. The result depends more on the way of using the tool and the expertise of the user rather than on the tool per se.


Let's illustrate the concept of the toolbox with an example. Suppose the bulb in your room is not working and you call an electrician. He will come to your house with his toolbox, which will not only have a spare bulb but will also have wires, a screwdriver, a hammer, pliers, and even a drill machine. Chances are that he will simply replace the bulb. However, if he finds that the wires are also loose, he will use the screwdriver to tighten them. If the holder is coming off the wall, he may also use the hammer and pliers to fix it. In some other situations, he may even need to drill a hole in the wall to make the fixtures sturdy.

The assessment toolbox works the same way. In most situations, the first mentioned tool can be used but, if needed, even a second or third may have to be used. For example, if after using a long case it is felt that the student has issues with eliciting physical signs, the student can be given an objective structured clinical examination (OSCE) for the identification and remediation of specific issues.[3] Similarly, if communication is identified as a problem, the mini peer assessment tool (mPAT) can be used for more specific information and feedback. This provides flexibility in using tools depending on the needs and requirements, by encouraging the use of multiple tools for one competency and the assessment of multiple competencies by one tool. It is thus a move away from the "one tool-one competency" model commonly prevalent in our setting.

In the assessment toolbox explained in the next sections, the tools have been prioritized in the context of the different global competencies. These "rankings" represent the "expert subjective judgment" of the authors, reached after thorough discussion and literature review. The "ranking" of the tools does not indicate the superiority or inferiority of any tool. It only indicates desirability and utility, and provides flexibility in case a tool is not usable for any reason. From that perspective, the "rankings" are more nominal than ordinal. A fallout of this approach is the need to build the capacity of assessors to use all the tools. Actual use will, of course, be decided by the context, assessment literacy, experience, and expertise of the assessor. The world over, competency curricula generally suggest a toolbox from which assessors can pick and choose tools.[4]
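For readers who think in code, the many-to-many relationship between competencies and tools, with nominal fallback, can be sketched in a few lines. The competencies, tools, and their ordering below are illustrative placeholders only, not the prioritization proposed later in this article:

```python
# Illustrative sketch of the toolbox idea: each competency maps to a
# nominally prioritized list of tools, and one tool can serve several
# competencies. The entries here are placeholders, not recommendations.
TOOLBOX = {
    "communication": ["long case", "mPAT", "MSF", "OSCE"],
    "clinical skills": ["long case", "OSCE", "DOPS", "m-CEX"],
    "professionalism": ["P-MEX", "MSF", "portfolio"],
}

def pick_tool(competency, unavailable=()):
    """Return the first usable tool for a competency, falling back to
    alternatives when a preferred tool cannot be used for any reason."""
    for tool in TOOLBOX[competency]:
        if tool not in unavailable:
            return tool
    raise ValueError(f"no usable tool for {competency}")

def competencies_covered(tool):
    """One tool can assess multiple competencies."""
    return [c for c, tools in TOOLBOX.items() if tool in tools]
```

For example, if the long case cannot be conducted, `pick_tool("clinical skills", unavailable={"long case"})` falls back to the OSCE, while `competencies_covered("long case")` shows the same tool serving both communication and clinical skills; this is the move away from "one tool-one competency".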


 :: Data Extraction Methodology


Data were collected using "assessment of core competencies," "assessment of professionalism," "assessment of communication skills," "assessment of competencies," "workplace based assessment," and "direct observation of procedural skills" as search keywords, in the English language, on the PubMed, Google Scholar, and Science Direct search engines, and by searching the reference lists of extracted articles (reference of reference). The limitations applied were: articles after 1990, English language, and studies in medical education. Full papers relevant to our objectives were used for this review. This resulted in the selection of 3,637 articles. The list was scaled down to 232 after removing duplicates and screening for assessment of global competencies and their attributes. Finally, 31 articles were used for compiling the proposed toolbox, after ascertaining the scientific value of the papers and excluding articles targeting the same assessment tool/method [Figure 2].
Figure 2: Data extraction methodology

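The screening logic described above (deduplication, limitations, relevance screening) can be sketched as a simple filter pipeline. This is only an illustration of the sequence of steps, with hypothetical record fields; it is not the authors' actual workflow:

```python
# Hypothetical sketch of the screening pipeline: drop duplicates, apply
# the stated limitations (year, language, field), then keyword-screen
# for relevance. Record fields (title, year, language, field) are
# assumptions for illustration.
def screen(records, year_min=1990, keywords=("competenc", "assessment")):
    seen, selected = set(), []
    for rec in records:
        key = rec["title"].strip().lower()
        if key in seen:                       # remove duplicates
            continue
        seen.add(key)
        if rec["year"] < year_min:            # limitation: articles after 1990
            continue
        if rec["language"] != "English":      # limitation: English language
            continue
        if rec["field"] != "medical education":
            continue
        if any(k in key for k in keywords):   # screen for relevance
            selected.append(rec)
    return selected
```

Each stage only narrows the list, mirroring the 3,637 to 232 to 31 reduction reported above, with the final expert appraisal of scientific value remaining a manual step.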



 :: Assessment Tools for Assessing Global Competencies


The Regulations on Graduate Medical Education (2019) contemplate that the IMG should exhibit 35 core competencies at the time of graduation to fulfill the five roles. Clinical competence is a complex construct requiring multiple tools for its assessment. Common assessment tools that can be used for the assessment of various competencies have been listed in [Table 1]. A variety of assessment tools is available, each having its strengths and weaknesses. Hence, multiple assessment tools are recommended for the assessment of clinical competence, not only to compensate for the weaknesses but also to complement and supplement the strengths of the various tools. The concept of the utility of assessment described by van der Vleuten (validity, reliability, feasibility, acceptability, and educational impact)[5] can be used to suggest appropriate tools.
Table 1: Available assessment tools for competency assessment



The tools suggested for assessment range from commonly used ones like multiple choice questions (MCQs), through relatively uncommon ones like the m-CEX, to entirely unheard-of ones like the script concordance test (SCT). Though no single tool can be labeled as complete, a combination can address most global competencies. For example, the OSCE, BPE, SP, DOPS, and rating-scale-based tools like the m-CEX test clinical skills very well, along with communication skills, professionalism, and the student's leadership qualities, as and when applicable. Tools stimulating reasoning, such as the SCT and case-based discussions (CBDs), may stimulate higher thinking and self-directed learning, thus subtly motivating students towards becoming life-long learners. Multi-source feedback (MSF), the mini peer assessment tool (mPAT), and patient surveys (PS) can be used for receiving feedback regarding professional behavior, leadership, and communication. Long and short cases can assess most global competencies very well as they are rather holistic: they require a strong knowledge base and involve assessment of clinical as well as communication skills, but in case of specific issues they may need to be supplemented as explained above. It may be mentioned that the psychometric characteristics of most tools depend on the way they are used, rather than being an innate property.[6]

Many of the common tools like MCQs, short answer questions (SAQs), essay-type questions (EAQs), OSCE, MSF, mPAT, standardized patient (SP), global rating, check-lists, key feature test (KFT) have been described earlier, along with their strengths and limitations.[7],[8] Some other tools are being described here briefly. Long case has been purposely included in the description because of certain issues around its use.

Script Concordance Test (SCT)

The script concordance test (SCT) is a relatively newer assessment tool designed to reflect students' competence in interpreting clinical data under ambiguous or uncertain conditions, and aims to simulate the authentic conditions of medical practice. SCTs comprise a sequence of short clinical scenarios/cases/vignettes, each followed by a set of questions having three parts. For each question, the first part ("If you were thinking of…") provides a hypothesis in the form of a diagnostic, therapeutic, prognostic, or bioethical consideration; the second part ("…and then you find…") presents additional information, such as a physical examination finding, a pre-existing condition, an imaging study, or a laboratory test result, that may (or may not) have an effect on the given option. The question is answered in the third part ("…this hypothesis becomes:"), which contains a 5-point Likert-type response scale (ranging from -2 to +2). Students are required to indicate on the Likert scale what they think would be the possible effect of the new information (part 2) on the proposed hypothesis (part 1).[9] The SCT format was developed to test clinical reasoning in uncertain situations and is based on "the principle that the multiple judgments made in these clinical reasoning processes can be probed and their concordance with those of a panel of reference experts can be measured".[10]
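SCT items are typically scored against the spread of the expert panel's answers rather than a single key. A minimal sketch of one widely described approach, aggregate scoring, in which credit for a response is proportional to the number of panelists who chose it (normalized to the modal panel answer), is given below; the panel data are hypothetical:

```python
from collections import Counter

# Sketch of aggregate SCT scoring (assumed here for illustration):
# a student's credit for a response equals the number of panelists who
# chose that response, divided by the count of the modal panel answer,
# so the modal answer earns full credit and unchosen answers earn zero.
def sct_credit(student_answer, panel_answers):
    counts = Counter(panel_answers)   # panel responses on the -2..+2 scale
    modal = max(counts.values())      # size of the largest agreement block
    return counts.get(student_answer, 0) / modal

# Hypothetical panel of 10 experts answering one SCT item:
panel = [+1, +1, +1, +1, +1, +2, +2, 0, 0, -1]
```

With this panel, choosing +1 earns full credit (5/5), +2 earns partial credit (2/5), and -2 earns none, so the score rewards concordance with the panel rather than a single "correct" answer.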

Short case

Short cases are used to assess clinical competence. Students are asked to perform a supervised specific physical examination of a real patient, and are then assessed on the examination technique, the ability to elicit physical signs and interpret their findings in a correct manner. In order to increase the sample size and reliability, several cases may be used in any one assessment.[11]

Long case

Traditionally, students are allotted a long case in which they elicit the history of and examine a real patient for an uninterrupted and unobserved period, ranging from 30 to 45 minutes. The students then present their findings and management plan to the examiners, who follow this up with an unstructured oral examination about the patient's problem and relevant topics. The long case has been in use in both summative as well as formative examinations on account of its perceived educational impact.[12]

Various modifications have been introduced to build validity and reliability into the long case format as an assessment tool, such as observing the students while taking the history and conducting the physical examination, training the examiners to structure the oral examination process, and increasing the number of cases seen by the student.[13] A more structured version of the unobserved long case, the objective structured long examination record (OSLER), was developed by Gleeson; it retains the format of the long case but adds direct observation of the student while interacting with the patient. The OSLER is a powerful tool for providing feedback and thus has tremendous potential to increase clinical competence.[14]

It may be pertinent to elaborate on the long case, which forms the mainstay of clinical assessment in India, as an assessment tool. For many years now, the long case has not found a mention in contemporary assessment literature and seems to have been written off,[15] for various reasons. Even add-ons like the OSLER did not improve its acceptance. To be fair, the long case resembles the actual clinical encounter much better than, say, the OSCE[4]; however, to achieve acceptable reliability for summative decisions, a minimum of 10 cases and 20 examiners may be needed.[16] We therefore need to make a deliberate effort to overcome some of its limitations and supplement it with other methods like the OSCE and m-CEX. Using some of the time to observe the student taking the history or performing the physical examination, something like an "extended" OSCE, can add to the assessment information. The MCI module on assessment has also made that suggestion.[17]

Chart Stimulated Recall Oral Examination (CSR)/Case-Based Discussion (CBD)

In a chart-stimulated recall (CSR) examination, students are assessed on clinical cases by a structured and standardized oral examination. The examiner asks the student questions relating to the care provided, probing for the justification behind the case work-up, diagnosis, interpretation of clinical findings, and case management plans. Each examination encounter is expected to last about 20 minutes, including 5 minutes of feedback. In some settings, CSRs are also called case-based discussions (CBDs). CSRs promote autonomy and self-directed learning as they follow a "one to one" discussion and reflection format.[18]

Direct observation of procedural skills (DOPS)

DOPS is a structured rating scale that ensures that students are given specific feedback, based on direct observation, to improve their procedural skills. Commonly performed procedures for which students are expected to demonstrate competence, including endotracheal intubation, nasogastric tube insertion, administration of intravenous medication, venepuncture, peripheral venous cannulation, and arterial blood sampling, can be assessed using DOPS. These are best assessed by multiple clinicians on multiple occasions throughout the training period.[19]

Mini Clinical Evaluation Exercises (m-CEX)

The mini clinical evaluation exercise (m-CEX) requires students to interact with patients in authentic workplace-based patient encounters while being observed by faculty members. Students perform clinical activities, such as eliciting a focused history or performing a physical examination, after which they summarize the patient encounter along with the next steps (e.g., a clinical diagnosis and a management plan). Each aspect of the clinical encounter is scored by a faculty member using a 9-point rating scale.[20] The m-CEX is mostly used for formative purposes.[21]

Professionalism mini clinical evaluation exercise (P-MEX)

P-MEX is a modified version of m-CEX, specially assessing professionalism. It is a structured observation tool containing 24 items, which are rated on a 4-point scale. Multiple observations are carried out and results are discussed with the student. Its reliability and validity have been reported as acceptable.[22]

Clinical Encounter Cards (CEC)

The CEC system is quite similar to the mini-CEX. It assesses and scores dimensions of observed clinical practice such as history-taking, physical examination, professional behavior, technical skill, case presentation, problem formulation (diagnosis), and problem-solving (management). Each dimension is scored using a 6-point rating scale. This tool has been shown to be a feasible, valid, and reliable measure of clinical competence, provided enough encounters (approximately 8) are assessed.[23]

Clinical Work Sampling (CWS)

This assessment tool is also based on direct observation of clinical performance in the workplace and requires collection of data relating to specific patient encounters for a number of different domains either at the time of admission (admission rating form) or during the hospital stay (ward rating form). These forms are filled by faculty members who directly observe student performance. Students are also assessed by nursing staff and the patients in their care. All rating forms use a 5-point rating scale ranging from unsatisfactory to excellent performance.[24]

Blinded Patient Encounters (BPE)

This formative assessment tool, so called because the patient is unknown (blinded) to the student, forms part of undergraduate bedside teaching sessions. Students in small groups (4-5) participate in a bedside teaching session of direct observation, in which one of them performs a focused interview or physical examination as instructed by the tutor. Thereafter, the student is asked to provide a diagnosis and a differential diagnosis based on the clinical findings. Once the presentation is over, the tutor lays emphasis on demonstrating the relevant clinical features of the case and discussing the management of the patient's presenting clinical condition. The session concludes with feedback on the student's performance.[25]

Portfolios

A portfolio is a collection of student work which provides evidence of the achievement of learning that has taken place over time. It includes documentation of learning and progression, and its key feature is reflection on these learning experiences. Portfolio documentation may include case reports; records of practical procedures undertaken; audio/video recordings of consultations; project reports; ethical dilemmas encountered and their handling; learning plans; and written reflections about what has been learnt from the evidence provided.[26] However, portfolios may not always be considered very practical due to the time and intense effort involved in their compilation and evaluation.[27]

Patient surveys

Patient surveys are used to assess satisfaction among patients with regard to hospital or clinic visits and mostly include questions about the care provided by the physician. The questions are relevant to physician care and typically pertain to time given to the patient, knowledge, and skill-based competence of the physician, professional attributes of the physician and the overall quality of care.[28]

Expert subjective judgment is gradually making a comeback for the assessment of clinical competence, especially for CBME,[29] and many of the tools mentioned above use the power of expert subjectivity. As argued earlier, subjectivity does not mean arbitrariness and can give results which are as reliable as any other method, with the advantage that it allows assessment of many traits which were earlier excluded from assessment due to the non-availability of suitable objective assessment tools.


 :: Utility of Assessment


van der Vleuten described the utility of an assessment as a notional product of five attributes, viz. reliability, validity, feasibility, acceptability, and educational impact, implying that the limitations of one attribute can be compensated for by another, thereby improving the overall utility of the assessment tool.[5] As such, assessment requires many compromises regarding validity and reliability, many of which happen in an unplanned manner. Using the concept of utility, a meaningful compromise can be made.[30] This also makes it possible to use tools with supposedly lower reliability (e.g., subjective assessment of, say, professionalism) but with high educational impact. No assessment tool is inherently good or bad; it is the way the assessment tool is utilized which makes all the difference. The teacher's proficiency at using the right tool in the right context, and deriving the right inferences from it, determines the validity and simultaneously affects the reliability of an assessment tool.[31]
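The notional product can be written compactly, with each symbol standing for one of the five attributes named above:

```latex
U = R \times V \times F \times A \times E
```

where $U$ is utility, $R$ reliability, $V$ validity, $F$ feasibility, $A$ acceptability, and $E$ educational impact. The multiplicative form is a heuristic, not an arithmetic claim: an attribute close to zero drags the whole utility down regardless of strengths elsewhere, while the relative weight given to each attribute remains contextual, which is what permits the deliberate compromises described above.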

Some assessment tools, like CSR/CBD, DOPS, CEC, CWS, m-CEX, portfolios, and mPAT, have high educational impact. This may be attributed, in variable proportions, to some inherent and useful properties such as specificity; standard assessment criteria based on validated performance standards; the advantage of testing multiple competencies; the opportunity to seek feedback from several experts/assessors; direct observation; multiple feedback opportunities; built-in feedback; reflections; documentation; and real patient encounters (as in CWS, CEC, mini-CEX, CSR/CBD).

Many of these tools with high educational impact involve a purposive fragmentation of a single comprehensive task/competency. This enables identification of the nadir in a stepwise fashion. Whether it is the 9-point rating scale of the BPE and m-CEX or the 6-point rating scale of the CEC, multiple encounters act as essential checkpoints in an apparently continuous process of learning and assessment.


 :: Toolbox for Global Competencies


As stated earlier, the toolbox provides general guidance about the availability and possible use of a tool; it is not a compendium on assessment tools. The validity of interpretations will largely depend on the way a tool is used. It may also be noted that many of these tools are easier to use for formative and internal assessments (e.g., MSF, mPAT, portfolios, and reflections) than for summative ones. For ease of presentation, the toolbox table for global competencies, as per the roles assigned to the IMG, has been bifurcated into the following sections.

Though the ratings which can be assigned to each attribute of an assessment tool are relative and contextual, we have tried to align the assessment tools detailed above to Miller's levels of assessment, as well as to assign a generalized rating to the assessment attributes for each tool [Table 2], based upon the available literature.[30],[32],[33] However, we hasten to add that the ratings in [Table 3], [Table 4], [Table 5], [Table 6], and [Table 7] are in effect prioritizations and reflect a nominal rather than an ordinal perspective.
Table 2: Utility of assessment tools[2]

Table 3: IMG role as clinician: Suggested prioritization of the assessment tools for related global competencies

Table 4: IMG role as leader: Suggested prioritization of the assessment tools for related global competencies

Table 5: IMG role as communicator: Suggested prioritization of the assessment tools for related global competencies

Table 6: IMG role as life-long learner: Suggested prioritization of the assessment tools for related global competencies

Table 7: IMG role as professional: Suggested prioritization of the assessment tools for related global competencies



Role as Clinician—Assessment of related global competencies

As documented, the IMG must be a "Clinician, who understands and provides preventive, promotive, curative, palliative and holistic care with compassion."[5] The suggested tools for the assessment of global competencies related to the role as clinician have been listed in [Table 3].

Role as leader—Assessment of related global competencies

The IMG must be a "Leader and member of the health care team and system," as documented in the amended MCI Graduate Medical Education Regulations.[5] The suggested tools for the assessment of global competencies related to the role as leader have been listed in [Table 4].

Role as Communicator—Assessment of related global competencies

Communication skills, both verbal and non-verbal, are very important for a medical professional for developing rapport with the patient. Communication skills have been documented to improve patient compliance with treatment.[34] Studies have shown that effective doctor-patient communication results in better health outcomes.[35] Good communication skills make the patient an informed partner in the management plan and decrease the chances of subsequent litigation against doctors.[36] As per the MCI document, an IMG should be a "Communicator with patients, families, colleagues and community".[5] For the assessment of global competencies related to the IMG's role as a communicator, the suggested tools have been documented in [Table 5].

Role as life-long learner—Assessment of related global competencies

Life-long learning refers to learning practiced by an individual for the whole of life; it is flexible and accessible at all times.[37] In the medical profession, new evidence is added to the existing literature every now and then, new diagnostic tests are explored, and new treatment guidelines are published, and a medical graduate needs to stay updated with all these developments. MCI has documented that the IMG should be a "Life-long learner committed to continuous improvement of skills and knowledge".[5] For the assessment of global competencies related to the role as life-long learner, the suggested tools have been documented in [Table 6].

Role as professional—Assessment of related global competencies

Professionalism is a habitual construct which includes key beliefs and virtues that build the trust of the public in doctors.[38] The American Board of Medical Specialties asserts that "Medical professionalism is a (normative) belief system about how best to organize and deliver health care, which calls on group members to jointly declare (profess) what the public and individual patients can expect regarding shared competency standards and ethical values and to implement trustworthy means to ensure that all medical professionals live up to these promises".[39] The IMG should be a "Professional who is committed to excellence, is ethical, responsive and accountable to patients, community and the profession."[5] For the assessment of global competencies related to the role as professional, the suggested tools have been documented in [Table 7].


 :: Need for Feedback and its Utility


The value of educational feedback to students in their professional development is unequivocal.[40] Feedback is a commitment towards helping the professional development of the trainee by directly observing academic performance, and can be a useful aid in picking up borderline performers. Hence, the tool alone is neither the sole culprit for poor performance standards nor the sole contender for good performance. Feedback, the less noticeable accomplice of assessment, also needs standardization before the educational impact of assessment tools can be compared in its true, undiluted essence.

Feedback which is time-appropriate, specific, and synchronized with the learning cycle helps correct fluid cognitive processes while they are being consolidated in the mind.[40] However, it will be most effective when embedded into the blueprint of formative assessments, where assessments are systematically and intentionally followed by feedback. Since specificity, direct observation, multiple assessors, and multiple opportunities are built into assessment tools like CWS, CEC, m-CEX, and CSR/CBD, these become more compatible with the ideology of feedback, and the two go hand-in-hand.

Since learning difficulties can be student-specific, tools with multiple encounters or checkpoints help identify specific learning difficulties and allow feedback to be tailored accordingly. The potential of tools that provide opportunities for formative assessment and feedback needs to be tapped for students' acquisition of competencies.


 :: Conclusion


Any curriculum is a living document, amenable to development over time. The new CBME curriculum adopted by the regulatory body for undergraduate training in India is still in its infancy. Theoretical, empirical, pragmatic, and experiential evidence of all kinds will be needed, in a collaborative way, to develop it in the right direction. The regulatory body is releasing modules to enrich existing knowledge about the implementation of such a program. The tools detailed in this paper are one step further toward supplementing the existing guidelines on assessment in this new competency-based curriculum.

We have developed this theory-driven toolbox on a best-match basis. The actual use of these tools, and the benefit derived from them, will depend on multiple factors: the adequacy and representativeness of sampling will be a major influence, as will the assessment literacy, experience, and expertise of the assessor. And this is not unexpected!

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
 :: References

1. Medical Council of India. The regulations on graduate medical education, 1997 – Part II; 2019. Available from: https://www.nmc.org.in/ActivitiWebClient/open/getDocument?path=/Documents/Public/Portal/Gazette/GME-06.11.2019.pdf. [Last accessed on 2021 Feb 28].
2. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA 2002;287:226-35.
3. Holmboe ES, Iobst WF. Assessment Guidebook. Accreditation Council for Graduate Medical Education, 2020. Available from: https://www.acgme.org/Portals/0/PDFs/Milestones/Guidebooks/AssessmentGuidebook.pdf?ver=2020-11-18-155141-527. [Last accessed on 2021 Feb 28].
4. Norcini JJ. The validity of long cases. Med Educ 2001;35:720-1.
5. van der Vleuten CP. The assessment of professional competence: Developments, research and practical implications. Adv Health Sci Educ Theory Pract 1996;1:41-67.
6. Verma M, Chhatwal J, Singh T. Reliability of essay type questions – effect of structuring. Assess Educ Princ Policy Pract 1997;4:265-70.
7. Gupta S, Mahajan R, Singh T. Assessment of clinical competence: Supplementing existing tools. J Res Med Educ Ethics 2017;7:74-84.
8. Gupta P, Dewan P, Singh T. Objective structured clinical examination (OSCE) revisited. Indian Pediatr 2010;47:911-20.
9. Kaur M, Singla S, Mahajan R. Script concordance test in Pharmacology: Maiden experience from a medical school in India. J Adv Med Educ Prof 2020;8:115-20.
10. Lubarsky S, Charlin B, Cook DA, Chalk C, van der Vleuten CP. Script concordance testing: A review of published validity evidence. Med Educ 2011;45:329-38.
11. Hijazi Z, Premadasa IG, Moussa MA. Performance of students in the final examination in paediatrics: Importance of the “short cases.” Arch Dis Child 2002;86:57-8.
12. Wass V, van der Vleuten CP. The long case. Med Educ 2004;38:1176-80.
13. Ponnamperuma GG, Karunathilake IM, McAleer S, Davis MH. The long case and its modifications: A literature review. Med Educ 2009;43:936-41.
14. Gleeson F. AMEE Medical Education Guide No. 9. Assessment of clinical competence using the objective structured long examination record (OSLER). Med Teach 1997;19:7-14.
15. Norcini JJ. The death of the long case? BMJ 2002;324:408-9.
16. Wass V, Jones R, van der Vleuten CP. Standardised or real patients to test clinical competence? The long case revisited. Med Educ 2001;35:321-5.
17. Medical Council of India. Assessment Module for Undergraduate Medical Education Training Program, 2019. p. 17. Available from: https://www.nmc.org.in/wp-content/uploads/2020/08/Module_Competence_based_02.09.2019.pdf. [Last accessed on 2021 Mar 03].
18. Sinnott C, Kelly MA, Bradley CP. A scoping review of the potential for chart stimulated recall as a clinical research method. BMC Health Serv Res 2017;17:583.
19. Wragg A, Wade W, Fuller G, Cowan G, Mills P. Assessing the performance of specialist registrars. Clin Med 2003;3:131-4.
20. Norcini JJ, Blank LL, Duffy FD, Fortna G. The mini-CEX: A method for assessing clinical skills. Ann Intern Med 2003;138:476-81.
21. Singh T, Sharma M. Mini-clinical examination (CEX) as a tool for formative assessment. Natl Med J India 2010;23:100-2.
22. Cruess R, McIlroy JH, Cruess S, Ginsburg S, Steinert Y. The professionalism mini-evaluation exercise: A preliminary investigation. Acad Med 2006;81(10 Suppl):S74-8.
23. Paukert JL, Richards ML, Olney C. An encounter card system for increasing feedback to students. Am J Surg 2002;183:300-4.
24. Turnbull J, MacFadyen J, van Barneveld C, Norman G. Clinical work sampling: A new approach to the problem of in-training evaluation. J Gen Intern Med 2000;15:556-61.
25. Burch VC, Seggie JL, Gary NE. Formative assessment promotes learning in undergraduate clinical clerkships. S Afr Med J 2006;96:430-3.
26. Rees C. The use (and abuse) of the term 'portfolio'. Med Educ 2005;39:436-7.
27. Friedman Ben David M, Davis MH, Harden RM, Howie PW, Ker J, Pippard MJ. AMEE Medical Education Guide No. 24: Portfolios as a method of student assessment. Med Teach 2001;23:535-51.
28. Weaver MJ, Ow CL, Walker DJ, Degenhardt EF. A questionnaire for patients' evaluations of their physicians' humanistic behaviour. J Gen Intern Med 1993;8:135-43.
29. Virk A, Joshi A, Mahajan R, Singh T. The power of subjectivity in competency-based assessment. J Postgrad Med 2020;66:200-5.
30. van der Vleuten CP, Schuwirth LW. Assessing professional competence: From methods to programmes. Med Educ 2005;39:309-17.
31. Sood R, Singh T. Assessment in medical education: Evolving perspectives and contemporary trends. Natl Med J India 2012;25:357-64.
32. Alves de Lima A, Conde D, Costabel J, Corso J, van der Vleuten C. A laboratory study on the reliability estimations of the mini-CEX. Adv Health Sci Educ Theory Pract 2013;18:5-13.
33. Wilkinson JR, Crossley JG, Wragg A, Mills P, Cowan G, Wade W. Implementing workplace based assessment across the medical specialties in the United Kingdom. Med Educ 2008;42:364-73.
34. Ha JF, Longnecker N. Doctor-patient communication: A review. Ochsner J 2010;10:38-43.
35. Stewart MA. Effective physician-patient communication and health outcomes: A review. Can Med Assoc J 1995;152:1423-33.
36. Huntington B, Kuhn N. Communication gaffes: A root cause of malpractice claims. Proc (Bayl Univ Med Cent) 2003;16:157-61.
37. Mahajan R, Badyal DK, Gupta P, Singh T. Cultivating life-long learning skills during graduate medical training. Indian Pediatr 2016;53:797-804.
38. Mahajan R, Aruldhas BW, Sharma M, Badyal DK, Singh T. Professionalism and ethics: A proposed curriculum for undergraduates. Int J App Basic Med Res 2016;6:157-63.
39. Wynia MK, Papadakis MA, Sullivan WM, Hafferty FW. More than a list of values and desired behaviors: A foundational understanding of medical professionalism. Acad Med 2014;89:712-4.
40. Hattie J, Timperley H. The power of feedback. Rev Educ Res 2007;77:81-112.


    Figures

  [Figure 1], [Figure 2]
 
 
    Tables

  [Table 1], [Table 2], [Table 3], [Table 4], [Table 5], [Table 6], [Table 7]



 

2004 - Journal of Postgraduate Medicine
Official Publication of the Staff Society of the Seth GS Medical College and KEM Hospital, Mumbai, India
Published by Wolters Kluwer - Medknow