Journal of Postgraduate Medicine
EDUCATION FORUM
Year : 2020  |  Volume : 66  |  Issue : 4  |  Page : 200-205

The power of subjectivity in competency-based assessment


A Virk1, A Joshi2, R Mahajan3, T Singh4

1 Adesh Medical College & Hospital, Shahabad (M), Haryana, India
2 Pramukhswami Medical College, Karamsad, Gujarat, India
3 Adesh Institute of Medical Sciences & Research, Bathinda, Punjab, India
4 SGRD Institute of Medical Sciences and Research, Amritsar, Punjab, India

Date of Submission: 25-May-2020
Date of Decision: 20-Jul-2020
Date of Acceptance: 08-Aug-2020
Date of Web Publication: 07-Oct-2020

Correspondence Address:
T Singh
SGRD Institute of Medical Sciences and Research, Amritsar, Punjab
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpgm.JPGM_591_20



 :: Abstract 


With the introduction of the competency-based undergraduate curriculum in India, a paradigm shift in assessment methods and tools is the need of the hour. Competencies are complex combinations of various attributes, many of which are not assessable by objective methods. Assessment of the affective and communication domains has long been neglected for want of objective methods. Areas such as professionalism, ethics, altruism, and communication, so vital to the Indian Medical Graduate, can be assessed longitudinally only by subjective means. Though subjectivity has often been questioned as being biased, it has been shown time and again that subjective assessment in expert hands gives results comparable to those of objective assessment. By insisting on objectivity, we may compromise the validity of the assessment and also deprive students of enriched subjective feedback and judgement. This review highlights the importance of subjective assessment in competency-based assessment and the ways and means of improving its rigor, with particular emphasis on the development and use of rubrics.


Keywords: Competency-Based Medical Education, objectivity, reliability, rubrics, validity


How to cite this article:
Virk A, Joshi A, Mahajan R, Singh T. The power of subjectivity in competency-based assessment. J Postgrad Med 2020;66:200-5

How to cite this URL:
Virk A, Joshi A, Mahajan R, Singh T. The power of subjectivity in competency-based assessment. J Postgrad Med [serial online] 2020 [cited 2023 Sep 26];66:200-5. Available from: https://www.jpgmonline.com/text.asp?2020/66/4/200/297502





 :: Introduction


It is generally acknowledged that assessment drives learning; however, assessment can have both intended and unintended consequences.[1] What and how students learn depends largely on how they think they will be assessed; hence, assessment should become a strategic tool for enhancing teaching and learning in higher education.[2]

Competency-Based Medical Education (CBME) entails the attainment of observable abilities by students in a learner-centered manner, with emphasis on outcomes considered relevant to the daily practice of medicine. The much-awaited implementation of the competency-based Indian undergraduate medical curriculum commenced with the 2019 session. This paradigm change has shifted the focus towards the application of knowledge in real-life situations. The adoption of CBME reflects a trend in medical education towards framing assessment around general competencies, and thus a larger, more comprehensive idea of physician competence as seen through the eyes of society.[3]

This shift towards CBME has challenged medical educators to develop new methods of teaching and assessing clinical and professional competence.[4] CBME has been implemented in India at a stage when medical educationists had just begun to understand the use of objectivity and standardized assessment methods. This pursuit of objectivity was accompanied by a notion that every assessment must be objective to be of any value. The tilt towards objectivity in medical education was largely supported by the notion that professional roles can be broken down and operationalized into individual elements of defined knowledge or skills which, even when acquired and assessed independently, can be aggregated into an assessment of overall professional competence.[5]

In CBME, the learning process is the most crucial focus. A focused approach of reflection on and exploration of the learning process, supported by feedback, can help develop ability in the sense of performance. Competency-based assessment should also aim to make summative decisions on competence, or the lack of it, for assessment of learning; this mandates tapping the full potential of "expert subjective judgment" for learners' longitudinal and monitored development.

This article attempts to purposefully and critically discuss the role of subjectivity and expert judgement as an indispensable ingredient of assessment, in the context of the recent implementation of CBME in Indian undergraduate medical education.


 :: Does Objectivity Really Exist?


One wonders whether standardized assessment methods can guarantee similar performance in real-life situations, with all the variability of actual clinical practice. Medical care demands that medical graduates possess the ability to integrate different competencies for optimal patient care.[6] Can competence as a whole be considered a stacking of individually completed tasks of knowledge, skills, attitudes, and communication?

While many clinical tasks can be broken down into a sequence of standardized steps, it is not always possible to break down complex skills such as team collaboration, professionalism, and communication, which instead undergo a process of longitudinal development.[7] These skills are best assessed through direct observation in real-life settings, under non-standardized (or actual) conditions, in which professional, expert subjective judgement becomes imperative.[8] Moreover, once these skills are internalized and the process becomes automated, graduates actually learn to skip many steps during diagnosis or treatment, rendering checklist-based objective assessment redundant.

There is nothing completely objective in any assessment; all assessments are guided by the attitudes, values, orientation, and prior experience of the assessor. The checklists drafted for objectively oriented assessments are themselves largely shaped by subjective opinions. While marking checklists for steps performed or tasks accomplished, nothing stops an assessor who has decided beforehand to be "biased" from marking wrongly. Lastly, many steps mentioned in checklists require the assessor to form an expert subjective opinion before marking.

The process of blueprinting, though introduced to improve the validity of an assessment, again involves expert subjective judgement, right from assigning values of clinical relevance, impact, or frequency, to drafting items, keys, and distractors.[9] Standard setting itself is purely an expert subjective decision. Lowering the criterion when not enough candidates meet the cut-off, as in postgraduate entrance examinations, is not based on any objective method; it is a purely subjective decision. What we actually do is make subjective decisions and then try to measure them objectively, a process van der Vleuten has named objectification, and one that does not give better results than expert subjective opinion.[10]

Performer variability and heterogeneity need to be given due space when assessing clinical competence, and an inflexible, objective assessment is therefore bound to lose its utility in competency-based assessment. Accordingly, medical educationists have justifiably cautioned against objectifying competency-based assessment.[9],[10],[11] This leaves us with the option of applying expert subjective judgement in a competency-based curriculum. Before building our case for the importance and power of subjectivity in competency-based assessment, let us briefly discuss the points that differentiate objective from subjective assessment.


 :: Objective Versus Subjective Assessment


Objective assessment implies the use of information collected by measuring, observing, and examining facts, while subjective assessment implies information based primarily on personal views, opinions, or value judgements. Expert subjective opinion can rate performance at higher levels of simulation and is often flexible, involving less time, effort, and cost. Such expert judgement can add a consequential flavor to student assessment and is indispensable for effective assessment. The most important point in favor of subjectivity remains that it offers immense opportunity for feedback to students.[12]

Objective assessments generally use a "norm-referenced" approach without any specified criteria, while in subjective assessment the performance of students is generally assessed against a pre-determined criterion, following a "criterion-referenced" approach. Objective assessment provides an opportunity for wide sampling of the curriculum in a single assessment within a small timeframe. In contrast, expert subjective assessment is often based on rating overall performance at a higher level of simulation over an extended period of observation.[12] In a sense, objective assessment is comparable to a "cross-sectional" study, while subjective assessment is "longitudinal" in nature.
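To make the distinction concrete, the sketch below (Python, with invented scores; a minimal illustration, not from the article) contrasts the two decision rules: under norm-referencing, a pass depends on standing relative to peers, while under criterion-referencing it depends only on a pre-set standard.

```python
# Sketch: norm-referenced vs criterion-referenced pass decisions (invented data).

scores = {"A": 72, "B": 58, "C": 81, "D": 64, "E": 49}

# Norm-referenced: pass depends on standing relative to peers
# (e.g., the top 60% pass, regardless of absolute performance).
ranked = sorted(scores, key=scores.get, reverse=True)
norm_pass = set(ranked[: int(0.6 * len(ranked))])

# Criterion-referenced: pass depends on a pre-determined standard
# (e.g., 60 marks), regardless of how peers perform.
CRITERION = 60
criterion_pass = {s for s, mark in scores.items() if mark >= CRITERION}

print("norm-referenced pass:     ", sorted(norm_pass))
print("criterion-referenced pass:", sorted(criterion_pass))
```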

Objective assessment is often overvalued. The notion of objectivity, even if not entirely wrong, is not flawless. Dividing a whole activity into smaller steps, assessing each step objectively, and marking the assessed as competent may be a workable approach for some tasks, but not for all. Consider disassembling a bicycle and assembling it again: you can do it even after stripping the bicycle to the last nut and bolt; but you cannot disassemble a frog into its component parts and assemble it back into a frog![13] Clinical competence involves many tasks that are more than a mere stacking of knowledge and skills.

On the other hand, subjective assessment is often undervalued with the suggestion that it is based upon the whims and fancies of the assessor. This is not true. For any given task, the internal consistency of subjective ratings may be low, but subjective ratings show a higher consistency across tasks than objective assessment does.[14] An additional advantage is their continuous, longitudinal nature, which provides immense feedback opportunities. The utility of observation-based, considered, expert subjective judgement in competency-based assessment can neither be ignored nor challenged.
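As a small illustration of how such consistency claims are quantified, the sketch below (Python with NumPy; the rating matrix is invented) computes Cronbach's alpha, a standard index of internal consistency, over global ratings given across several tasks.

```python
# Minimal sketch: internal consistency (Cronbach's alpha) of global ratings.
# Rows = students, columns = tasks rated by expert judgement (invented data).
import numpy as np

ratings = np.array([
    [4, 5, 4, 4],   # student 1, global ratings on 4 tasks (1-5 scale)
    [3, 3, 4, 3],   # student 2
    [5, 4, 5, 5],   # student 3
    [2, 3, 2, 3],   # student 4
])

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: consistency of ratings across tasks (columns)."""
    k = scores.shape[1]                          # number of tasks
    task_var = scores.var(axis=0, ddof=1)        # variance of each task's ratings
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' totals
    return (k / (k - 1)) * (1 - task_var.sum() / total_var)

print(f"alpha = {cronbach_alpha(ratings):.2f}")  # higher = more consistent
```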


 :: Expert Subjective Judgement in Competency-Based Assessment


Some of the inherent characteristics of competency-based assessment are its continuous nature, its basis in direct observation over a period of time, its criterion-referenced approach, and its plentiful feedback opportunities for midcourse correction. Let us revisit the characteristics of expert subjective assessment delineated in the sections above: it is criterion-referenced, and opinions are built over a period of supervision and observation, which also gives us many formative assessment opportunities [Figure 1]. "Subjective assessment" and "competency-based assessment" thus synchronize well in their approach.
Figure 1: Identical characteristics of competency based and expert subjective assessment



Competency-based assessment implies that domains which have so far not been assessed in medical graduates for want of objective methods, such as empathy, professionalism, ethics, and other soft skills, should also be assessed. Expert subjective judgement with global ratings provides an opportunity to assess them too. In fact, subjective assessment is the key to assessing professional competence, particularly in areas such as teamwork, professionalism, communication, criticality, reflexivity, and ethics, and cannot simply be discarded by tagging it as "biased". These values underpin the intrinsic nature of the medical profession and demand subjective judgement that goes beyond technical proficiency.[12]

The use of multiple assessors in subjective assessment provides judgement as well as feedback from multiple experts. This not only makes the feedback richer but also increases the reliability of the subjective assessment.[9] Subjectivity also provides ample room for workplace-based assessment, which in turn empowers competency-based assessment.

The areas warranting the use of subjective assessment, and its potential utilization in student assessment and in monitoring longitudinal professional development, are detailed in Box 1 (the list is not exhaustive).[4]

Box 1: Potential utilization of subjective assessment

  • To make available learning opportunities which may serve as potential assessment opportunities, especially for competence dimensions covering emotions and values
  • To ensure that a key feature of assessment is meaningful feedback to promote attainment of predefined professional competencies
  • To provide students with a chance to appreciate their learning and longitudinal competency development
  • To encourage reflective practice and self-directed learning activities
  • To empower faculty with capacity to utilize assessment opportunities and enable them to make fair, justifiable, clear, learner-centered decisions.


Subjective assessment ratings are easy to build, are not resource-intensive, and cost much less than objective assessment. Medical education, at both undergraduate and postgraduate levels, can be enriched by the use of subjective assessment, particularly after the introduction of the competency-based curriculum. Though the power of subjectivity in competency-based assessment cannot be denied, the real challenge lies in improving its rigor and acceptability.


 :: Ways to Build Rigor in Subjective Assessment


Miller's pyramid provides a practical framework for understanding assessment at different levels of professional competence. Assessment at the lower levels (predominantly directed at knowledge, application of knowledge, and demonstration of skills) is more or less accepted as "established," while assessment at the highest or "does" level (predominantly directed towards direct observation in the workplace) is still "evolving".

Assessment of competence is based, to a large extent, upon expert observation and subjective judgement. The value of subjective assessment using expert judgement can be improved by including multiple contexts and assessors, because many subjective judgements help in drawing a firm inference from the aggregated results; by triangulation and saturation of information, which provide direction for collective decision making; and by using bias-reduction techniques that throw light on the process of decision making.[15] Some other modalities that can help build rigor in subjective assessment are listed in Box 2.

Box 2: Modalities to build rigor in subjective assessment

  • Adopt Workplace Based Assessment (WPBA) for assessment of competencies like teamwork, organizational skills, and professional behavior
  • Incorporate a system facilitating Multisource Feedback (MSF)—feedback on routine performance from experts, peers, other health professionals, and patients
  • Promote meaningful student-teacher interactions through periodic small group sessions and periodic mentor-mentee meets
  • Build trust between teacher and student
  • Introduce social interactions to serve as framework for fostering self-directed learning.


Workplace-Based Assessment (WPBA) refers to a group of assessment techniques that assess students' performance in clinical settings.[16] The sheer strength of WPBA is its observation-based formative potential, allowing the learner to navigate her learning towards the desired learning objectives.[17] WPBA tools involve subjective assessment in one form or another, are reliable, and owing to these features enjoy high acceptability.

Interaction with other members of the health team is also best assessed in the workplace. Such interaction fosters development of the difficult-to-define, domain-independent competencies, such as professionalism and communication skills, which gain much from direct observation and enriched feedback. The workplace also offers opportunities to observe and assess students on a day-to-day basis, with fewer chances for students to mask their behavior. Compared to traditional assessments, which are often opportunistic with non-representative sampling, direct observation allows better sampling of the work that physicians actually do, rather than assessing them on patients whom they are unlikely to handle professionally.

Trust and confidentiality are vital to the success of multisource feedback. Feedback is based on assessment, and student-teacher interaction promotes effective feedback. Studies have established that narrative, descriptive, and linguistic information is often much richer and more appreciated by learners.[18] Where numeric scores may fail to divulge what the learner actually did and what she should do to improve, qualitatively meaningful feedback can strengthen the process of assessment manifold.

We need to think carefully about the ways in which subjective judgements should be secured, analyzed, and aggregated. Educators must also search for ways to make subjective data comparable between experts, or against a standard, for the purpose of high-stakes decision making. One way is to design rubrics for such expert judgement-based assessments.
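As a minimal sketch of what securing, analyzing, and aggregating judgements might look like in practice (Python; the ratings and the disagreement threshold are hypothetical, not from the article), one can summarize each expert's global rating and flag wide disagreement for committee discussion rather than averaging it away:

```python
# Sketch: aggregating multiple experts' global ratings on one learner.
# Divergent judgements are flagged for group discussion, not silently averaged.
from statistics import mean, stdev

ratings = {"assessor_1": 4, "assessor_2": 5, "assessor_3": 2}  # invented, 1-5 scale

values = list(ratings.values())
summary = {
    "mean": round(mean(values), 2),
    "spread": round(stdev(values), 2),
    "n_assessors": len(values),
}

DISAGREEMENT_THRESHOLD = 1.0  # hypothetical cut-off for flagging divergence
if summary["spread"] > DISAGREEMENT_THRESHOLD:
    summary["action"] = "refer to assessment committee for discussion"
else:
    summary["action"] = "accept aggregated judgement"

print(summary)
```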


 :: What Is a Rubric?


In graduate medical education, global ratings of resident physicians by faculty are one of the most widely used methods of assessment. Yet research on such rating forms shows wide variability in validity and reliability. Likert-type rating scales consisting of numeric ratings accompanied by qualitative labels, such as competent or not competent, often yield scores that are subjectively derived and of limited value in formative assessment, because they lack detailed performance expectations and behavioral descriptions for each domain.[19],[20]

A rubric, with its detailed qualitative description of each rating, solves this problem. A rubric is a "performance standard" for a student population: a scoring tool that contains criteria for performance, with descriptions of the levels of performance, for use in performance assessments. Performance tests are generally used to determine whether a learner has mastered specific skills, with the instructor typically making inferences about the level to which each skill has been mastered. Such learner-centric assessment tools are meant to augment performance while enriching education through experience, providing a constant review of the results achieved vis-a-vis the outcomes desired.

The rubric serves as a tool to measure the equivalence, educational effect, catalytic effect, validity evidence, and acceptability of an assessment, along with importance weightings for each item.


 :: What, Why and How of Rubrics in Assessment


Multi-faceted qualitative rubric assessments work best for assessing various professional competencies. Competency gaps have been documented at every transition, from undergraduate to graduate to postgraduate environments; as a result, assessment based on day-to-day activity cannot be ignored. Assessment needs to be part and parcel of instruction rather than an appendage to it.[21] The main issue is whether standardization and objectivity of evaluation can be reliably maintained in a complex, simulated, clinically relevant, and contextually appropriate setting.[22]

Using rubrics in day-to-day assessments can help extract pointed information about student understanding in the workplace. Rubrics emphasize the use of experts to evaluate performance on the complex, multi-faceted characteristics of the tasks undertaken.[22]

Demonstration of sound clinical reasoning and reflection, communication, consulting skills, good teamwork, and continuing development are some of the important attributes of professional competence that cannot be adequately captured through objective tools or checklists alone. The evidence set appropriate for demonstrating competence therefore needs to be broader and qualitatively far more developed than might be supposed.[23]

An outcome-based assessment rubric is a novel, systematic instrument for documenting improvement in clinical learning.[24] Clinical evaluation remains challenging even for the most seasoned faculty, and the rubric is one tool that can provide a learner-centred assessment approach focused on encouraging behavioural change in learners, besides improving the value and power of subjective assessment.

Evidence suggests that focusing on learner-centric assessments tends to specifically help low-achieving students, thereby improving overall learning. The literature suggests that rubrics are being used widely in the field of medicine, if not yet by a large number of instructors.[25] The rubric is an effective method for formalizing subjective assessment and making it more reliable across different assessors.[22],[24] Educators from different parts of the world support the use of rubrics as a tool for subjective assessment of clinical skills, clinical reasoning, performance-based assessment, surgical procedures, alignment of entrustable professional activities, surgical competency, technical skills for uterine compression sutures, reflective writing, critical thinking, communication skills, and interprofessional skills.

An elaboration of how one can use a rubric in a clinical setting can be retrieved from https://medschool.ucla.edu/workfiles/site-current/policies/ClinicalGradingRubric201819.pdf. There, the rubric grid covers a range of professional competencies, such as history-taking skills, physical examination, communication skills, and professionalism and ethics, described as individual criteria. Each criterion has a specific significance and a description for each level of achievement, helping learners be more pragmatic about their professional skills. For example, in evaluating competency in history-taking skills, level 1 indicates that the trainee often misses key information; level 2, that the learner is able to gather a complete medical history; level 3, that the learner consistently gathers a complete and accurate history; and level 4, that the learner excels in gathering a complete, accurate, and relevant history. A description appears under each level for easy comprehension.
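Conceptually, such a rubric grid is a lookup from criterion and level to a behavioural descriptor. The sketch below (Python) encodes the history-taking criterion as a plain data structure; the descriptors are paraphrased from the example above, while the structure and function names are illustrative assumptions rather than anything prescribed by the article or the UCLA rubric.

```python
# Sketch: a rubric grid as a (criterion -> level -> descriptor) lookup.
# Levels and descriptors paraphrased from the history-taking example above.

rubric = {
    "history_taking": {
        1: "often misses key information",
        2: "gathers a complete medical history",
        3: "consistently gathers a complete and accurate history",
        4: "excels in gathering a complete, accurate and relevant history",
    },
    # further criteria (physical examination, communication, ...) go here
}

def describe(criterion: str, level: int) -> str:
    """Return the behavioural descriptor behind a numeric rating."""
    return f"{criterion} @ level {level}: {rubric[criterion][level]}"

print(describe("history_taking", 3))
```

Pairing each numeric rating with its descriptor in this way is what lets a rubric turn a bare score into actionable feedback.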

Rubrics can be used as vital tools in CBME to improve assessment, feedback, and learning, and to solve the problems of subjectivity in assessment. In addition, they allow coherence and consistency in assessment. A properly devised rubric can help students recognize and assess their strengths and weaknesses. The characteristics of a good, effective rubric recommended for subjective judgement in medical education are detailed in Box 3.

Box 3: Characteristics of a good, effective rubric

A good, effective rubric must:

  • Clearly specify the criteria and constitution of various levels of performance for those criteria
  • Include descriptions of each criterion for each level of performance
  • Clearly define task description, scale of achievement, dimensions and description of dimensions
  • Help to examine how well students have met learning outcomes rather than how well they perform compared to their peers.



 :: How to Construct a Rubric?


Prior to developing a rubric, one needs to write a clear description of the procedure or skill to be assessed. This description may be incorporated into the rubric itself or exist as a separate document. Beyond this, a rubric generally consists of three main levels, which compose the rubric grid[26],[27] [Figure 2].
Figure 2: Three main levels of a rubric grid



This description defines the behaviours that differentiate the "excellent" level from the "needs improvement" and "critical error" levels. When developing the rubric, it is imperative to decide how many performance levels are adequate for each skill on the scale.


 :: The Way Forward


With the introduction of CBME, medical education has entered a "post-psychometric" phase of assessment spanning very diverse domains of expertise. Renewed interest in the revival of subjective judgement in assessment raises interesting questions about what judgement is and how to rationally aggregate multiple judgements without compromising the abundance of expert perspectives. Studies have shown that subjective assessment can be used with some degree of objectivity for continuous assessment.[28]

The Medical Council of India (MCI) module on assessment for undergraduate medical education lays stress on ongoing developmental feedback, direct observation, multiple assessors, and the use of multiple tools for student assessment under the competency-based curriculum, while advocating the implementation of a low-stakes assessment system at the institutional level.[29] The same has been emphasized in the literature.[4],[12],[30] The big challenge ahead is to understand how subjectivity can be reintroduced into assessment while retaining "rigour".

As argued by Hodges, clinical assessment of students can be compared to clinical judgement, “With experience, expert clinicians become more rapid and more accurate in their recognition of patterns. There is no reason to believe that this process does not operate in education”.[31]


 :: Conclusion


Rotthoff highlighted the role of subjectivity in the assessment of competencies by stating, "we run the risk of investing our resources in the best possible standardization of exams or perfecting checklists and scales in context to outcome based CBME rather than focussing on importance of 'learning process' which is crucial for Competency-Based Education".[32]

What we need to do is look for ways and means of improving the acceptability of expert subjective assessment: assessment that is valid for its context, feasible and quick, based upon observation, and high in educational impact.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
 :: References

1. Schuwirth LW, van der Vleuten CP. Merging views on assessment. Med Educ 2004;38:1208-10.
2. Pereira DA, Flores M, Niklasson L. Assessment revisited: A review of research in assessment and evaluation in higher education. Assess Eval High Educ 2015;41:1008-32.
3. Lurie SJ, Mooney CJ, Lyness JM. Pitfalls in assessment of competency-based educational objectives. Acad Med 2011;86:412-4.
4. Bok HG, Teunissen PW, Favier RP, Rietbroek NJ, Theyse LFH, Brommer H, et al. Programmatic assessment of competency-based workplace learning: When theory meets practice. BMC Med Educ 2013;13:123.
5. Brightwell A, Grant J. Competency based training: Who benefits? Postgrad Med J 2013;89:107-10.
6. ten Cate O, Snell L, Carraccio C. Medical competence: The interplay between individual ability and the health care environment. Med Teach 2010;32:669-75.
7. Loftus S. Competencies in medical education: A trap for the unwary. Med Sci Educ 2016;26:499-502.
8. van der Vleuten CP, Schuwirth LW, Driessen EW, Govaerts MJB, Heeneman S. 12 Tips for programmatic assessment. Med Teach 2015;37:641-6.
9. ten Cate O, Regehr G. The power of subjectivity in the assessment of medical trainees. Acad Med 2019;94:333-7.
10. van der Vleuten CP, Norman GR, de Graaff E. Pitfalls in the pursuit of objectivity: Issues of reliability. Med Educ 1991;25:110-8.
11. Lurie SJ. History and practice of competency-based assessment. Med Educ 2012;46:49-57.
12. Singh T. Student assessment: Issues and dilemmas regarding objectivity. Natl Med J India 2012;25:287-90.
13. Schuwirth L, Ash J. Assessing tomorrow's learners: In competency-based education only a radically different holistic method of assessment will work. Six things we could forget. Med Teach 2013;35:555-9.
14. Keynan A, Friedman M, Benbassat J. Reliability of global rating scales in the assessment of clinical competence of medical students. Med Educ 1987;21:477-81.
15. van der Vleuten CP, Schuwirth LW, Scheele F, Driessen EW. The assessment of professional competence: Building blocks for theory development. Best Pract Res Clin Obstet Gynaecol 2010;24:703-19.
16. Singh T, Sood R. Workplace-based assessment: Measuring and shaping clinical learning. Natl Med J India 2013;26:42-6.
17. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach 2007;29:855-71.
18. Govaerts MJ, van der Vleuten CP, Schuwirth LW, Muijtjens AM. The use of observational diaries in in-training evaluation: Student perceptions. Adv Health Sci Educ Theory Pract 2005;10:171-88.
19. Lynch DC, Swing SR, Horowitz SD, Holt K, Messer JV. Assessing practice-based learning and improvement. Teach Learn Med 2004;16:85-92.
20. Silber CG, Nasca TJ, Paskin DL, Eiger G, Robeson M, Veloski JJ. Do global rating forms enable program directors to assess the ACGME competencies? Acad Med 2004;79:549-56.
21. Singh T. Student assessment: Moving over to programmatic assessment. Int J Appl Basic Med Res 2016;6:149-50.
22. Yune SJ, Lee SY, Im SJ, Kam BS, Baek SY. Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students. BMC Med Educ 2018;18:124.
23. Rughani A. Workplace-based assessment and the art of performance. Br J Gen Pract 2008;58:582-4.
24. Boateng BA, Bass LD, Blaszak RT, Farrar HC. The development of a competency-based assessment rubric to measure resident milestones. J Grad Med Educ 2009;1:45-8.
25. Reddy YM, Andrade H. A review of rubric use in higher education. Assess Eval High Educ 2010;35:435-48.
26. Moskal BM. Scoring rubrics: What, when and how? Pract Assess Res Eval 2000;7:3.
27. Popham WJ. What's wrong and what's right with rubrics. Educ Leadersh 1997;55:72-5.
28. Inayah AT, Anwer LA, Shareef MA, Nurhussen A, Alkabbani HM, Alzahrani AA, et al. Objectivity in subjectivity: Do students' self and peer assessments correlate with examiners' subjective and objective assessment in clinical skills? A prospective study. BMJ Open 2017;7:e012289.
29. Medical Council of India. The regulations on graduate medical education, 1997 – Part II, 2019. Available from: https://www.mciindia.org/ActivitiWebClient/open/getDocument?path=/Documents/Public/Portal/Gazette/GME-06.11.2019.pdf. [Last cited on 2020 May 17].
30. Lockyer J, Carraccio C, Chan M, Hart D, Smee S, Touchie C, et al. Core principles of assessment in competency-based medical education. Med Teach 2017;39:609-16.
31. Hodges B. Assessment in the post-psychometric era: Learning to love the subjective and collective. Med Teach 2013;35:564-8.
32. Rotthoff T. Standing up for subjectivity in the assessment of competencies. GMS J Med Educ 2018;35:Doc29.






 
