Journal of Postgraduate Medicine
LETTER
Year: 2023  |  Volume: 69  |  Issue: 2  |  Page: 122-123

Competency-based medical education and the McNamara fallacy: Assessing the important or making the assessed important?


A Indrayan
Biostatistics Consultant, Max Healthcare, New Delhi, India

Date of Submission: 18-Oct-2022
Date of Acceptance: 21-Dec-2022
Date of Web Publication: 17-Mar-2023

Correspondence Address:
A Indrayan
Biostatistics Consultant, Max Healthcare, New Delhi, India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpgm.jpgm_828_22





How to cite this article:
Indrayan A. Competency-based medical education and the McNamara fallacy: Assessing the important or making the assessed important? J Postgrad Med 2023;69:122-3.





I must make it absolutely clear at the outset that I am no fan of western thinkers, but I firmly believe in Kelvin's dictum: “If you cannot measure it, you cannot improve it.” Many of us unknowingly use qualitative assessment for something that is inherently quantitative. In fact, the term “quality” itself is inherently quantitative: it is either ordinal or metric, but not nominal. Blood group and site of cancer are examples of nominal characteristics, since no category is better or greater than another; nobody uses the term quality for such characteristics, nor is the term assessment appropriate for them. Nominal characteristics cannot be measured, although the term measurement is sometimes used for them generically. This background is needed to appreciate my following observations on the article by Singh and Shah[1] on competency-based medical education in this journal. The authors disfavour quantitative measurement and advocate qualitative assessment of competency.

I am amused once again to see the debate on qualitative-quantitative and subjective-objective measurements raised by Singh and Shah in their article. The debate has gone on ad infinitum and will continue, with experts on both sides of the fence articulating their views on the pros and cons. This reminds me of a debate on measurement in medicine[2] in which some sought to preserve medicine as a 'science' based on expertise rather than depending on 'mundane' measurements. Today, hardly any 'expert' would practice medicine without taking the help of measurements of one kind or another.

The article by Singh and Shah uses the term 'assess' (or its variants) 93 times and 'qualitative' 23 times. These are cliché terms for them, but they betray the original thought because both are inherently quantitative. Without quantity, how can one say that one thing is better or more than another? Several other terms used in the article, such as important, improvement, competency, richness of information, and performance, are all relative in nature, and relativity exists only because of an underlying continuum.

The problem is not with numbers, as made out by the authors, but with our inability to develop suitable metrics to measure the so-called 'abstracts'. In the context of this article, it is our incompetency in developing an appropriate measure of competency. When the authors give the example of the McNamara fallacy, they unnecessarily assign it to the numbers. The fallacy actually lies in identifying what to measure and how; this is where McNamara failed at the time of the Vietnam War. Numbers give the exactitude that ordinal measures cannot provide, and we adopt such ordinals because of our inability to assign exact numbers. Even assigning grades to students, which the authors advocate, requires some measurement. Grades are the classical example of our failure to be confident of the exact marks we award to students. Grades, too, require a cut-off, however invisible, to distinguish grade A from grade B. Human endeavours are seldom perfect, but that does not excuse us from making more intensive efforts to improve. Numbers are not only here to stay but will steadily make further inroads by measuring the currently immeasurable.

Height and weight can be accurately measured given the correct instrument. We are yet to develop such instruments for measuring several characteristics, including competency. Are we trying to fly under the radar by blaming the numbers and hiding our incompetency to measure competency? Would we not like to know who is more competent than whom, or at least who is sufficiently competent to get a 'pass' grade? Even the dichotomy in this case (which is not like the male-female dichotomy) requires assessment and quantitative thinking, with a cut-off to say that a student has attained 'sufficient' competency. The cut-off may not be formal, but it must exist in the mind for such a categorization to be made.
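To make this concrete, the following minimal sketch (in Python, with a purely hypothetical 0-100 scale and cut-offs) illustrates that any grade or pass/fail label ultimately rests on a numeric score compared against a threshold:

```python
# Illustrative only: the 0-100 scale and the cut-offs below are hypothetical,
# not drawn from any actual examination scheme.

def to_grade(score: float) -> str:
    """Map a numeric competency score (0-100) to a category via explicit cut-offs."""
    if score >= 75:
        return "A"
    elif score >= 60:
        return "B"
    elif score >= 50:
        return "Pass"
    else:
        return "Fail"

if __name__ == "__main__":
    for s in (82.0, 63.5, 50.0, 41.0):
        print(f"score {s:5.1f} -> {to_grade(s)}")
```

However invisible the cut-offs may be in practice, the decision rule has exactly this structure.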

A series of appropriately drawn questions, such as we already have for assessing quality of life or for assessing the performance of activities such as diagnosis, is a workable option available today. Efforts along these lines are already under way to quantify medical competency,[3],[4] and they need to be encouraged. Yes, they are surrogates, but we have to start somewhere. Devising a valid metric for measuring competency may look insurmountable today, but it is not intractable in the long run. I wish the authors had emphasized this instead of deprecating quantitative measurements.
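As a purely illustrative sketch, and not any existing validated instrument, the following shows how responses to such a series of questions could be aggregated into a single competency score; the domains, weights, and 1-5 rating scale are assumptions made only for the example:

```python
# Hypothetical instrument: domains, weights, and the 1-5 rating scale are
# assumptions for illustration, not a validated competency questionnaire.

ITEM_WEIGHTS = {
    "history_taking": 0.25,
    "clinical_reasoning": 0.35,
    "communication": 0.20,
    "procedural_skill": 0.20,
}

def composite_score(ratings, max_rating=5):
    """Convert per-domain ratings (1..max_rating) into a weighted score on 0-100."""
    total = 0.0
    for domain, weight in ITEM_WEIGHTS.items():
        total += weight * (ratings[domain] / max_rating)
    return 100.0 * total

if __name__ == "__main__":
    ratings = {"history_taking": 4, "clinical_reasoning": 3,
               "communication": 5, "procedural_skill": 4}
    print(f"composite competency score: {composite_score(ratings):.1f} / 100")
```

Such a surrogate score is crude, but it makes the assessment explicit and reproducible across assessors.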

As science advances, there is no escape from quantitative measurement. Medical science has quickly grabbed and assimilated the facility of laboratory measurements. A large number of scoring systems and indices are being developed and increasingly used. The probability of various outcomes is being assessed with statistical models. The increasing use of such tools suggests that they have been found useful. They are now so pervasive that statistical medicine has been proposed as a medical specialty on the lines of laboratory medicine.[5] In the face of the volatility of humans and their varying competencies, we need as much objectivity as possible. The qualitative-quantitative debate arises when we fail to develop appropriate measuring tools.
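The following minimal sketch, with hypothetical coefficients rather than any published model, illustrates how such a score can feed a statistical model that returns the probability of an outcome:

```python
import math

# Hypothetical logistic model: the intercept and slope are illustrative values,
# not estimates from any published data.

def outcome_probability(score, intercept=-6.0, slope=0.08):
    """Probability of a favourable outcome given a numeric score (logistic model)."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * score)))

if __name__ == "__main__":
    for s in (40, 60, 80):
        print(f"score {s}: P(outcome) = {outcome_probability(s):.2f}")
```

This is the kind of objectivity that scoring systems and statistical models bring to otherwise subjective judgments.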

While advocating qualitative assessment of competency, the authors assume that all assessors are sufficiently competent to assess students' competency. I wish this were true. We have all kinds of teachers, some more competent than others. Even more difficult is ensuring equal competency, which would be a prerequisite for uniform assessment. This difficulty is probably more pronounced in purportedly qualitative assessment, because the assessor's perception may play a larger role in qualitative than in quantitative assessment.

Luckily, the authors are not pleading to discard quantitative assessment. They propose that qualitative assessment must also be done for evaluating competency. But it is not clear whether they want qualitative assessment to be an adjunct to quantitative measurement, or the quantitative to be an adjunct to the qualitative. Which would be more dependable in their opinion, and why? The whole article seems inclined towards qualitative assessment.

In the end, I would like to reiterate that the fallacies referred to by the authors occur not because of quantitative assessment but because we are unable to identify what exactly is to be measured and unable to develop valid tools to measure what we intend to measure. The focus should be on developing appropriate quantitative tools rather than on deprecating quantitative measurements. Until such tools are developed, we may use so-called qualitative assessment purely as a temporary measure, but in the full realization that it is more severely affected by the assessor's knowledge and perception than quantitative assessment is.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Singh T, Shah N. Competency-based medical education and the McNamara fallacy: Assessing the important or making the assessed important? J Postgrad Med 2023;69:35-40.
2. Dale H. Measurement in medicine; introduction. Br Med Bull 1951;7:261-3.
3. Van Heest AE, Armstrong AD, Bednar MS, Carpenter JE, Garvin KL, Harrast JJ, et al. American Board of Orthopaedic Surgery's initiatives toward competency-based education. JB JS Open Access 2022;7:e21.00150.
4. Harrington KL, Teramoto M, Black L, Carey H, Hartley G, Yung E, et al. Physical therapist residency competency-based education: Development of an assessment instrument. Phys Ther 2022;102:pzac019.
5. Indrayan A. Statistical medicine: An emerging medical specialty. J Postgrad Med 2017;63:252-6.




 
