Year: 2015 | Volume: 61 | Issue: 3 | Page: 213-214
KK Deodhar1, B Rekhi1, S Menon1, B Ganesh2
1 Department of Pathology, Biostatistics and Epidemiology, Tata Memorial Hospital, Mumbai, Maharashtra, India
2 Department of Medical Records, Biostatistics and Epidemiology, Tata Memorial Hospital, Mumbai, Maharashtra, India
Date of Web Publication: 26-Jun-2015
Correspondence Address:
K K Deodhar
Department of Pathology, Biostatistics and Epidemiology, Tata Memorial Hospital, Mumbai, Maharashtra
Source of Support: None, Conflict of Interest: None
How to cite this article:
Deodhar K K, Rekhi B, Menon S, Ganesh B. Authors' reply. J Postgrad Med 2015;61:213-4
We thank Dr. Raina for his response to our paper, in which he addressed our audit process and compared it with previous audit reports. Raina has questioned the purely descriptive nature of our audit, stating that it is inefficient and lacks educational value. However, Campbell et al. made this statement in relation to a trial audit conducted in their department on 2% of randomly selected cases. A wide range of features, from macroscopic description to technical quality, was assessed and scored as satisfactory, borderline, or unsatisfactory. The authors opined that such a procedure was inefficient and of limited educational value. In particular, there were problems in agreeing on the criteria for a satisfactory report, and scoring was therefore subjective and arbitrary. This lack of confidence in the data meant that there was no mechanism to close the "audit cycle."
Our study was different in that it assessed the compliance of the contents of a specified group of cases (histopathology reports of carcinoma endometrium) with the required contents that are well defined in the literature. Hence, we feel the above-mentioned statement may not be applicable to our study. The 2% random checking of reports by two senior consultants within the department was proposed and implemented by Zuk et al. in 1991 as an internal quality exercise in histopathology. We accept that our method was descriptive and lacked a formal kappa score. Our aim, however, was to assess compliance with a particular reporting pattern among a group of 13 pathologists who are generalists, not to assess interindividual variability. Thus, "good" is not a statistical statement but rather a measured interpretation of a generalist's reporting. Raina has alluded to references largely from the UK, where these reviews are well entrenched in routine practice. Most practices in UK university hospitals adopted subspecialty reporting (a vertical split) in the early 2000s, which could explain why similar reports about generalist reporting are now less common in UK journals. Our broad aim was to strive for consistency in reporting cancer histopathology within the department. With regard to closing the audit cycle, we need to review the reported contents at a later date to look for changes for the better. We could also address the possibility and logistics of subspecialty-only reporting, which could bring better compliance with minimum data sets in cancer histopathology reporting.
References
Raina S. Performing audit in histopathology. J Postgrad Med 2015;61:213.
Deodhar KK, Rekhi B, Menon S, Ganesh B. An audit of histopathology reports of carcinoma endometrium: Experience from a tertiary referral center. J Postgrad Med 2015;61:84-7.
Campbell F, Griffiths DF. Quantitative audit of the contents of histopathology reports. J Clin Pathol 1994;47:360-1.
Zuk JA, Kenyon WE, Myskow MW. Audit in histopathology: Description of an internal quality assessment scheme with analysis of preliminary results. J Clin Pathol 1991;44:10-6.
Appleton MA, Douglas-Jones AG, Morgan JM. Evidence of effectiveness of clinical audit in improving histopathology reporting standards of mastectomy specimens. J Clin Pathol 1998;51:30-3.