Journal of Postgraduate Medicine
 Open access journal indexed with Index Medicus & ISI's SCI  
ORIGINAL ARTICLE
Year : 2014  |  Volume : 60  |  Issue : 4  |  Page : 362-365

Reporting of various methodological and statistical parameters in negative studies published in prominent Indian Medical Journals: A systematic review


J Charan1, D Saxena2

1 Department of Pharmacology, GMERS Medical College, Patan, Gujarat, India
2 Department of Epidemiology, Indian Institute of Public Health, Gandhinagar, Gujarat, India

Date of Submission: 01-Mar-2014
Date of Decision: 15-Apr-2014
Date of Acceptance: 07-Sep-2014
Date of Web Publication: 05-Nov-2014

Correspondence Address:
Dr. J Charan
Department of Pharmacology, GMERS Medical College, Patan, Gujarat
India

Source of Support: Indian Council of Medical Research (Grant No. 2012-01390). Conflict of Interest: None


DOI: 10.4103/0022-3859.143954



 :: Abstract 

Objectives: Biased negative studies not only reflect poor research effort but also affect patient care, because they discourage further research with similar objectives and leave potentially fruitful research areas unexplored. Published negative studies should therefore be methodologically strong, and all parameters that help a reader judge the validity of the results and conclusions should be reported. There is a paucity of data on the reporting of statistical and methodological parameters in negative studies published in Indian medical journals. The present systematic review was designed to critically evaluate negative studies published in prominent Indian medical journals for the reporting of these parameters. Design: Systematic review. Materials and Methods: All negative studies published in 15 Science Citation Index (SCI) indexed medical journals published from India were included. The investigators evaluated all negative studies for the reporting of various parameters. The primary endpoints were the reporting of "power" and "confidence interval." Results: Power was reported in 11.8% of studies and confidence intervals in 15.7%. Most other parameters, such as sample size calculation (13.2%), type of sampling method (50.8%), names of statistical tests (49.1%), adjustment for multiple endpoints (1%) and post hoc power calculation (2.1%), were reported poorly. Reporting was more frequent in clinical trials than in other study designs, and in journals with an impact factor above 1 than in those below 1. Conclusion: Negative studies published in prominent Indian medical journals do not report statistical and methodological parameters adequately, which may hinder readers' critical appraisal of the reported findings.


Keywords: Negative studies, power calculation, sample size calculation, statistical tests


How to cite this article:
Charan J, Saxena D. Reporting of various methodological and statistical parameters in negative studies published in prominent Indian Medical Journals: A systematic review. J Postgrad Med 2014;60:362-5

How to cite this URL:
Charan J, Saxena D. Reporting of various methodological and statistical parameters in negative studies published in prominent Indian Medical Journals: A systematic review. J Postgrad Med [serial online] 2014 [cited 2019 Nov 18];60:362-5. Available from: http://www.jpgmonline.com/text.asp?2014/60/4/362/143954



 :: Introduction


Studies that do not find a statistically significant difference in the variable of interest are known as negative studies. A negative study is likely to discourage further research in that area. Published negative studies should therefore be methodologically strong, as poor methodology may do harm by preventing further research. [1] A negative result in a between-group analysis may reflect the absence of a true difference (a true negative), an inadequate sample size (an underpowered study), or the use of ineffective or insensitive tools to measure outcomes; the study design therefore becomes very important. [2] In addition, the publication should include all parameters relevant to design and analysis so that the reader can appraise the methodological quality, and hence the validity, of the results. [3] Inadequate reporting of methodological and statistical parameters in articles published in international as well as Indian medical journals has already been reported. [4],[5],[6],[7] Of the few articles published on the reporting quality of negative studies, a majority examined power and sample size alone. [3],[8],[9] The present study was designed to critically evaluate negative studies published in prominent Indian medical journals.
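The "underpowered" failure mode mentioned above can be made concrete with the standard normal-approximation sample size formula for comparing two proportions. A minimal Python sketch (illustrative only; the proportions, alpha and power below are hypothetical examples, not values taken from this study):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for detecting a difference between
    two proportions (normal approximation, unpooled variance; illustrative)."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)          # two-sided significance threshold
    z_b = z(power)                  # quantile for the desired power
    q1, q2 = 1 - p1, 1 - p2
    n = (z_a + z_b) ** 2 * (p1 * q1 + p2 * q2) / (p1 - p2) ** 2
    return math.ceil(n)
```

A study enrolling fewer subjects per group than this formula suggests can report "no significant difference" simply because it lacked the power to detect one.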


 :: Materials and Methods


Site, time period and design

The study was carried out at the Department of Pharmacology, Govt. Medical College, Surat (Gujarat), India, from July 2012 to February 2013. It was designed as a systematic review of negative studies published in Science Citation Index (SCI) indexed medical journals.

Methodology

A two-stage Delphi technique was used to develop a consensus on the tool for data collection, involving journal editors, peer reviewers, members of academia and researchers from Gujarat and other parts of India. [10] The first stage was a training program cum workshop in which various methodological issues related to negative studies were discussed and the data collection tool was developed. Consensus was built on the inclusion criteria for journals and articles, the criteria for labeling a study as negative, and the other parameters to be included in the tool. The workshop produced a first draft, which was then sent to the members by email to develop the final consensus.

Choice of journals

By consensus, all original articles from India published in Science Citation Index (SCI) indexed journals since the year 2000 were eligible. The journals finally included as per the eligibility criteria are listed in [Table 1]. Articles published in these journals between 2000 and 2011 were downloaded and manually screened for eligibility. Only original articles were considered for evaluation; short communications, brief reports and letters to the editor were excluded.
Table 1: SCI indexed Indian medical journals included in the study with their impact factors



Case definition of a negative study

A study was defined as negative where:

  1. The difference between primary endpoints was reported as not significant;
  2. If no primary endpoint was reported, the outcome used for the sample size calculation was not found to be significantly different between groups; or
  3. If there was no sample size calculation and no primary endpoint reported, the difference in the first endpoint reported in the abstract was found to be non-significant.
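The three criteria above act as a fall-through decision rule: each later criterion applies only when the earlier one is unavailable. A small sketch of that logic (the dict field names are hypothetical, and `None` stands for "not reported"):

```python
def is_negative_study(article):
    """Apply the three-step case definition in order of precedence.
    `article` maps each criterion to True/False significance, or None
    when that item was not reported (field names are illustrative)."""
    for key in ("primary_endpoint_significant",          # 1. primary endpoint
                "sample_size_outcome_significant",       # 2. sample size outcome
                "first_abstract_endpoint_significant"):  # 3. first abstract endpoint
        result = article.get(key)
        if result is not None:       # first reported criterion decides
            return result is False   # negative = difference not significant
    return False                     # nothing reported: cannot label negative
```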


Article selection and assessment

The first author manually searched the original articles and downloaded the negative studies that met the above criteria. All selected articles were assessed for the reporting of sample size, the method of sample size calculation, methods for reducing bias (randomized/non-randomized), descriptive and inferential statistics, P values and confidence intervals, power (either in the design phase or as a post hoc power analysis), adjustment for multiple endpoints, handling of missing data, and so on. Where applicable, parameters such as blinding, allocation concealment and analysis of missing data were also evaluated. Methodological issues observed by the investigators during the analysis of articles were also recorded where they were considered worth mentioning.

Statistics

Descriptive statistics (frequencies and percentages) were used. Journals with an impact factor above 1 were compared with those below 1, and clinical trials with other study designs, using Fisher's exact test. Data were analyzed with OpenEpi version 3.01.
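For reference, Fisher's exact test on a 2x2 table sums the hypergeometric probabilities of every table (with the same margins) as or more extreme than the one observed. A pure-Python sketch of the two-sided test, included here only as an illustration of the method (the authors used OpenEpi, and any counts passed in would be hypothetical):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]].
    Sums the probabilities of all tables whose hypergeometric probability
    does not exceed that of the observed table."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p_table(x):
        # Hypergeometric probability that cell (1,1) equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # Small tolerance guards against floating-point ties
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))
```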


 :: Results


Demographics

A total of 15 journals were included, of which 4 (Indian Journal of Medical Research, Indian Journal of Ophthalmology, Indian Pediatrics and the Journal of Postgraduate Medicine) had an impact factor greater than 1. Of the 7566 original articles published between 2000 and 2011, 300 (3.96%) were negative studies and were selected [see [Figure 1], PRISMA chart]. The majority (279) were human studies; 19 were animal studies and 2 pertained to microbiology, and these 21 were excluded from the analysis. The 279 human studies included 55 clinical trials.
Figure 1: PRISMA flow chart



Reporting of methodology

Of the 279 studies, power was mentioned in 33 (11.8%) and a confidence interval in 44 (15.7%). The method of sample size calculation was reported in 37 (13.2%) studies. Methods for reducing bias (use of randomization) were reported in 142 (50.8%) studies; of these 142, the exact process (random number table or computer generated) was mentioned in 62 (43.6%). The statistical tests used to analyze the endpoint were named in 137 (49.1%) studies. P values were reported in 208 (74.5%) studies: exact P values in 133 (63.9%), cut-off values in 72 (34.6%), and a value given only as "NS" in 3. Adjustment for multiple endpoints was reported in only three (1%) studies, and post hoc power in only six (2.1%).
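The point percentages above can themselves be given the confidence intervals whose absence the paper criticizes. A sketch of the Wilson score interval (our addition for illustration, not part of the study's own analysis), applied to the 33 of 279 studies that reported power:

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(k, n, conf=0.95):
    """Wilson score confidence interval for a binomial proportion k/n."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)   # e.g. ~1.96 for 95%
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# 33 of 279 studies reported power: point estimate 11.8%
low, high = wilson_ci(33, 279)
```

With these counts the 95% interval is roughly 8.5% to 16.1%, a spread that the bare "11.8%" conceals.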

Sub group analysis of clinical trials

Of the 55 clinical trials, blinding was reported in 35 (63.6%) and allocation concealment in 18 (32.7%). Six of the 55 trials had no missing data or dropouts; of the remaining 49, a missing data analysis was reported in 16 (32.6%). Intention-to-treat analysis was reported in 13 of these 16 trials. There was no significant difference in the reporting of trials between journals with an impact factor below 1 and those above 1.

Power, sample size calculation, methods to reduce bias, the exact sampling process and P values were reported more frequently in clinical trials than in observational studies [Table 2]. Journals with an impact factor greater than 1 showed better reporting of methodology [Table 2].
Table 2: Reporting of various methodological and statistical parameters in negative studies published in Indian medical journals




 :: Discussion


The present study was designed to critically evaluate negative studies published in prominent medical journals of India. Statistical and methodological parameters were poorly reported in the published negative studies. Most parameters were reported more frequently in clinical trials than in other study designs, and in journals with an impact factor above 1 than in those below 1. Similar findings have been published elsewhere. [3] Compared with the similar study by Hebert et al. (2002), reporting appears to be lower in Indian biomedical journals. [3] The results are also close to those of another study conducted by the authors. [8] The study by Hebert et al. was based on high impact factor journals, which explains the difference: high impact journals have full-time staff, high editorial standards, statistical reviewers, high rejection rates and demanding peer review. [11],[12] In the present study, too, better reporting of methodology was seen in journals with an impact factor greater than 1. Compared with published positive or superiority studies, the frequency of reported parameters in this study is much lower. [13],[14] This difference may arise because negative studies are published more often in lower impact factor journals than in high impact factor ones. [15]

A negative study can do much harm by halting research. The issue of bias in the publication of negative studies is now more important than ever, as there is both encouragement and an obligation to report negative research. [16],[17],[18] Several journals are dedicated to publishing negative research and to ensuring that its reporting quality is as good as that of positive studies. The present study is limited by the exclusion of brief reports and short communications (where the reporting of methodology may be constrained by word count) and by its restriction to Indian journals. Another limitation is the choice of SCI-indexed journals only, as inclusion of all Indian journals would have made the study impractical. In addition, the case definition of a negative study, in particular the third criterion, was devised by the authors in the absence of a standard definition. [3],[19],[20] It is also possible that authors followed adequate methodology but simply failed to report it. The findings of this study may help ensure better quality of reporting by Indian authors and Indian journals.


 :: Acknowledgments


We would like to acknowledge the Indian Council of Medical Research for providing an extramural grant (Grant no. 2012-01390) for the conduct of this study, and all the editors, academicians and researchers who helped us develop the proforma. We would also like to thank Dr. Ambuj Kumar, Associate Professor, Centre for EBM, University of South Florida, for help with manuscript writing during the manuscript-writing workshop in Surat (Fogarty training grant #1D43TW006793-01A2-AITRP). We are thankful to the anonymous reviewers for their constructive comments on our manuscript.

 
 :: References

1. Sexton SA, Ferguson N, Pearce C, Ricketts DM. The misuse of 'no significant difference' in British orthopaedic literature. Ann R Coll Surg Engl 2008;90:58-61.
2. Jayakaran C, Saxena D, Yadav P. Negative studies published in Indian medical journals are underpowered. Indian Pediatr 2011;48:490-1.
3. Hebert RS, Wright SM, Dittus RS, Elasy TL. Prominent medical journals often provide insufficient information to assess the validity of studies with negative results. J Negat Results Biomed 2002;1:1.
4. Balasubramanian SP, Wiener M, Alshameeri Z, Tirovoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: Can we do better? Ann Surg 2006;244:663-7.
5. Wang G, Mao B, Xiong ZY, Fan T, Chen XZ, Wang L, et al.; CONSORT Group for Traditional Chinese Medicine. The quality of reporting of randomized controlled trials of traditional Chinese medicine: A survey of 13 randomly selected journals from mainland China. Clin Ther 2007;29:1456-67.
6. Jaykaran G, Kantharia ND, Preeti Y, Bharddwaj P, Goyal J. Reporting statistics in clinical trials published in Indian journals: A survey. Afr Health Sci 2010;10:204-7.
7. Jaykaran, Kantharia ND, Yadav P, Deoghare S. Reporting of the methodological quality and ethical aspects in clinical trials published in Indian journals: A survey. J Postgrad Med 2011;57:82-3.
8. Jaykaran, Saxena D, Yadav P, Kantharia ND. Negative studies published in medical journals of India do not give sufficient information regarding power/sample size calculation and confidence interval. J Postgrad Med 2011;57:176-7.
9. Breau RH, Carnat TA, Gaboury I. Inadequate statistical power of negative clinical trials in urological literature. J Urol 2006;176:263-6.
10. Linstone HA, Turoff M. The Delphi Method: Techniques and Applications. Reading, Massachusetts: Addison-Wesley; 1975.
11. Loscalzo J. Can scientific quality be quantified? Circulation 2011;123:947-50.
12. Bakker M, Wicherts JM. The (mis)reporting of statistical results in psychology journals. Behav Res Methods 2011;43:666-78.
13. Charles P, Giraudeau B, Dechartres A, Baron G, Ravaud P. Reporting of sample size calculation in randomised controlled trials: Review. BMJ 2009;338:b1732.
14. Abdul Latif L, Daud Amadera JE, Pimentel D, Pimentel T, Fregni F. Sample size calculation in physical medicine and rehabilitation: A systematic review of reporting, characteristics, and results in randomized controlled trials. Arch Phys Med Rehabil 2011;92:306-15.
15. Littner Y, Mimouni FB, Dollberg S, Mandel D. Negative results and impact factor: A lesson from neonatology. Arch Pediatr Adolesc Med 2005;159:1036-7.
16. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008;3:e3081.
17. Dirnagl U, Lauritzen M. Fighting publication bias: Introducing the negative results section. J Cereb Blood Flow Metab 2010;30:1263-4.
18. Turner EH, Knoepflmacher D, Shapley L. Publication bias in antipsychotic trials: An analysis of efficacy comparing the published literature to the US Food and Drug Administration database. PLoS Med 2012;9:e1001189.
19. Moher D, Dulberg CS, Wells GA. Statistical power, sample size, and their reporting in randomized controlled trials. JAMA 1994;272:122-4.
20. Dimick JB, Diener-West M, Lipsett PA. Negative results of randomized clinical trials published in the surgical literature: Equivalency or error? Arch Surg 2001;136:796-800.





 

2004 - Journal of Postgraduate Medicine
Official Publication of the Staff Society of the Seth GS Medical College and KEM Hospital, Mumbai, India
Published by Wolters Kluwer - Medknow