Reporting of various methodological and statistical parameters in negative studies published in prominent Indian Medical Journals: A systematic review
J Charan1, D Saxena2
1 Department of Pharmacology, GMERS Medical College, Patan, Gujarat, India
2 Department of Epidemiology, Indian Institute of Public Health, Gandhinagar, Gujarat, India
Source of Support: Indian Council of Medical Research (Grant No. 2012-01390). Conflict of Interest: None. DOI: 10.4103/0022-3859.143954
Objectives: Biased negative studies not only reflect poor research effort but also affect patient care, as they discourage further research with similar objectives and leave potentially fruitful research areas unexplored. Published negative studies should therefore be methodologically strong, and all parameters that help a reader judge the validity of the results and conclusions should be reported. There is a paucity of data on the reporting of statistical and methodological parameters in negative studies published in Indian medical journals. The present systematic review was designed to critically evaluate negative studies published in prominent Indian medical journals for the reporting of statistical and methodological parameters. Design: Systematic review. Materials and Methods: All negative studies published in 15 Science Citation Indexed (SCI) medical journals published from India were included. The investigators evaluated all negative studies for the reporting of various parameters. Primary endpoints were the reporting of "power" and "confidence interval." Results: Power was reported in 11.8% of studies and confidence intervals in 15.7%. Most parameters, such as sample size calculation (13.2%), sampling method (50.8%), name of statistical tests (49.1%), adjustment for multiple endpoints (1%) and post hoc power calculation (2.1%), were reported poorly. Reporting was more frequent in clinical trials than in other study designs, and in journals with an impact factor above 1 than in those with an impact factor below 1. Conclusion: Negative studies published in prominent Indian medical journals do not report statistical and methodological parameters adequately, which may hamper readers' critical appraisal of the reported findings.
Keywords: Negative studies, power calculation, sample size calculation, statistical tests
Studies that do not find a statistically significant difference in the variable of interest are known as negative studies. A negative study is likely to discourage further research in that area. Published negative studies should therefore be methodologically strong, as poor methodology may harm by preventing further research. A negative result in a between-group analysis could reflect a lack of a true difference (true negative), an inadequate sample size (an underpowered study), or the use of ineffective or insensitive outcome measures; hence the design becomes very important. In addition, the publication should include all parameters relevant to the design and analysis so that the reader can appraise the methodological quality, and hence the validity, of the results. Inadequate reporting of methodological and statistical parameters in articles published in international as well as Indian medical journals has already been described. Of the few articles published on the reporting quality of negative studies, a majority examined power and sample size alone. The present study was designed to critically evaluate negative studies published in prominent Indian medical journals.
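To make the "underpowered study" explanation above concrete, the standard two-proportion sample size formula shows how the required sample size grows as the expected difference shrinks. The sketch below is illustrative only (it is not part of the original study's methods); the effect sizes are hypothetical.

```python
import math
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size to detect p1 vs p2 with a two-sided z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detecting a 50% vs 40% response rate
# requires far more participants than detecting 50% vs 30%
print(sample_size_two_proportions(0.50, 0.40))  # 385 per group
print(sample_size_two_proportions(0.50, 0.30))  # 91 per group
```

A trial run with fewer participants than this calculation demands can return a non-significant result even when a real difference exists, which is why reporting the power calculation matters for appraising a negative study.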
Site and time period and design
The study was conducted at the Department of Pharmacology, Govt. Medical College, Surat (Gujarat), India, from July 2012 to February 2013. It was a systematic review of negative studies published in Science Citation Indexed (SCI) medical journals.
A two-stage Delphi technique was used to develop a consensus on the tool to be used for data collection. This was undertaken amongst journal editors, peer reviewers, members of academia and researchers from Gujarat state and other parts of India. The first stage involved a training program cum workshop to discuss methodological issues related to negative studies and to develop the data collection tool. Consensus was built on the inclusion criteria for journals and articles, the criteria for labeling a study as negative, and the other parameters to be included in the tool. The workshop produced a first draft, which was then sent to the members by email to develop a final consensus.
Choice of journals
The consensus was to include all original articles from India published in Science Citation Indexed (SCI) journals since the year 2000. The journals finally included as per the eligibility criteria are listed in [Table 1]. Articles published in these journals between 2000 and 2011 were downloaded and manually searched for eligibility. Only original articles were considered for evaluation; short communications, brief reports and letters to the editor were excluded.
Case definition of a negative study
This was defined as one where:
The first author manually searched the original articles and downloaded negative studies based on the above criteria. All selected articles were assessed for the reporting of sample size, method of sample size calculation, methods for reducing bias (randomized/non-randomized), descriptive and inferential statistics, P value and confidence interval, power either at the design phase or as a post hoc power analysis, adjustment for multiple endpoints, and handling of missing data. Where applicable, parameters such as blinding, allocation concealment and analysis of missing data were also evaluated. We also decided to document any noteworthy methodological issues observed during the analysis of the articles.
Descriptive statistics (frequencies and percentages) were used. Journals with an impact factor above 1 were compared with those below 1, and study designs were compared with each other, using Fisher's exact test. Data were analyzed with OpenEpi version 3.01.
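A comparison of this kind can also be reproduced with SciPy's implementation of Fisher's exact test. The 2×2 counts below are hypothetical (the actual counts are in Table 2, not reproduced here); the sketch only shows the shape of the analysis.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = clinical trials vs. observational studies,
# columns = parameter reported vs. not reported (counts are illustrative only)
table = [[15, 40],
         [18, 206]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, P = {p_value:.4f}")
```

Fisher's exact test is the appropriate choice here because several cells in such comparisons have small expected counts, where the chi-square approximation is unreliable.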
A total of 15 journals were included, of which 4 (Indian Journal of Medical Research, Indian Journal of Ophthalmology, Indian Pediatrics and the Journal of Postgraduate Medicine) had an impact factor greater than 1. Of the 7566 original articles published between 2000 and 2011, 300 (3.96%) were negative studies and were selected [Figure 1, PRISMA chart]. A majority (279) were human studies, 19 were animal studies and 2 pertained to microbiology; the animal and microbiology studies were excluded from analysis. There were 55 clinical trials.
Reporting of methodology
Of the 279 studies, power was mentioned in 33 (11.8%) and a confidence interval in 44 (15.7%). The method of sample size calculation was reported in 37 (13.2%) studies. Use of randomization as a method for reducing bias was reported in 142 (50.8%) studies; of these, the exact process (random number table or computer generated) was mentioned in 62 (43.6%). The statistical tests used to analyze the endpoints were named in 137 (49.1%) studies. P values were reported in 208 (74.5%) studies: exact P values in 133 (63.9%), cut-off values in 72 (34.6%), and "NS" in 3. Adjustment for multiple endpoints was reported in only three (1%) studies and post hoc power in only six (2.1%).
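As an aside, the study's own proportions can themselves be given confidence intervals, one of the very parameters found to be under-reported. A minimal sketch using the simple Wald interval (the interval choice is ours, not stated in the paper) for the observed 33/279 studies reporting power:

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% Wald confidence interval for a binomial proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Power was reported in 33 of the 279 analyzed studies
low, high = wald_ci(33, 279)
print(f"11.8% (95% CI {low:.1%} to {high:.1%})")  # ~8.0% to ~15.6%
```

Reporting an interval alongside the point estimate, as done here, is exactly what only 15.7% of the reviewed studies did.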
Sub group analysis of clinical trials
Of the 55 clinical trials, blinding was reported in 35 (63.6%) and allocation concealment in 18 (32.7%). Six of the 55 trials had no missing data or dropouts. Of the remaining 49, missing data analysis was reported in 16 (32.6%), and intention-to-treat analysis in 13 of these 16. There was no significant difference in the reporting of trials between journals with an impact factor below 1 and above 1.
Power, sample size calculation, methods to reduce bias, the exact sampling process and P values were reported more frequently in clinical trials than in observational studies [Table 2]. Journals with an impact factor greater than 1 had better reporting of methodology [Table 2].
The present study was designed to critically evaluate negative studies published in prominent medical journals of India. Statistical and methodological parameters were reported poorly in the published negative studies. Most parameters were reported more frequently in clinical trials than in other study designs, and in journals with an impact factor above 1 than in those below 1. Similar findings have been published elsewhere. Compared with a similar study by Herbert et al. (2002), reporting appears to be lower in Indian biomedical journals. These results are also close to those of another study conducted by the authors. The study by Herbert et al. was based on high impact factor journals, which explains the difference in observations: high impact journals have full-time staff, high editorial standards, statistical reviewers, high rejection rates and demanding peer review. In the present study, too, better reporting of methodology was seen in journals with an impact factor greater than 1. Compared with the reporting of positive or superiority studies published in journals, the frequency of reporting of the parameters in this study is much lower. This difference between positive and negative studies may arise because negative studies are published more often in lower impact factor journals than in high impact factor journals.
A negative study can do much harm by halting research. The issue of bias in the publication of negative studies is now more important than ever, as there is both encouragement and an obligation to report negative research. Several journals are dedicated to publishing negative research and to ensuring that its reporting quality matches that of positive studies. The present study is limited by the exclusion of brief reports and short communications (where reporting of methodology may be constrained by word count) and by its restriction to Indian journals. Another limitation is the choice of SCI-indexed journals only, as inclusion of all Indian journals would have made the study impractical. In addition, the case definition of a negative study, in particular its third point, was devised by the authors in the absence of a standard definition. It is also possible that authors followed adequate methodology but simply failed to report it. The findings of this study may help ensure better quality of reporting by Indian authors and Indian journals.
We would like to acknowledge the Indian Council of Medical Research for providing an extramural grant (Grant no. 2012-01390) for the conduct of this study, and all the editors, academicians and researchers who helped us develop the proforma. We would also like to thank Dr. Ambuj Kumar, Associate Professor, Centre for EBM, University of South Florida, for help with manuscript writing during the manuscript-writing workshop in Surat (Fogarty training grant # 1D43TW006793-01A2-AITRP). We are thankful to the anonymous reviewers for their constructive comments on our manuscript.
[Table 1], [Table 2]