Journal of Postgraduate Medicine
EDITORIAL COMMENTARY
Year : 2019  |  Volume : 65  |  Issue : 2  |  Page : 70-71

Interpreting the meta-analysis of efficacy of vitamin D supplementation in major depression


L Jeyaseelan
Department of Biostatistics, Christian Medical College, Vellore - 632 002, Tamil Nadu, India

Date of Web Publication: 26-Apr-2019

Correspondence Address:
L Jeyaseelan
Department of Biostatistics, Christian Medical College, Vellore - 632 002, Tamil Nadu
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpgm.JPGM_267_18





How to cite this article:
Jeyaseelan L. Interpreting the meta-analysis of efficacy of vitamin D supplementation in major depression. J Postgrad Med 2019;65:70-1

How to cite this URL:
Jeyaseelan L. Interpreting the meta-analysis of efficacy of vitamin D supplementation in major depression. J Postgrad Med [serial online] 2019 [cited 2023 Jun 10];65:70-1. Available from: https://www.jpgmonline.com/text.asp?2019/65/2/70/257284




Vellekkatt and Menon[1] have performed an interesting meta-analysis to summarize the efficacy of vitamin D supplementation in the treatment of major depression. This well-written article included 43 clinical trial articles in the systematic review, of which 4 finally qualified for further analysis. The conclusion of the meta-analysis (MA) was that the depression symptom score differed by 0.58 units in favor of the vitamin D groups when compared with the control groups. Of the four studies, one study contributed 746 of the total 948 subjects, while the other studies together contributed roughly 200 subjects. Thus, the result of that one study dominated the results of the others. Nevertheless, all of them showed a favorable result for the vitamin D group over the control group.
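To make the dominance of the large trial concrete, the sketch below computes approximate pooled weights under the simplifying assumption that within-study variances are similar, so that inverse-variance weights are roughly proportional to sample size. The 746/948 split comes from the text; the sizes of the three smaller trials are hypothetical placeholders chosen only to sum to roughly 200.

```python
# Illustrative only: when within-study variances are similar, inverse-variance
# weights are roughly proportional to sample size. The 746/948 split is from
# the text; the three smaller sample sizes are hypothetical placeholders.
sample_sizes = {"large trial": 746, "trial B": 80, "trial C": 70, "trial D": 52}

total = sum(sample_sizes.values())
for study, n in sample_sizes.items():
    print(f"{study}: approximate weight = {n / total:.1%}")

# The large trial receives close to 79% of the pooled weight, which is why
# its result dominates the combined estimate.
```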

The analysis of the effect sizes of the selected studies considered two models: the fixed-effect model and the random-effects model. Under the fixed-effect model, we assume that there is one true effect size and that all differences in observed effects between studies are due to sampling error. For example, a multisite study run by the pharmaceutical industry might want the average effect across its few sites, since all of them use the same protocol, and would therefore use the fixed-effect model. Thus, the fixed-effect model implies that there is no sampling of studies and that the interest is restricted to the selected studies only. The random-effects model, on the other hand, allows the true effect to vary from study to study. For example, the effect size might be higher (or lower) in studies where the participants are older, and so on.[2] Strictly speaking, the random-effects model is valid when the studies are a random sample of all possible studies. In the present meta-analysis,[1] the studies were not randomly selected, but they are nevertheless a sample, and this is the case with almost any MA. As long as the intention is to extrapolate the results to other similar studies, the random-effects model is considered the better representation, as argued by Schmidt et al.[3]
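As an illustration of the two models, here is a minimal sketch of inverse-variance pooling under the fixed-effect model and DerSimonian-Laird pooling under the random-effects model. The effect sizes and standard errors are invented for illustration and are not the four trials analysed in the present MA.

```python
import numpy as np

# Hypothetical study effect sizes (standardized mean differences) and their SEs.
y  = np.array([0.65, 0.40, 0.55, 0.30])
se = np.array([0.08, 0.25, 0.30, 0.35])

# Fixed-effect model: one true effect, weights = 1 / within-study variance.
w_fixed = 1.0 / se**2
theta_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)

# Between-study variance tau^2 (DerSimonian-Laird estimator).
Q  = np.sum(w_fixed * (y - theta_fixed) ** 2)
df = len(y) - 1
C  = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

# Random-effects model: weights = 1 / (within-study + between-study variance).
w_random = 1.0 / (se**2 + tau2)
theta_random = np.sum(w_random * y) / np.sum(w_random)

print(f"fixed-effect estimate  : {theta_fixed:.3f}")
print(f"tau^2                  : {tau2:.3f}")
print(f"random-effects estimate: {theta_random:.3f}")
```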

The heterogeneity of the effect sizes is an important statistic that quantifies whether the studies can be pooled together; it indicates whether the studies at hand are of a similar kind or not. The result of the pooled analysis may be questionable if the studies differ greatly from one another. The statistic that assesses the variability across studies is the Q statistic, while the I² index, derived from the Q statistic, quantifies the amount of variance on a relative scale. For example, if we plan to speculate about reasons for variation, we should first use I² to determine what proportion of the observed variance is real. If I² is close to zero, then almost all of the observed variance is spurious, which means that there is nothing to explain.[4]
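The relation between the two statistics is I² = [(Q − df)/Q] × 100%, set to zero when Q ≤ df. A small sketch, using arbitrary illustrative numbers rather than values from the present MA:

```python
def i_squared(Q: float, df: int) -> float:
    """Higgins' inconsistency index I^2, returned as a percentage."""
    if Q <= 0:
        return 0.0
    return max(0.0, (Q - df) / Q) * 100.0

print(i_squared(Q=2.1, df=3))   # Q below df -> I^2 = 0%: nothing to explain
print(i_squared(Q=12.0, df=3))  # I^2 = 75%: substantial heterogeneity
```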

In the present MA,[1] three of the studies look similar in terms of variability; however, the study by Wang et al.[5] has a narrower confidence interval (CI) than the other studies. Higgins et al.[6] have provided some tentative benchmarks for I². The present MA reported an I² of zero, which suggests that there is no considerable between-study variability.

A MA is expected to provide an accurate summary of the studies included in the analysis. However, if these studies are a biased sample of all relevant studies, the MA will provide a biased result. It is well known that studies reporting relatively high effect sizes are more likely to be published than studies reporting lower or non-significant effect sizes, so any such bias in the literature is likely to be reflected in the MA. This is generally known as publication bias, and it is important to recognize whether it is present. The funnel plot is a simple graphical method for assessing publication bias. The X-axis of the funnel plot shows the effect size, whereas the Y-axis shows the standard error (SE). A smaller SE implies a large sample size, whereas a larger SE implies a small-sample study. Small studies are expected to scatter around the bottom of the funnel plot, whereas large studies are expected to sit at the top. Usually, two vertical lines are displayed: one at the null value (relative risk and odds ratio = 1, average effect size = 0) and another at the average effect size. An asymmetric scatter of studies around the null value suggests publication bias; in other words, small or non-significant studies are likely to have been left out, leaving a hollow space below and to the left of the null value of 0.
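For readers who wish to draw such a plot themselves, the following sketch uses matplotlib with fabricated effect sizes and standard errors (not the data from the present MA); the Y-axis is inverted so that large studies with small SEs sit at the top of the funnel.

```python
import numpy as np
import matplotlib.pyplot as plt

# Fabricated studies: a mix of small and large trials scattered around a
# common mean effect, with wider scatter as the standard error grows.
rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.45, size=30)
effects = rng.normal(loc=0.4, scale=se)

fig, ax = plt.subplots()
ax.scatter(effects, se, alpha=0.7)
ax.axvline(0.0, linestyle="--", label="null value (0)")
ax.axvline(effects.mean(), linestyle="-", label="average effect size")
ax.set_xlabel("Effect size (standardized mean difference)")
ax.set_ylabel("Standard error")
ax.invert_yaxis()   # large studies (small SE) at the top of the funnel
ax.legend()
plt.show()

# Asymmetry, such as an empty lower-left region near the null value,
# would suggest that small or non-significant studies are missing.
```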

The funnel plot in the present MA needs careful interpretation. The study by Wang et al.[5] had a large sample size and therefore a smaller SE and a narrower confidence interval (CI) (refer to Figure 2).[1] However, the funnel plot showed marked asymmetry, suggesting a bias. Thus, there are two viewpoints on this funnel plot: either the present MA is affected by publication bias, or, in reality, there simply is no small-scale non-significant study at all. In the present MA, the presentation of the funnel plot is appreciated despite there being only four studies in toto.[3] From the flowchart, it is obvious that the authors made a considerable effort to search for both published and unpublished studies.



 
 :: References

1. Vellekkatt F, Menon V. Efficacy of vitamin D supplementation in major depression: A meta-analysis of randomized controlled trials. J Postgrad Med 2019;65:74-80.
2. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to Meta-Analysis. West Sussex: John Wiley & Sons; 2009.
3. Schmidt FL, Oh IS, Hayes TL. Fixed- versus random-effects models in meta-analysis: Model properties and an empirical comparison of differences in results. Br J Math Stat Psychol 2009;62:97-128.
4. Sterne JA, Egger M, Smith GD. Systematic reviews in healthcare: Investigating and dealing with publication and other biases in meta-analysis. BMJ 2001;323:101-5.
5. Wang Y, Liu Y, Lian Y, Li N, Liu H, Li G. Efficacy of high-dose supplementation with oral vitamin D3 on depressive symptoms in dialysis patients with vitamin D3 insufficiency: A prospective, randomized, double-blind study. J Clin Psychopharmacol 2016;36:229-35.
6. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ 2003;327:557-60.




 
