Journal of Postgraduate Medicine
EDUCATION FORUM
Year : 2021  |  Volume : 67  |  Issue : 4  |  Page : 219-223

The importance of small samples in medical research

A Indrayan, A Mishra
Clinical Research Department, Max Healthcare, New Delhi, India

Date of Submission: 12-Mar-2021
Date of Decision: 11-May-2021
Date of Acceptance: 07-Aug-2021
Date of Web Publication: 26-Nov-2021

Correspondence Address:
A Indrayan
Clinical Research Department, Max Healthcare, New Delhi
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jpgm.JPGM_230_21



 :: Abstract 


Almost all biostatisticians and medical researchers believe that a large sample is always helpful in providing more reliable results. While this is true in some specific cases, a large sample may fail to help in more situations than we contemplate because of the higher possibility of errors and reduced validity. Many medical breakthroughs have occurred with self-experimentation and single experiments. Studies, particularly analytical studies, may provide more truthful results with a small sample because intensive efforts can be made to control all the confounders, wherever they operate, and sophisticated equipment can be used to obtain more accurate data. A large sample may be required only for studies with highly variable outcomes, where an estimate of the effect size with high precision is needed, or when the effect size to be detected is small. This communication underscores the importance of small samples in reaching a valid conclusion in certain situations and describes the situations where a large sample is not only unnecessary but may even compromise validity because full care cannot be exercised in the assessments. What sample size is small depends on the context.


Keywords: Medical research, n = 1, self-experiments, small sample


How to cite this article:
Indrayan A, Mishra A. The importance of small samples in medical research. J Postgrad Med 2021;67:219-23

How to cite this URL:
Indrayan A, Mishra A. The importance of small samples in medical research. J Postgrad Med [serial online] 2021 [cited 2023 Jun 9];67:219-23. Available from: https://www.jpgmonline.com/text.asp?2021/67/4/219/331272





 :: Introduction


Statisticians, particularly those assisting medical research, are infamous for insisting on a large sample. “The larger the sample, the more reliable the result” is their dictum. Recent examples are the phase-III vaccine trials for coronavirus disease 2019, where each company conducted trials on thousands of people to assess the efficacy of the vaccine and the incidence of side effects. We explain later why such a large sample is required in this case, but several other studies have used unnecessarily huge samples. For example, Schnitzer et al.[1] conducted a study on 47,935 patients with osteoarthritis and 10,639 patients with rheumatoid arthritis to compare the prescription rates of rofecoxib and celecoxib. A trivial difference with such big samples is almost certain to be statistically significant, as rightly mentioned by the authors. The sample size was not statistically determined but was based on the cases available in a large database of pharmacy claims in the US. Krishna et al.[2] studied the records of 782,320, 1,393,570, and 1,049,868 patients with allergic rhino-conjunctivitis, atopic eczema, and asthma, respectively, and twice as many controls, to find the relative risk of autoimmune disorders in such cases. This, too, was based on a retrospective cohort extracted from a UK primary care database with no justification of the sample size. Among clinical trials, the effect of tranexamic acid on the mortality of different types of trauma patients was studied with a sample of 10,060 in the treatment arm and 10,067 in the control arm.[3] This study included 274 hospitals in 40 countries, and no justification for the sample size was provided. There is also an inclination to move to mega trials based on huge samples. Thus, a large sample is used not just for retrospective data but also for prospective trials.

The above-mentioned examples show that a study is sometimes done on a large sample while ignoring the statistical considerations of the desired precision and confidence level in the case of estimation, and of the minimum effect size to be detected and the power in the case of hypothesis testing. Obtaining a large sample has become much easier in many cases these days because the data are available with individual institutions in electronic form, and these institutions can form a consortium to achieve an impressively large sample, purportedly to increase the confidence in the results and to assert that their results have a high chance of being closer to the truth. This communication explains that this assertion could be false in some cases despite a very large sample, and that studies on small samples can produce more truthful results in many cases because they can be carried out with more care. As described next, studies even with n = 1 can sometimes provide breakthrough findings. We also identify specific situations where a large sample may be required.


 :: The Significance of n = 1


Scientists would agree that only one (n = 1) counter-example is enough to dismiss a theory. Such an example provides evidence that something contrary to the existing knowledge 'can' happen. For example, there is no exception to the Pythagoras theorem. Medicine is not such a lucky science: variation in agent, host, and environmental factors and their interactions can overturn any theory. Zhang et al.[4] have provided a counter-example to the conventional wisdom in biomedical optics that longer wavelengths aid deeper imaging in tissue, and Hughes et al.[5] presented a counter-example showing that, in the center of the human ocular lens, there is no lipid turnover in the fiber cells during the entire human life span.

However paradoxical it may sound from the statistical viewpoint, many medical breakthroughs have occurred with a few or even a single observation (n = 1). Edward Jenner's inoculation of a boy with cowpox material in 1796 led to the smallpox vaccine and began immunology as a science.[6] The development of penicillin started from the single observation of Alexander Fleming, who noticed in 1928 that a mold had developed on a contaminated staphylococcus culture plate, concluded that the mold possibly prevented the growth of staphylococci, and reasoned that it could be effective against gram-positive bacteria.[7] He produced a filtrate of the mold cultures, named penicillin, which had a significant antibacterial effect and saved countless lives. The heart transplant by Christiaan Barnard in 1967[8] opened enormous possibilities. Only one instance of death soon after consuming a specific substance is generally considered enough to suspect that the substance can be poisonous, and one person developing a disease on contact with an affected person opens the possibility of its being contagious.

Many studies are based on self-experimentation. Nicholas Senn's experiment of 1901, in which he implanted a piece of a cancerous lymph node from a lip cancer patient into himself and did not develop the disease, was a pointer to the conclusion that cancer is not microbial and not contagious.[9] William Harrington exchanged blood by transfusion between himself and a thrombocytopenic patient in 1950 and thus discovered the immune basis of idiopathic thrombocytopenic purpura, providing evidence of the existence of auto-immunity.[10] Barry Marshall intentionally consumed H. pylori in 1984 and became ill; he then took antibiotics, which relieved his symptoms.[11] Thus, a cause-effect relationship was proposed on the basis of just one observation. Sildenafil citrate (Viagra) was originally developed to treat cardiovascular problems, but Giles Brindley had stunned the audience at a urological conference in 1983 by dropping his pants and demonstrating an erection after injecting his penis with phenoxybenzamine. This showed that the mechanism of erection resides in the penis itself and not in the heart.[12] An experiment on just one person was enough to make the world take note in a way that a study on thousands of subjects might not have. Weisse identified 465 documented instances of self-experimentation.[13] Many of these experiments paved the way for discoveries despite n = 1.

Convincing studies with n = 1 may be few and far between, but they do provide evidence of the possible existence of an effect. They may not be enough to make a generalized statement for the entire target population, but they make a noticeable statement. All case studies are based on single patients, and they succeed in highlighting unusual occurrences that one must be aware of.

n-of-1 trials

In certain situations, an n-of-1 trial can be done, in which two or more treatment regimens are tried on the same patient, with proper randomization, blinding, and washout periods where necessary, if the conditions allow. This does not require a big sample and can have just one patient. Such a trial can determine the optimal intervention for an individual patient and can be a good strategy for individualized medicine,[14] although generalization suffers. Nevertheless, a series of n-of-1 trials can provide a meaningful evidence base. Sedgwick[15] described an n-of-1 trial of paracetamol and celecoxib for osteoarthritis, with 41 patients completing the trial. Wood et al.[16] described n-of-1 trials on 60 patients of a statin, placebo, and no treatment to assess side effects. Stunnenberg et al.[17] have provided a practical flowchart for n-of-1 trials based on an ethical framework. A sketch of how such a design can be organized is given below.
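
As an illustration of the design only, the following sketch generates a randomized treatment schedule for a single patient in an n-of-1 trial; the number of treatment pairs, period lengths, and washout gaps are hypothetical and not taken from any of the cited trials.

```python
# A minimal sketch (hypothetical, not from the cited trials) of an n-of-1
# schedule: several treatment pairs, each containing the active regimen (A)
# and placebo (B) in random order, with a washout gap between periods.
import random

def n_of_1_schedule(n_pairs=3, period_weeks=2, washout_weeks=1, seed=42):
    random.seed(seed)                              # reproducible allocation
    schedule, week = [], 1
    for pair in range(1, n_pairs + 1):
        order = ["A", "B"]
        random.shuffle(order)                      # randomize order within each pair
        for treatment in order:
            schedule.append((pair, treatment, week, week + period_weeks - 1))
            week += period_weeks + washout_weeks   # leave a washout gap
    return schedule

for pair, treatment, start, end in n_of_1_schedule():
    print(f"Pair {pair}: treatment {treatment}, weeks {start}-{end}")
```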


 :: Small n


Sauro and Lewis[18] considered n < 20 small for the completion rate of a task, but even a sample of 2,000 may be small for an extremely rare event (say, less than 1 in 1,000) such as the incidence of epilepsy in the general population.[19] Statistically, a sample with n < 30 for a quantitative outcome, or with np or n(1 − p) less than 8 (where p is the proportion) for a qualitative outcome, is considered small because the normal approximation based on the central limit theorem does not hold in most such cases and an exact method of analysis is required.[20] This means that for p = 0.001 (1 in 1,000), n must be at least 8,000 for the usual normal distribution-based methods to be used. However, this is only for the purpose of choosing the method of statistical analysis. For research, what constitutes a small sample depends on the context, and no hard-and-fast definition can be given. The examples cited in this article illustrate what is small in different contexts.
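
The rule of thumb just quoted can be written out directly. The short sketch below simply restates the criteria from the text (threshold of 8 for np and n(1 − p)) and recovers the n = 8,000 figure for p = 0.001; the function names are illustrative.

```python
import math

def normal_approx_ok(n, p, threshold=8):
    """Rule of thumb quoted in the text: both np and n(1 - p) should
    reach the threshold before normal-based methods are used."""
    return n * p >= threshold and n * (1 - p) >= threshold

def min_n_for_normal_approx(p, threshold=8):
    """Smallest n satisfying the rule; for small p this is about threshold / p."""
    return math.ceil(threshold / min(p, 1 - p))

print(normal_approx_ok(2000, 0.001))    # False: 2,000 is 'small' for a 1-in-1,000 event
print(min_n_for_normal_approx(0.001))   # 8000, as stated in the text
```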

Although multiple problems have been cited with studies on a small sample,[21],[22],[23] many examples exist of useful studies on small samples. Some big discoveries started with case series, such as disseminated Kaposi sarcoma in young homosexual men[24] and pneumocystis pneumonia.[25] Most preclinical studies are done on a small sample of animals, particularly for regimens with a potentially harmful outcome such as insecticides. Animal experiments can be done in highly controlled conditions to nearly eliminate all the confounders and thus establish a cause-effect relationship without studying a big sample. This shows that the crucial requirement for analytical research is not the sample size but the control of all the confounders. When they are under control, the variance decreases, and sufficient power is achieved with a smaller sample. Thus, a study with a small sample can provide more believable results than one on a large sample with uncontrolled confounders. Small samples have a tremendous advantage in that highly sophisticated and accurate measurements can be made with all the precautions in place. Measurement errors and biases can be more easily controlled and more easily identified in a small sample. The aggregation errors that occur due to combining small and large values are less likely with small samples. Small samples give quick results, can be studied in one center without the hassles of multicenter studies, and make ethics committee approval easier to obtain. They may require exact methods of statistical analysis, which can help in reaching more valid conclusions.
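
The link between controlled confounders and a smaller required sample follows from the standard two-group formula, n per group ≈ 2(z₁₋α/₂ + z₁₋β)²σ²/δ²: the requirement scales with the variance. The sketch below uses purely illustrative numbers (not from any cited study) to show how halving the spread of the outcome sharply reduces the n needed to detect the same difference.

```python
# Illustrative only: required n per group for comparing two means shrinks
# as the outcome variance is reduced by controlling confounders.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma ** 2 / delta ** 2)

# Same difference of 5 units, 80% power, two-sided 5% level (made-up figures)
print(n_per_group(delta=5, sigma=10))   # less controlled setting: 63 per group
print(n_per_group(delta=5, sigma=7))    # tightly controlled setting: 31 per group
```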

Among clinical studies, phase-I trials are done on small samples where the objective is to test toxicity. In other setups, Hansen and Fulton[26] carried out a study on four children with a history of mild retinopathy of prematurity (ROP) and four controls and concluded that there is evidence of peripheral rod photoreceptor involvement in subjects with ROP. Machado et al.[27] found severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) viral ribonucleic acid (RNA) in the semen of 1 out of 15 patients with this disease and considered it enough to raise an alert about a possible new mode of transmission. Hatchell et al.[28] studied six or fewer patients undergoing different reconstructions and concluded that vascularized nerve grafts for the facial nerve offer a practical and viable option for facial reconstruction with acceptable donor-site deficits. Most trials of surgical procedures are done on small samples because of the unavailability of many homogeneous cases, intra-operative variations, and the difficulty in obtaining patient consent for randomization in such trials.[29] A small sample has not impeded the progress of science in these disciplines.

No single study, whether based on a small sample or a large sample, is considered conclusive. A large number of small studies can be done easily in different setups, and if they point in the same direction, a safe, possibly more robust, conclusion can be drawn through a meta-analysis. Alvares et al.[30] combined the results of 26 small studies, with sample sizes of 8–30, by meta-analysis to assess the effect of dietary nitrate on muscular strength. They found a trivial but statistically significant effect of dietary nitrate ingestion on muscular strength with a combined sample of more than 500 subjects, although none of the individual studies reported any significant effect, possibly due to their small sample sizes.
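
The mechanism behind such pooling is inverse-variance weighting: each study's effect estimate is weighted by the reciprocal of its variance, so the combined estimate has a much smaller standard error than any single study. The sketch below is a minimal fixed-effect example with invented effect sizes (not the data of Alvares et al.), chosen so that no single study reaches significance while the pooled estimate does.

```python
# Minimal fixed-effect meta-analysis sketch with invented numbers:
# individually non-significant small studies give a precise pooled estimate.
from math import sqrt
from statistics import NormalDist

studies = [          # (effect estimate, standard error) for each small study
    (0.30, 0.25), (0.25, 0.28), (0.35, 0.30), (0.20, 0.22),
    (0.30, 0.26), (0.28, 0.24), (0.33, 0.29), (0.22, 0.27),
]

weights = [1 / se ** 2 for _, se in studies]                 # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))
z = pooled / pooled_se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}, p = {p_value:.3f}")
```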

Anderson and Vingrys[31] argued that small samples may be enough to show the presence of an effect but not to estimate the effect size. If the objective is only to show that an effect exists, the cost of a large sample can be avoided. An unrealized advantage of small studies is that only a relatively large effect can be statistically significant, and such a large effect may also be medically significant enough to change current practice. In addition, there is wide acceptance of the call to move beyond P < 0.05.[32] At the same time, detecting a small but medically significant effect can be important in some cases, and that brings in the question of studies based on a large sample with adequate power.
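
Anderson and Vingrys' distinction between demonstrating and estimating an effect can be quantified: the 95% confidence interval around an estimate narrows only with the square root of n, so a small sample that suffices to show an effect exists still leaves its size poorly pinned down. The figures below are illustrative only.

```python
# Illustrative only: half-width of the 95% CI for a difference of two means
# (common SD sigma, n per group) shrinks with the square root of n.
from math import sqrt
from statistics import NormalDist

def ci_half_width(sigma, n, conf=0.95):
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return z * sigma * sqrt(2 / n)      # standard error of a difference of two means

for n in (10, 40, 160, 640):            # quadrupling n only halves the CI width
    print(f"n = {n:4d} per group: 95% CI = estimate +/- {ci_half_width(10, n):.2f}")
```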


 :: Large n


Most medical studies are carried out in less-than-ideal conditions, primarily because ideal conditions simply do not exist in most medical setups. If there are many known and unknown confounders that can affect the outcome, a large sample is imperative to 'average out' their effect. This is tricky but is an underlying assumption in most medical studies, although it requires a random sample. Large-sample studies, including mega trials, are welcome if the data quality is assured. The second situation requiring a large sample is the need for a highly precise estimate of the effect size; a large sample handsomely improves the precision. However, the objective in most medical research is to be able to detect a medically significant effect (or not to miss an effect) when present, and this requires power calculations. The smaller the effect to be detected, the larger the required sample. With the advancement of science, small improvements may have become medically important, and a large sample is required to detect a small improvement. A large sample may also be required to study a rare event, particularly if it is highly variable. A study on methicillin-resistant Staphylococcus aureus (MRSA) positivity in general patients[33] is an example where a huge sample may be required. The identification of markers of Alzheimer's disease in its early phase[34] is another example where a large sample may be required because of wide variability. The efficacy of a vaccine is based on the difference in the incidence of the disease between the vaccinated and control groups; both these incidences may be small and the difference even smaller. Thus, a trial on a large sample is required, and a large sample is also necessary to identify rare side effects in this case. A large sample is also justified for multicenter studies and for studies that investigate several outcomes.
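
The vaccine example can be made concrete with the usual sample-size approximation for comparing two proportions. The incidence figures below are purely illustrative and are not taken from any particular trial; they simply show that when both incidences are small and their difference smaller still, the required sample per arm runs into the thousands.

```python
# Purely illustrative figures: n per arm to detect a difference between two
# small incidences with a two-sided 5% significance level and 90% power.
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.90):
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    num = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(num / (p1 - p2) ** 2)

# 0.6% incidence among controls vs. 0.3% among the vaccinated (hypothetical)
print(n_per_arm(p1=0.006, p2=0.003))   # about 10,500 subjects per arm
```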

At the same time, there are instances where an unnecessarily large sample was studied. We cited some examples earlier. Celik et al.[35] found that most randomized controlled trials (RCTs) on rheumatoid arthritis enroll more patients than needed. This is a needless exposure of patients to a regimen that is under trial.

Kaplan et al.[36] sounded a caution that big data could lead to big inferential errors and can magnify bias. This can happen due to carelessness in collecting data, inadequate resources for a large study that cause measurement errors, or unwittingly choosing a biased sample. They cite the example of pre-election opinion polls, which rarely provide correct results despite huge samples. Such surveys can rarely be done on a random or representative sample, and the responses received do not necessarily reflect actual voting. In medicine, this can happen with records-based studies and clinical trials when the sample is biased or the quality of data is compromised. The investigator may not be aware that some impropriety has happened or may carelessly ignore it. The article by Munyangi et al.[37] had to be retracted due to questionable data, in addition to ethical issues, despite being a large clinical trial. Discrepancies exist even among mega trials;[38] thus, large-scale trials too are not a guarantee of infallible results. Charlton[39] has discussed how typical mega trials recruit pathologically and prognostically heterogeneous subjects and lose validity. Mega trials generally require a multicenter approach, where adopting a common protocol is difficult because of the preferences of individual centers.[40] Heterogeneity in a series of small trials may even provide an advantage over a mega trial[41] when the trials point to the same conclusion.

On the other hand, there are studies with inadequate samples that failed to detect a medically significant improvement. Freiman et al.[42] reexamined 71 negative trials and observed that 50 of these had more than a 10% chance of missing a 50% therapeutic improvement because of the small sample size, and Dimick et al.[43] reported similar findings for surgical trials. Thus, a large sample may be required in certain situations.
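
Freiman et al.'s point can be illustrated with a simple power calculation: in a small two-arm trial, the probability of missing even a halving of the event rate can far exceed 10%. The event rates and arm size below are illustrative, not those of any trial in their survey.

```python
# Illustrative post-hoc power calculation (not the data of any surveyed trial):
# probability that a small two-arm trial misses a 50% relative improvement.
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p_control, p_treated, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided z-test for two independent proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    se = sqrt(p_control * (1 - p_control) / n_per_arm +
              p_treated * (1 - p_treated) / n_per_arm)
    z = abs(p_control - p_treated) / se
    return 1 - NormalDist().cdf(z_a - z)

# 40% event rate halved to 20%, with only 30 patients per arm (hypothetical)
power = power_two_proportions(0.40, 0.20, 30)
print(f"Power = {power:.2f}, chance of missing the effect = {1 - power:.2f}")
```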


 :: The case for small n


Concerns such as truthful research[44] and the effect of aleatory and epistemic uncertainties on the results[45] do not necessarily require a large sample. A big sample may be required where the variability is high, or where the event under study is rare and a precise estimate is required. Even then, a big sample does not ensure validity, as large studies tend to exercise less care in obtaining high-quality data. A large sample may not be needed for comparative studies that aim to detect a specified effect, provided they are adequately planned to control the effect of all known and unknown confounders, wherever they operate, on the pattern of a laboratory setup, except when a small effect is to be detected. Enrolling a large number of subjects can be expensive in many cases and can be avoided. The investigators should rather concentrate on the optimal design, accurate measurements, the right analysis, and correct interpretation for increased validity of the results, and not so much on the sample size. Validity is the key to truthful results, and this approach may be more cost-effective in many situations. When a particular hypothesis is to be disproved or a potential effect is to be demonstrated, a small sample, even n = 1, may be enough.

The present emphasis on large-scale studies is misplaced in many cases, particularly in analytical studies, where design and accurate data are more important. Journals should avoid giving high weight to studies merely because they have a large sample, and reviewers should rather focus on the design, the control of confounders, and the quality of the data.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
 :: References

1. Schnitzer TJ, Kong SX, Mitchell JH, Mavros P, Watson DJ, Pellissier JM, et al. An observational, retrospective, cohort study of dosing patterns for rofecoxib and celecoxib in the treatment of arthritis. Clin Ther 2003;25:3162-72.
2. Krishna MT, Subramanian A, Adderley NJ, Zemedikun DT, Gkoutos GV, Nirantharakumar K. Allergic diseases and long-term risk of autoimmune disorders: Longitudinal cohort study and cluster analysis. Eur Respir J 2019;54:1900476.
3. Roberts I, Shakur H, Coats T, Hunt B, Balogun E, Barnetson L, et al. The CRASH-2 trial: A randomised controlled trial and economic evaluation of the effects of tranexamic acid on death, vascular occlusive events and transfusion requirement in bleeding trauma patients. Health Technol Assess 2013;17:1-79.
4. Zhang T, Kho AM, Zawadzki RJ, Jonnal RS, Yiu G, Srinivasan VJ. Visible light OCT improves imaging through a highly scattering retinal pigment epithelial wall. Opt Lett 2020;45:5945-8.
5. Hughes JR, Levchenko VA, Blanksby SJ, Mitchell TW, Williams A, Truscott RJ. Correction: No turnover in lens lipids for the entire human lifespan. Elife 2015;4:e08186. Erratum for: Elife 2015;4.
6. Riedel S. Edward Jenner and the history of smallpox and vaccination. Proc (Bayl Univ Med Cent) 2005;18:21-5.
7. Fleming A. On the antibacterial action of cultures of a penicillium, with special reference to their use in the isolation of B. influenzae. 1929. Bull World Health Organ 2001;79:780-90.
8. Cooper DKC. Christiaan Barnard-The surgeon who dared: The story of the first human-to-human heart transplant. Glob Cardiol Sci Pract 2018;2018:11.
9. Olson JS. The History of Cancer: An Annotated Bibliography. New York: Greenwood Press; 1989. p. 153.
10. Harrington WJ, Minnich V, Hollingsworth JW, Moore CV. Demonstration of a thrombocytopenic factor in the blood of patients with thrombocytopenic purpura. 1951. J Lab Clin Med 1990;115:636-45.
11. Marshall B. The discovery that H. pylori, a spiral bacterium, caused peptic ulcer disease. In: Helicobacter Pioneers: Firsthand Accounts from the Scientists who Discovered Helicobacters, 1892-1982. Oxford: Blackwell; 2002. p. 165-202.
12. Morgentaler A. Why Men Fake It: The Totally Unexpected Truth About Men and Sex. Henry Holt and Co.; 2013. p. 22.
13. Weisse AB. Self-experimentation and its role in medical research. Tex Heart Inst J 2012;39:51-4.
14. Lillie EO, Patay B, Diamant J, Issell B, Topol EJ, Schork NJ. The n-of-1 clinical trial: The ultimate strategy for individualizing medicine? Per Med 2011;8:161-73.
15. Sedgwick P. What is an "n-of-1" trial? BMJ 2014;348:g2674.
16. Wood FA, Howard JP, Finegold JA, Nowbar AN, Thompson DM, Arnold AD, et al. N-of-1 trial of a statin, placebo, or no treatment to assess side effects. N Engl J Med 2020;383:2182-4.
17. Stunnenberg BC, Deinum J, Nijenhuis T, Huysmans F, van der Wilt GJ, van Engelen BG, et al. N-of-1 trials: Evidence-based clinical care or medical research that requires IRB approval? A practical flowchart based on an ethical framework. Healthcare (Basel) 2020;8:49.
18. Sauro J, Lewis JR. Small sample size: How precise are our estimates? In: Quantifying the User Experience. 2nd ed. Elsevier; 2016. Available from: https://www.sciencedirect.com/topics/computer-science/small-sample-size. [Last accessed on 2021 May 12].
19. Amudhan S, Gururaj G, Satishchandra P. Epilepsy in India I: Epidemiology and public health. Ann Indian Acad Neurol 2015;18:263-77.
20. Indrayan A, Malhotra RK. Medical Biostatistics. 4th ed. CRC Press; 2018. p. 357.
21. Jain S, Jain S, Kuriakose M. Small sample size limits the usefulness of anterior open bite study. Am J Orthod Dentofacial Orthop 2021;159:e203.
22. Konietschke F, Schwab K, Pauly M. Small sample sizes: A big data problem in high-dimensional data analysis. Stat Methods Med Res 2021;30:687-701.
23. Kapur S, Munafò M. Small sample sizes and a false economy for psychiatric clinical trials. JAMA Psychiatry 2019;76:676-7.
24. Gottlieb GJ, Ragaz A, Vogel JV, Friedman-Kien A, Rywlin AM, Weiner EA, et al. A preliminary communication on extensively disseminated Kaposi sarcoma in young homosexual men. Am J Dermatopathol 1981;3:111-4.
25. Centers for Disease Control and Prevention (CDC). Pneumocystis pneumonia--Los Angeles. 1981. MMWR Morb Mortal Wkly Rep 1996;45:729-33.
26. Hansen RM, Fulton AB. Background adaptation in children with a history of mild retinopathy of prematurity. Invest Ophthalmol Vis Sci 2000;41:320-4.
27. Machado B, Barcelos Barra G, Scherzer N, Massey J, Dos Santos Luz H, Henrique Jacomo R, et al. Presence of SARS-CoV-2 RNA in semen – cohort study in the United States COVID-19 positive patients. Infect Dis Rep 2021;13:96-101.
28. Hatchell AC, Chandarana SP, Matthews JL, McKenzie CD, Matthews TW, Hart RD, et al. Evaluating CNVII recovery after reconstruction with vascularized nerve grafts: A retrospective case series. Plast Reconstr Surg Glob Open 2021;9:e3374.
29. Ferreira LM. Surgical randomized controlled trials: Reflection of the difficulties. Acta Cir Bras 2004;19(Suppl 1):2-3.
30. Alvares TS, Oliveira GV, Volino-Souza M, Conte-Junior CA, Murias JM. Effect of dietary nitrate ingestion on muscular performance: A systematic review and meta-analysis of randomized controlled trials. Crit Rev Food Sci Nutr 2021:1-23. (in press)
31. Anderson AJ, Vingrys AJ. Small samples: Does size matter? Invest Ophthalmol Vis Sci 2001;42:1411-3.
32. Wasserstein RL, Schirm AL, Lazar NA. Moving to a world beyond "p<0.05". Am Stat 2019;73(Suppl 1):1-19.
33. Gieffers J, Ahuja A, Giemulla R. Long term observation of MRSA prevalence in a German rehabilitation center: Risk factors and variability of colonization rate. GMS Hyg Infect Control 2016;11:Doc21.
34. Xiong C, van Belle G, Chen K, Tian L, Luo J, Gao F, et al. Combining multiple markers to improve the longitudinal rate of progression - Application to clinical trials on the early stage of Alzheimer's disease. Stat Biopharm Res 2013;5:10.1080/19466315.2012.756662.
35. Celik S, Yazici Y, Yazici H. Are sample sizes of randomized clinical trials in rheumatoid arthritis too large? Eur J Clin Invest 2014;44:1034-44.
36. Kaplan RM, Chambers DA, Glasgow RE. Big data and large sample size: A cautionary note on the potential for bias. Clin Transl Sci 2014;7:342-6.
37. Munyangi J, Cornet-Vernet L, Idumbo M, Lu C, Lutgen P, Perronne C, et al. Effect of Artemisia annua and Artemisia afra tea infusions on schistosomiasis in a large clinical trial. Phytomedicine 2018;51:233-40.
38. Furukawa TA, Streiner DL, Hori S. Discrepancies among megatrials. J Clin Epidemiol 2000;53:1193-9.
39. Charlton BG. Fundamental deficiencies in the megatrial methodology. Curr Control Trials Cardiovasc Med 2001;2:2-7.
40. Pawlik TM, Sosa JA, editors. Success in Academic Surgery: Clinical Trials. Springer; 2014.
41. Shrier I, Platt RW, Steele RJ. Mega-trials vs. meta-analysis: Precision vs. heterogeneity? Contemp Clin Trials 2007;28:324-8.
42. Freiman JA, Chalmers TC, Smith H Jr, Kuebler RR. The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial. Survey of 71 "negative" trials. N Engl J Med 1978;299:690-4.
43. Dimick JB, Diener-West M, Lipsett PA. Negative results of randomized clinical trials published in the surgical literature: Equivalency or error? Arch Surg 2001;136:796-800.
44. Ioannidis JP. How to make more published research true. PLoS Med 2014;11:e1001747.
45. Indrayan A. Aleatory and epistemic uncertainties can completely derail medical research results. J Postgrad Med 2020;66:94-8.


