Journal of Postgraduate Medicine
 Open access journal indexed with Index Medicus & EMBASE  

Year : 2016  |  Volume : 62  |  Issue : 1  |  Page : 32-39  

Impact factor of medical education journals and recently developed indices: Can any of them support academic promotion criteria?

SA Azer1, A Holen2, I Wilson3, N Skokauskas4
1 Department of Medical Education, Curriculum and Research Unit, College of Medicine, King Saud University, Riyadh, Saudi Arabia
2 Faculty of Medicine, Norwegian University of Science and Technology (NTNU), St. Olav University Hospital, Trondheim, Norway
3 Graduate School of Medicine, University of Wollongong, New South Wales, Australia
4 Centre for Child and Adolescent Mental Health and Child Protection, Department of Neuroscience, Norwegian University of Science and Technology (NTNU), Trondheim, Norway

Correspondence Address:
S A Azer
Department of Medical Education, Curriculum and Research Unit, College of Medicine, King Saud University, Riyadh
Saudi Arabia


The Journal Impact Factor (JIF) has been used in assessing scientific journals. Other indices, the h- and g-indices and the Article Influence Score (AIS), have been developed to overcome some limitations of the JIF. The aims of this study were, first, to critically assess the use of the JIF and other parameters related to medical education research and, second, to discuss the capacity of these indices to assess research productivity and their utility in academic promotion. The JIF of 16 medical education journals from 2000 to 2011 was examined, together with the research evidence about the JIF in assessing the research outcomes of medical educators. The findings were discussed in light of the nonnumerical criteria often used in academic promotion. In conclusion, the JIF was not designed for assessing individual or group research performance, and it seems unsuitable for such purposes. Although the g- and h-indices have demonstrated promising outcomes, further development is needed before they can serve as academic promotion criteria. For top academic positions, additional criteria could include leadership, evidence of international impact, and contributions to the advancement of knowledge in medical education.

How to cite this article:
Azer S A, Holen A, Wilson I, Skokauskas N. Impact factor of medical education journals and recently developed indices: Can any of them support academic promotion criteria?.J Postgrad Med 2016;62:32-39

How to cite this URL:
Azer S A, Holen A, Wilson I, Skokauskas N. Impact factor of medical education journals and recently developed indices: Can any of them support academic promotion criteria?. J Postgrad Med [serial online] 2016 [cited 2021 Dec 1 ];62:32-39
Available from:

Full Text


Research is a fundamental aspect of academic life. It also represents an aspect of scholarship in medical education. Each month, approximately 60,000-65,000 new health-related research articles are published and indexed in the PubMed portal. [1] The quality of publications, however, varies even within a single journal: regardless of the Journal Impact Factor (JIF), some papers are not clearly written, have poorly described methods, or use tools of low validity and reliability. [2] In academia, there is thus a need for new indices that better define the quality of research publications.

Academic departments, research centers, and funding bodies are increasingly interested in ways to assess academics' research production and the quality of individuals' research outcomes. In most universities, promotion and tenure systems reward individual achievements using general citation-based journal rankings. Although the JIF is meant for ranking journals, several institutions let the ranking of the journals in which researchers publish influence academic career progression and the funding of grants. [3],[4],[5],[6]

Medical educators, like other academics, are under pressure to publish their work in top-ranking journals listed in the Science Citation Index (SCI), Social Sciences Citation Index (SSCI), and Journal Citation Reports (JCR). JIFs for the preceding year are published in the JCR each June. The JIF of a scientific journal for a given year is the number of citations received that year by the articles the journal published in the two preceding years, divided by the number of citable items it published in those same two years. [7],[8]
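This two-year ratio can be illustrated numerically (a sketch with invented figures, not Thomson Scientific's exact counting rules, which also depend on which item types are deemed "citable"):

```python
def journal_impact_factor(citations, citable_items):
    """Two-year JIF for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the citable items
    (articles and reviews) published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 900 citations in 2011 to its 2009-2010
# output of 300 citable items gives a 2011 JIF of 3.0.
print(journal_impact_factor(900, 300))  # 3.0
```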

In a competitive research environment, alternative citation tracking allows researchers and universities to:

1. Identify the number of times a paper has been cited, and
2. Trace the development of research concepts or ideas over time by tracking them backward and forward.

This would enable researchers to work on the quality of their research to match the standards required by top journals in their field. [9],[10],[11]

Several studies have examined journal rankings across disciplines, including nursing, [12],[13] nutrition, [14] public health, [15] neurosurgery, [16] dermatology, [17] forensic science and toxicology, [3] psychology, [18] orthopedics, [19] radiation oncology, [20] and medical informatics. [21] For medical education, however, no studies have assessed the impact factor or discussed possible new tools for citation analysis. In the same vein, the h- and g-indices and the Article Influence Score (AIS) have not been studied in relation to medical education. [22],[23],[24]

The first part of this paper reviews data sources and approaches for citation analysis. This knowledge is then applied to 16 medical education journals, with data for each tool gathered from Web of Science, in order to identify highly regarded medical education titles. We also examine the strengths and limitations of the JIF and of other indices: the h- and g-indices and the AIS.

The second part aims to assess whether any of these indices would add more evidence to support the policies and criteria of academic promotion and grant assessments and their current use in medical education.

 First Part: Assessing The JIF and Recently Developed Indices

Journal Impact Factor (JIF): A critical review

The JIF has emerged as a tool for ranking, evaluating, categorizing, and comparing scientific journals. [3],[8],[25] The Institute for Scientific Information (ISI), a component of Thomson Scientific, was behind this development.

A listing of journals' citations and their JIFs is made available by the ISI (Philadelphia, PA, USA), and it is also included in the JCR. A significant limitation of the JIF is that it rests on citation data from a single year, counting only citations to articles from the two previous years. [26] Given that the average paper is rarely cited in its first year after publication, a window of only 1-2 years post publication is likely to yield an unrepresentatively low snapshot of a journal's impact. Other researchers, however, have shown that the short-term citation impact measured in the window underlying the JIF is a good predictor of a journal's citation impact in the years to come. [6]

Another criticism of the JIF relates to its calculation: it depends on which article types Thomson Scientific deems "citable." A further limitation is that the quality of articles varies within a journal; the distribution of citations is highly skewed, with a few articles attracting the bulk of the citations. [27],[28],[29],[30] Therefore, publishing review articles (which usually acquire far more citations than research articles) or just a few very highly cited research papers can raise a journal's JIF. It has been shown that less than 20% of the articles published in a journal account for more than 50% of its total citations. Many articles are never cited at all, or are cited because some readers disagree with the authors. [24],[31],[32] Accordingly, a single publication cannot be judged by the JIF. Added to this is the bias that may arise from self-citations. [33] Moreover, the JIF may be misused or gamed by journals aiming to improve their impact factor. For example:

1. The journal may publish a larger percentage of review articles, which generally attract more citations than research articles;
2. The editor may set a submission policy making certain sections or articles "by invitation only," inviting exclusively senior scientists in the field to submit their work and ensuring that the published papers are citable;
3. The journal may decline to publish article types such as "case reports" in medical journals because they are unlikely to attract citations;
4. An "abstract" or "biography" may be disallowed for certain articles, so that Thomson Scientific does not count them as citable items, even though such articles may still attract citations and contribute to a rise in the JIF; and
5. The editor may publish accepted papers online about 4-6 months before they appear in print.

More on recently developed indices

To resolve the problems related to self-citations, the Eigenfactor™ Metrics were created by Carl Bergstrom, Jevin West, and Marc Wiseman at the Information School, University of Washington, Seattle, Washington, United States. [32],[33],[34],[35] The Eigenfactor Score is somewhat similar to a JIF but is corrected for the journal's self-citations: references from one article in a journal to another article published in the same journal are removed during the calculation.
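The self-citation correction can be illustrated with a simplified, PageRank-style sketch. This is not the official Eigenfactor algorithm (which additionally weights teleportation by an article vector and handles dangling journals); the citation matrix below is invented for illustration:

```python
import numpy as np

def eigenfactor_sketch(C, alpha=0.85, iters=200):
    """Illustrative sketch of the Eigenfactor idea.
    C[i][j] = citations from journal j's articles to journal i;
    the diagonal (journal self-citations) is discarded."""
    C = np.array(C, dtype=float)
    np.fill_diagonal(C, 0.0)        # remove journal self-citations
    col = C.sum(axis=0)
    col[col == 0] = 1.0             # guard against empty columns
    H = C / col                     # column-stochastic citation matrix
    n = H.shape[0]
    v = np.full(n, 1.0 / n)
    for _ in range(iters):          # PageRank-style power iteration
        v = alpha * (H @ v) + (1.0 - alpha) / n
    return 100.0 * v / v.sum()      # scores conventionally sum to 100

# Three hypothetical journals; the heavy diagonal entries
# (self-citations) contribute nothing to the scores.
scores = eigenfactor_sketch([[12, 6, 1],
                             [4, 20, 3],
                             [1, 2, 8]])
```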

Google Scholar and Scopus

Google Scholar was launched in 2004 as a gateway to scholarly literature. [36] The database is freely available and shows the number of citations of each paper along with details about the citing journals. However, its contents are not organized under subject headings, which makes it difficult to assess a researcher's publication outcomes, and it covers a broader range of sources than the JCR or Scopus, so nonjournal sources are included. Scopus, an indexing database built by Elsevier, was also launched in 2004. The database claims 4,600 health sciences titles and 100% coverage of MEDLINE/PubMed, Embase, and Compendex. More details about Scopus have been highlighted elsewhere. [12],[36],[37] However, neither Google Scholar nor Scopus has addressed the limitations of the JIF.

The h-index

In 2005, JE Hirsch proposed the h-index to assess the impact of an individual author; it has, however, been shown to be of little value for ranking journals. [22],[23],[36] To determine an author's h-index, the papers are ranked in decreasing order of citations received; the h-index is the largest number h of papers that have each received at least h citations. [22],[23] The h-index may have several advantages, as outlined in [Table 1]. However, the h-index is insensitive to the further success of papers already counted: whether such a paper receives 5, 50, or 500 additional citations, the index does not change. [23],[38]{Table 1}
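Hirsch's ranking procedure is straightforward to sketch in code (a minimal illustration of the definition, not a library implementation):

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four papers have
# at least 4 citations each, but not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```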

The g-index

Because of the limitations of the h-index, notably its insensitivity to highly cited articles, Egghe proposed the g-index. [23] The g-index is defined as the largest number g such that the g most highly cited papers together received at least g² citations. In other words, the more citations the top articles receive, the higher the g-index. [23]

To explain the differences between the h- and g-indices and the sensitivity of the latter to highly cited articles, consider two examples. Researcher A has published five articles with 5 citations each, and thus has an h-index of 5. Researcher B has also published five papers; four attracted 5 citations each, and the fifth attracted 15. The h-index for researcher B is likewise 5, while the g-index depends on the citations attracted by the best article. With 15 citations, the five papers together received 35 citations, one short of the 36 needed for g = 6, so the g-index stays at 5; had the best article attracted 25 or 50 citations, the g-index would rise to 6 or 8, respectively. The g-index is therefore more sensitive than the h-index in assessing a researcher's productivity, and far more accurate than the JIF for assessing individual researchers.
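Egghe's definition can be sketched similarly; following common practice, ranks beyond the author's actual paper count contribute zero citations:

```python
def g_index(citations):
    """g-index: the largest g such that the g most-cited papers
    together received at least g*g citations (Egghe, 2006)."""
    cites = sorted(citations, reverse=True)
    total, g, rank = 0, 0, 1
    while True:
        # ranks past the end of the list contribute zero citations
        total += cites[rank - 1] if rank <= len(cites) else 0
        if total >= rank * rank:
            g = rank
            rank += 1
        else:
            return g

# Five papers, four cited 5 times each; the g-index grows with
# the citation count of the single best paper.
print(g_index([15, 5, 5, 5, 5]))  # 5
print(g_index([25, 5, 5, 5, 5]))  # 6
print(g_index([50, 5, 5, 5, 5]))  # 8
```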

The Article Influence Score (AIS)

This index expresses the relative importance of a journal on a per-article basis. The AIS is obtained by dividing the Eigenfactor Score by the number of articles published in the journal, normalized so that the mean AIS over all journals is 1.0. It is roughly analogous to a 5-year JIF: the ratio of the journal's citation influence to the size of its article contribution over a period of 5 years. [39] [Table 1] summarizes key information, strengths, and weaknesses of the different metrics.
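The per-article normalization can be sketched as follows; the Eigenfactor scores and article counts are invented for illustration, and the rescaling is chosen so the article-weighted mean AIS across journals comes out at 1.0, matching the property described above:

```python
def article_influence_scores(eigenfactor, articles):
    """AIS sketch: Eigenfactor per article, rescaled so that the
    article-weighted mean score across all journals equals 1.0."""
    per_article = [e / a for e, a in zip(eigenfactor, articles)]
    mean = sum(eigenfactor) / sum(articles)  # article-weighted mean
    return [p / mean for p in per_article]

# Three hypothetical journals with Eigenfactor scores 40, 40, 20,
# publishing 100, 400, and 100 articles, respectively; the small
# first journal has the highest per-article influence.
scores = article_influence_scores([40, 40, 20], [100, 400, 100])
```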

 Second Part: Academic Promotion in Medical Education and Citation Indices

Academic promotion

For staff promotion, universities often count parameters such as:

1. The number of papers published in peer-reviewed journals;
2. The number of papers published in top-ranking journals [7] ;
3. The number of citations and citations per paper;
4. Other scholarly work, such as the number of patents, the number of graduate students supervised, conference papers at national and international levels, research books, book chapters, and monographs; and
5. The number of grants and research projects with the applicant as the principal researcher or associate investigator. [40]

Interestingly, academic promotion has received limited discussion in the literature, though it is documented extensively on university webpages. The existing literature criticizes the use of such bibliometrics in decision-making. Notably, this has prompted a debate within academic nursing [41] similar to that seen in medical education: The amount of research is limited, and the research methodology is highly diverse.

The wide use of JIF in academic appointments and promotions takes two forms: The "quality" of the journals in which the applicant is publishing and the "quality" of the papers as measured by the number of citations.

Citation indices and staff promotion

[Table 2] shows 16 highly regarded medical and allied health education journals with the JIF scores from 2000 to 2011. The total cites in 2011 under the category "Education, Scientific Discipline" were 42,997, and the Median Impact Factor was 0.902 for a total of 33 journals indexed under this category. Only 16 journals were selected for this study as the other journals covered other disciplines.{Table 2}

Interestingly, Advances in Health Sciences Education, indexed for the first time in 2003, has shown progressive increases in its JIF over the following years. Other journals, such as Teaching and Learning in Medicine, indexed since 2000, have failed to show significant improvement in their JIFs over the same period. The recently launched Anatomical Sciences Education, however, was indexed for the first time in 2010, with a JIF of 2.976.

The largest increase was found for Academic Medicine and Medical Education, whose JIF scores increased from 1.554 and 1.078 in 2000 to 3.524 and 3.176 in 2011, respectively. Two other journals with noteworthy performance were Advances in Health Sciences Education and Advances in Physiology Education. Although Medical Teacher has shown progressive increases in its JIF scores over the years, the improvement in the JIF values has been small.

[Table 3] shows that 10 journals indexed in 2011 had 5-year JIF scores ranging from 3.189 (Medical Education) to 0.600 (Journal of Biological Education). The correlation between the 2-year JIF and 5-year JIF for these journals was high (r = 0.89, P < 0.001), which is consistent with other studies. [25] {Table 3}

[Table 4] summarizes additional information about medical and allied health journals indexed in the JCR. For each journal, the table shows the number of citable articles and citable reviews in 2011 for 15 journals (no information available on Anatomical Sciences Education) as well as the number of references and the ratio of references to total citable items (articles and reviews). The number of citable reviews varied widely.{Table 4}

From [Table 4] it appears that the mean number of references in the citable articles varied widely. It ranged from a low of 16.6 (Biochemistry and Molecular Biology Education) to a high of 35.6 (Medical Education).

[Table 5] shows the ranking of medical and allied health education journals and the AIS of each journal. As is the case with JIF, only a few manuscripts enhance this score, while most manuscripts have not acquired a sufficient number of citations.{Table 5}


In this paper, JIF has been analyzed and compared with later developments in the use of citations for the evaluation of research quality in general, and the journals addressing medical education have been explored in some depth.

The introduction of the JIF was a major milestone. Today, however, the limitations of the JIF are clearly felt by many, [24],[31],[32],[34],[42] and there is a growing need for additional, more sophisticated tools at all stages of scientific endeavor to optimize future success in research funding and academic recruiting. The development of medical education is ever more guided by research, [43],[44] but so far, no citation analysis comparing the JIF with the AIS and the h- and g-indices has been made. The ranking of medical education journals will probably fill an information gap within the health sciences. In this analysis, a number of well-regarded medical and allied health journals listed in the JCR have been selected, analyzed, and compared.

From the analyses of the citation indices, it emerges clearly that the current use of the JIF does not serve academic interests well; the discrepancy between journal ranking and author ranking can be considerable and unjustifiable. Moreover, the JIF is biased in favor of fields with a rapid turnover of publications. It lacks the sensitivity and specificity to adequately meet the current needs and expectations of the academic community across research fields.

Accordingly, when deciding on the funding of individual researchers or groups, or on academic promotions, the use of the h- and g-indices together with the AIS is more likely to result in better assessments. The San Francisco Declaration on Research Assessment (DORA) recommends that the JIF should not be used as a surrogate measure of the quality of an individual research article. [45]

Another important issue is the growing realization that JIFs are biased toward certain fields of research. The JIF strongly favors high-profile disciplines with a rapid cycle of discoveries and publication turnover, such as molecular biology and biochemistry, which does injustice to lower-profile disciplines such as health education, nursing, and midwifery. [46] This speed of turnover makes it difficult for medical educators to compete with colleagues from some other disciplines. It is also telling that the highest impact factors for journals covering medicine, biochemistry and molecular biology, biochemical research methods, and biology are 53.298, 34.317, 19.276, and 11.452, respectively, while the highest impact factor among medical education journals is only 3.524 (for Academic Medicine).

Furthermore, the numbers of journals indexed in the JCR in the areas of medicine (general and internal), biochemistry and molecular biology, biochemical research methods, and biology are 155, 200, 72, and 85, respectively, while only 14 journals are indexed under medical education, plus one for dentistry education and one for pharmacy and pharmaceutical education. This leaves limited opportunities for medical and allied health educators to publish their work in high-impact journals. As an example, consider a medical educator who publishes an article in Academic Medicine, a journal with a JIF of 3.524, while a colleague from the Department of Medicine at the same institute publishes in Annals of Medicine, a journal with a JIF of 3.516. Both journals have nearly the same JIF, but Academic Medicine is the top journal in medical education, while Annals of Medicine is ranked #19 in its own field. This major difference is entirely ignored if only the JIF is considered in the academic assessment of research outcomes.

Nevertheless, better indices would provide vital support in decision-making on research funding, recruitment, and improved teaching in the competitive environment of academia. A change in the current use of citation indices would sharpen the competition in healthy ways. More importantly, it is likely to enable better decisions and more fairness in assessing the publication output of individuals and research groups across disciplines and methodologies. In addition to these metrics, a battery of other indices should form the basis for academic promotion, particularly for top positions, including the following:

1. Invitations to speak internationally about research;
2. A sustained record of being the principal investigator in funded research;
3. Service as an editor and/or editorial board member of medical education and scientific journals, and years as a peer reviewer for top international journals in the field;
4. Leadership roles on national and international committees of major medical education societies and at major conferences on medical education;
5. Prestigious national and international awards for research and innovations in medical education;
6. Leadership in international research collaboration and publication as principal investigator; and
7. Leadership and accumulated achievements in specific areas of medical education.

Each of these indices could be standardized by a numerical system. For example, invitations as a keynote speaker may be evaluated by using the following scoring system: 0 = not invited, 1 = invited to speak in a meeting held within their own university, 2 = invited to speak at a national conference, 3 = invited to speak at an international university ranked lower than their own, 4 = invited to speak at an international university ranked higher than their own, 5 = invited to speak at a major international conference. Indices such as these could enhance assessment for academic promotion.
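Such a rubric is easy to code. In the sketch below, the category names and the choice to score a candidate by the single strongest invitation are illustrative assumptions, not part of the authors' proposal:

```python
# Scoring rubric for keynote invitations, as described above.
KEYNOTE_SCORES = {
    "not_invited": 0,
    "own_university_meeting": 1,
    "national_conference": 2,
    "lower_ranked_international_university": 3,
    "higher_ranked_international_university": 4,
    "major_international_conference": 5,
}

def keynote_score(invitations):
    """Score a candidate by their strongest invitation (an
    illustrative aggregation choice, not the authors' rule)."""
    return max((KEYNOTE_SCORES[i] for i in invitations), default=0)

print(keynote_score(["national_conference",
                     "major_international_conference"]))  # 5
```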


Given the need for tighter links between research quality and funding as well as recruitment practices, it is time to revise scientific evaluation within medical teaching as well; institutional decisions should preferably be evidence-based and favor individuals with solid scientific merit rather than be driven by coincidental or ideological motives. In the absence of better tools, rough approximations of scientific quality were derived from the JIF in the past. Although the AIS and the g- and h-indices have shown promising outcomes, further development is needed. Other key indices, particularly for top academic positions, should also be considered.

Financial support and sponsorship

This work was funded by the College of Medicine Research Center, Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia.

Conflicts of interest

The authors declare that they have no conflict of interest and that the whole manuscript has been created by the authors.


1Neylon C, Wu S. Article-level metrics and the evolution of scientific impact. PLoS Biol 2009;7:e1000242.
2Bradford SC. Documentations. 2nd ed. London: Crosby Lockwood; 1953.
3Jones AW. Impact factors of forensic science and toxicology journals: What do the numbers really mean? Forensic Sci Int 2003;133:1-8.
4Burke D, Phillips LH 2nd. Is the "impact factor" a valid measure of the impact of research published in Clinical Neurophysiology and Muscle & Nerve? Clin Neurophysiol 2012;123:1687-90.
5Wakefield R. Networks of accounting research: A criterion-based structural and network analysis. Br Account Rev 2008;40:228-44.
6van Leeuwen T. Discussing some basic critique on Journal Impact Factors: Revision of earlier comments. Scientometrics 2012;92:443-55.
7Garfield E. All sorts of authorship. Nature 1997;389:777.
8Garfield E. Which medical journals have the greatest impact? Ann Intern Med 1986;105:313-20.
9Fritzsche FR, Oelrich B, Dietel M, Jung K, Kristiansen G. European and US publications in the 50 highest ranking pathology journals from 2000 to 2006. J Clin Pathol 2008;61:474-81.
10Halpenny D, Burke J, McNeill G, Snow A, Torreggiani WC. Geographic origin of publications in radiological journals as a function of GDP and percentage of GDP spent on research. Acad Radiol 2010;17:768-71.
11Nigam A, Nigam PK. Citation index and impact factor. Indian J Dermatol Venereol Leprol 2012;78:511-6.
12De Groote SL, Raszewski R. Coverage of Google Scholar, Scopus, and Web of Science: A case study of the h-index in nursing. Nurs Outlook 2012;60:391-400.
13Johnstone MJ. Journal impact factors: Implications for the nursing profession. Int Nurs Rev 2007;54:35-40.
14Jani N, Keshteli AH, Kabiri P, Esmaillzadeh A. A 10-year performance trajectory of top nutrition journals' impact factors. J Res Med Sci 2012;17:128-32.
15Derrick GE, Haynes A, Chapman S, Hall WD. The association between four citation metrics and peer rankings of research influence of Australian researchers in six fields of public health. PLoS One 2011;6:e18521.
16Ponce FA, Lozano AM. Academic impact and rankings of American and Canadian neurosurgical departments as assessed using the h index. J Neurosurg 2010;113:447-57.
17Dellavalle RP, Schilling LM, Rodriguez MA, Van de Sompel H, Bollen J. Refining dermatology journal impact factors using PageRank. J Am Acad Dermatol 2007;57:116-9.
18Cho KW, Tse CS, Neely JH. Citation rates for experimental psychology articles published between 1950 and 2004: Top-cited articles in behavioral cognitive psychology. Mem Cognit 2012;40:1132-61.
19Siebelt M, Siebelt T, Pilot P, Bloem RM, Bhandari M, Poolman RW. Citation analysis of orthopaedic literature; 18 major orthopaedic journals compared for Impact Factor and SCImago. BMC Musculoskelet Disord 2010;11:4.
20Choi M, Fuller CD, Thomas CR Jr. Estimation of citation-based scholarly activity among radiation oncology faculty at domestic residency-training institutions: 1996-2007. Int J Radiat Oncol Biol Phys 2009;74:172-8.
21Vishwanatham R. Citation analysis in journal rankings: Medical informatics in the library and information science literature. Bull Med Libr Assoc 1998;86:518-22.
22Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A 2005;102:16569-72.
23Egghe L. Theory and practice of the g-index. Scientometrics 2006;69:131-52.
24Green JB. Limiting the impact of the impact factor. Science 2008;322:1463.
25Garfield E. The history and meaning of the journal impact factor. JAMA 2006;295:90-3.
26Garfield E. The Use of JCR and JPI in Measuring Short and Long Term Journal Impact. Presented at Council of Scientific Editors Annual Meeting. Available from: [Last accessed on 2000 May 9].
27Franco G. Research evaluation and competition for academic positions in occupational medicine. Arch Environ Occup Health 2013;68:123-7.
28Weale AR, Bailey M, Lear PA. The level of non-citation of articles within a journal as a measure of quality: A comparison to the impact factor. BMC Med Res Methodol 2004;4:14.
29Moed HF, Van Leeuwen T, Reedijk J. A critical analysis of the journal impact factors of 'Angewandte Chemie' and 'The Journal of the American Chemical Society'. Inaccuracies in published impact factors based on overall citations only. Scientometrics 1996;37:105-16.
30Moed HF. New developments in the use of citation analysis in research evaluation. Arch Immunol Ther Exp (Warsz) 2009;57:13-8.
31Rossner M, Van Epps H, Hill E. Show me the data. J Exp Med 2007;204:3052-3.
32Brumback RA. "3..2..1.. Impact [factor]: Target [academic career] destroyed!": Just another statistical casualty. J Child Neurol 2012;27:1565-76.
33Kulkarni AV, Aziz B, Shams I, Busse JW. Author self-citation in the general medicine literature. PLoS One 2011;6:e20885.
34Tse H. A possible way out of the impact-factor game. Nature 2008;454:938-9.
35Bergstrom CT, West JD, Wiseman MA. The Eigenfactor metrics. J Neurosci 2008;28:11433-4.
36Bar-Ilan J. Which h-index? A comparison of WoS, Scopus and Google Scholar. Scientometrics 2008;74:257-71.
37Burnham JF. Scopus database: A review. Biomed Digit Libr 2006;3:1.
38Abbas AM. Bounds and inequalities relating h-index, g-index, e-index and generalized impact factor: An improvement over existing models. PLoS One 2012;7:e33699.
39Swaan PW. Science beyond impact factors. Pharm Res 2009;26:743-5.
40Thomas PA, Diener-West M, Canto MI, Martin DR, Post WS, Streiff MB. Results of an academic promotion and career path survey of faculty at the Johns Hopkins University School of Medicine. Acad Med 2004;79:258-64.
41Smith KM, Crookes PA, Else F, Crookes E. Scholarship reconsidered: Implications for reward and recognition of academic staff in schools of nursing and beyond. J Nurs Manag 2012;20:144-51.
42Patel VM, Ashrafian H, Bornmann L, Mutz R, Makanjuola J, Skapinakis P, et al. Enhancing the h index for the objective assessment of healthcare researcher performance and impact. J R Soc Med 2013;106:19-29.
43Sangwal K. On the relationship between citations of publication output and Hirsch index h of authors: Conceptualization of tapered Hirsch index h(T), circular citation area radius R and citation acceleration a. Scientometrics 2012;93:987-1004.
44Reznik M, Ozuah PO. Trends in study designs in pediatric medical education research, 1992-2011. J Pediatr 2013;162:222-3.
45Cagan R. The San Francisco declaration on research assessment. Dis Model Mech 2013;6:869-70.
46Coleman R. Impact factors: Use and abuse in biomedical research. Anat Rec 1999;257:54-7.
