DOI: 10.1055/s-0036-1581111
Publication Productivity for Academic Ophthalmologists and Academic Ophthalmology Departments in the United States: An Analytical Report
Publication History
02 November 2015
29 January 2016
Publication Date: 31 March 2016 (online)
Abstract
Purpose Quantifying scholarly output for academic ophthalmologists and academic ophthalmology departments provides a benchmark for academic productivity, offering information about how well an academic department facilitates the scholarly activity of its faculty. Bibliometrics is a statistical method to analyze scientific literature. Among benchmarking methods, the h-index has been the most widely accepted. The h-index samples a researcher's publication quantity while controlling for quality through citation count. The m-quotient adjusts the h-index according to the number of years since the first peer-reviewed publication, allowing for productivity assessments independent of career length. This study utilizes bibliometrics to create profiles for academic ophthalmology in the United States.
Methods Bibliometric profiles were created for 2,824 ophthalmologists from 110 nonmilitary departments. Profiles included the h-index and m-quotient calculated from an online citation database. Comparisons between academic rank, gender, region, and subspecialty were performed. Departments were ranked by the summation as well as the mean of h-indices for each faculty member.
Results The mean h-index and m-quotient were 10.56 ± 11.96 and 0.52 ± 0.44, respectively. Both of these values exhibited a positive relationship with increasing academic rank (p < 0.001). Faculty with subspecialties in ocular oncology, pathology, vitreoretinal disease, neuro-ophthalmology, and uveitis had higher mean h-indices than those in cornea and external disease, glaucoma, pediatrics, oculoplastics, anterior segment, and comprehensive ophthalmology. Males (n = 1,989) demonstrated a significantly higher mean h-index than females (n = 835), 12.12 ± 12.66 versus 6.84 ± 9.07. This difference was still significant after correcting for academic rank (p < 0.001). However, there was no significant difference in m-quotients between genders (p = 0.955). Ranked by summed h-indices, the top five programs for publication productivity in the United States in descending order were Massachusetts Eye and Ear Infirmary, University of Miami, Thomas Jefferson University, Johns Hopkins University, and the University of Wisconsin.
Conclusion This report benchmarks the publication productivity of academic ophthalmologists and academic ophthalmology departments in the United States. These results may inform program development in academic ophthalmology departments and may be useful to prospective trainees and faculty.
Keywords
bibliometrics - h-index - m-quotient - academic ophthalmology - scholarly impact - departmental rank - Scopus

Bibliometrics is defined as the statistical and mathematical method used to quantitatively analyze scientific publications. Peer-reviewed publications are important for securing grant funding for academic ophthalmologists and their departments, for career development and tenure/promotion, and for judging the overall success of an academic department.[1] [2] [3] [4] [5] [6] However, determining the output and impact of research for an individual or a department is often highly subjective and driven by reputation rather than by quantitative analysis and data.
Measures of productivity and the impact of scientific publications have been established, but none have been more widely accepted than the h-index.[7] First introduced in 2005 by physicist J. E. Hirsch, the h-index is defined as an author's number of papers, h, that have been cited at least h times in peer-reviewed literature.[8] The h-index measures publication quantity while accounting for quality through citation count. Hirsch also described another parameter, the m-quotient, defined as the h-index divided by the number of years since the author's first publication.[8] The m-quotient is useful when comparing younger researchers to their more seasoned counterparts.
The h-index corresponds to the point at which an author's publication rank equals the citation count when papers are listed in decreasing order of citations. To illustrate how the h-index works, consider two researchers in the same field, X and Y. Suppose X has 100 peer-reviewed articles and 10 of those have been cited 10 or more times in the literature; X's h-index would be 10. Y has published 50 articles, 20 of which have been cited 20 times or more, yielding an h-index of 20. Who has the larger influence in that scientific field? The h-index would argue that Y has a greater impact despite having half the number of publications, because Y's work was considered significant to subsequent studies as measured by citations.
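To make the definition concrete, the h-index can be computed directly from a list of per-paper citation counts. The following Python sketch is ours (not part of the original study) and reproduces the X versus Y comparison above using hypothetical citation lists.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank          # this paper is still inside the h-core
        else:
            break             # citations fall below rank; h is final
    return h

# Hypothetical citation lists mirroring researchers X and Y from the text:
# X has 100 papers, 10 of them cited 10 or more times;
# Y has 50 papers, 20 of them cited 20 or more times.
x_citations = [10] * 10 + [1] * 90
y_citations = [20] * 20 + [2] * 30

print(h_index(x_citations))  # 10
print(h_index(y_citations))  # 20
```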
Since its description, the h-index has been applied to analyze the fields of several medical and surgical specialties, including anesthesiology,[9] hepatology,[10] neurosurgery,[11] [12] [13] [14] [15] otolaryngology,[16] radiation oncology,[17] radiology,[18] surgery,[19] and urology.[20] Academic ophthalmology has utilized the h-index with recent publications concerning National Institutes of Health (NIH) funding,[21] gender differences within NIH funding,[22] and fellowship training.[23] The purpose of this study was to describe academic productivity within ophthalmology by measuring the h-index and m-quotient for 2,824 ophthalmologists and all nonmilitary departments (n = 110).
Methods
A listing of the 2014 ophthalmology residency-training programs was obtained from the Accreditation Council for Graduate Medical Education (http://www.acgme.org/ads/Public/Reports/ReportRun?ReportId=1&CurrentYear=2014&SpecialtyId=41&IncludePreAccreditation=true&IncludePreAccreditation=false). A total of 110 nonmilitary departments were identified. Departmental Web sites were consulted for the names, academic ranks, gender, and subspecialties of each faculty member. Full- and part-time residency-trained academic ophthalmologists were included. Nonophthalmologist faculty such as opticians, optometrists, non-MD PhDs, neurologists, and pathologists were excluded. If detailed information could not be obtained from the department's Web site, the department was contacted via email or telephone. With one exception, the subspecialties included in this study were those with fellowships listed by the San Francisco Ophthalmology Fellowship Match: anterior segment, cornea and external disease, glaucoma, neuro-ophthalmology, ophthalmic pathology, ophthalmic plastic surgery, pediatric ophthalmology, uveitis and ocular immunology, and vitreoretinal diseases. Ocular oncology was not listed by the San Francisco Ophthalmology Fellowship Match, but was included in our study.
Programs were grouped by region according to the U.S. Census Bureau (http://www2.census.gov/geo/pdfs/maps-data/maps/reference/us_regdiv.pdf). These include the following: (1) northeast: CT, MA, ME, NH, NJ, NY, PA, RI, and VT; (2) midwest: IA, IL, IN, KS, MI, MN, MO, ND, NE, OH, SD, and WI; (3) south: AL, AR, DC, DE, FL, GA, KY, LA, MD, MS, NC, OK, SC, TN, TX, VA, and WV; and (4) west: AK, AZ, CA, CO, HI, ID, MT, NM, NV, OR, UT, WA, and WY.
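For reproducibility, the regional grouping above can be encoded as a simple lookup table. The Python sketch below is our own illustration of that assignment and is not taken from the study's materials.

```python
# U.S. Census Bureau regions as listed above (DC grouped with the south).
REGIONS = {
    "northeast": ["CT", "MA", "ME", "NH", "NJ", "NY", "PA", "RI", "VT"],
    "midwest": ["IA", "IL", "IN", "KS", "MI", "MN", "MO", "ND", "NE", "OH", "SD", "WI"],
    "south": ["AL", "AR", "DC", "DE", "FL", "GA", "KY", "LA", "MD", "MS", "NC",
              "OK", "SC", "TN", "TX", "VA", "WV"],
    "west": ["AK", "AZ", "CA", "CO", "HI", "ID", "MT", "NM", "NV", "OR", "UT", "WA", "WY"],
}

# Invert to a state -> region lookup for tagging each residency program.
STATE_TO_REGION = {state: region for region, states in REGIONS.items() for state in states}

print(STATE_TO_REGION["WI"])  # midwest
```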
The h-index is defined as an individual's number of papers, h, with at least h citations. The m-quotient is the h-index divided by the number of years since the author's first publication. H-indices and m-quotients were obtained from the citation database Scopus (Elsevier, http://www.scopus.com). Scopus has previously been used in this manner and correlates strongly with other citation databases such as Google Scholar and Thomson Reuters' Web of Science.[11] [15] Scopus also has unique author-identification capabilities, making it possible to cross-check departments and ophthalmological publications for further accuracy. In the event that a researcher was not easily identified in Scopus, efforts were made to identify him or her by analyzing all researchers of the same name. Data collection took place between May 2015 and June 2015, and calculations were completed in June 2015.
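As a small illustration of the definitions above (ours, not the authors' code), the m-quotient follows directly from an author's h-index and the year of the first publication; the calculation year of 2015 matches the data-collection window.

```python
def m_quotient(h_index, first_publication_year, calculation_year=2015):
    """h-index divided by the number of years since the author's first publication."""
    years_active = calculation_year - first_publication_year
    if years_active <= 0:
        return float(h_index)  # first paper published in the calculation year; avoid dividing by zero
    return h_index / years_active

# Hypothetical example: an h-index of 12 and a first paper published in 1995.
print(m_quotient(12, 1995))  # 0.6
```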
Statistical Analysis
The Kruskal–Wallis one-way analysis of variance (ANOVA) was used for comparisons of continuous variables across more than two groups. The Mann–Whitney U test was used for comparisons between two groups. Gender comparisons were corrected for academic rank using a two-way ANOVA. The threshold for statistical significance was set at p < 0.05, and values are presented as mean ± SD. All data were analyzed using SPSS software (IBM SPSS Statistics for Windows, Version 21.0; IBM Corp., Armonk, NY).
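The analyses were performed in SPSS. For readers who wish to run comparable tests with open-source tools, the sketch below shows equivalent calls in Python using SciPy and statsmodels on a small hypothetical data set; it illustrates the same test choices but is not the authors' SPSS workflow.

```python
import pandas as pd
from scipy.stats import kruskal, mannwhitneyu
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical faculty-level data: one row per academic ophthalmologist.
df = pd.DataFrame({
    "h_index": [21, 19, 9, 5, 4, 12, 7, 3, 15, 6],
    "rank": ["chair", "professor", "associate", "assistant", "instructor",
             "professor", "associate", "assistant", "chair", "instructor"],
    "gender": ["M", "M", "F", "F", "M", "F", "M", "F", "M", "F"],
})

# Kruskal-Wallis one-way ANOVA across academic ranks (more than two groups).
rank_groups = [group["h_index"].values for _, group in df.groupby("rank")]
print(kruskal(*rank_groups))

# Mann-Whitney U test between two groups (male versus female h-index).
males = df.loc[df["gender"] == "M", "h_index"]
females = df.loc[df["gender"] == "F", "h_index"]
print(mannwhitneyu(males, females, alternative="two-sided"))

# Two-way ANOVA: gender effect on h-index after correcting for academic rank.
model = smf.ols("h_index ~ C(gender) + C(rank)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```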
Results
Bibliometric Profiles for Academic Ophthalmology
Data were obtained from 110 departmental Web sites and included 2,824 academic ophthalmologists. The h-index and m-quotient were available for 2,491 (88.2%) of the 2,824 individuals. The mean h-index was 10.56 ± 11.96 with a median of 6 and a range from 0 to 127. The mean m-quotient was 0.52 ± 0.44 with a median of 0.40 and a range from 0 to 3.53. [Table 1] describes h-indices and m-quotients by academic rank, gender, fellowship training, subspecialty, and region. A total of 110 chairs, 607 professors, 494 associate professors, 960 assistant professors, and 265 instructors were reviewed. Both the h-index and the m-quotient increased significantly with increasing academic rank (Kruskal–Wallis, p < 0.001) ([Figs. 1] and [2]).
Table 1 Bibliometric profiles of academic ophthalmologists by academic rank, gender, fellowship training, subspecialty, and region

| Variable | Frequency (%) | H-index[a] | p-Value | M-quotient[a] | p-Value |
|---|---|---|---|---|---|
| Overall | 2,491 (88%) | 10.56 ± 11.96 (6, [0–127]) | – | 0.52 ± 0.44 (0.40, [0–3.53]) | – |
| Academic rank | | | | | |
| Chairman | 110 (5%) | 21.60 ± 13.7 (19, [0–74]) | <0.001 | 0.78 ± 0.46 (0.72, [0–2.11]) | <0.001 |
| Professor | 596 (27%) | 21.2 ± 15.3 (19, [0–127]) | | 0.72 ± 0.48 (0.62, [0–3.53]) | |
| Associate | 467 (21%) | 9.70 ± 7.36 (8, [0–69]) | | 0.54 ± 0.41 (0.54, [0–2.38]) | |
| Assistant | 828 (38%) | 4.69 ± 4.82 (3, [0–45]) | | 0.44 ± 0.38 (0.33, [0–2.83]) | |
| Instructor | 205 (9%) | 4.43 ± 5.47 (3, [0–41]) | | 0.37 ± 0.36 (0.25, [0–2.0]) | |
| Gender | | | | | |
| Male | 1,757 (71%) | 12.12 ± 12.66 (8, [0–127]) | <0.001[b] | 0.54 ± 0.44 (0.43, [0–3.53]) | 0.955[b] |
| Female | 734 (29%) | 6.84 ± 9.07 (4, [0–117]) | | 0.48 ± 0.43 (0.35, [0–3.44]) | |
| Fellowship training | | | | | |
| More than one | 197 (8%) | 14.47 ± 11.76 (11, [0–61]) | <0.001 | 0.69 ± 0.41 (0.61, [0–2.20]) | <0.001 |
| One | 2,011 (81%) | 10.96 ± 12.20 (7, [0–127]) | | 0.54 ± 0.44 (0.43, [0–3.53]) | |
| None | 283 (11%) | 5.07 ± 7.92 (2.0, [0–56]) | | 0.32 ± 0.41 (0.19, [0–3.44]) | |
| Subspecialty | | | | | |
| Ocular oncology | 20 (%) | 19.95 ± 22.12 (10, [0–75]) | <0.001 | 0.87 ± 0.66 (0.70, [0–2.31]) | <0.001 |
| Pathology | 50 (%) | 16.90 ± 14.06 (13, [0–66]) | | 0.62 ± 0.44 (0.51, [0–1.73]) | |
| Vitreoretinal | 595 (%) | 13.32 ± 13.90 (9, [0–127]) | | 0.64 ± 0.49 (0.54, [0–3.53]) | |
| Neuro-ophthalmology | 126 (%) | 13.13 ± 11.63 (9.5, [0–46]) | | 0.53 ± 0.37 (0.49, [0–1.66]) | |
| Uveitis | 45 (%) | 12.73 ± 13.58 (8, [0–51]) | | 0.69 ± 0.46 (0.72, [0–1.75]) | |
| Cornea and external | 417 (%) | 10.87 ± 11.05 (7, [0–65]) | | 0.54 ± 0.43 (0.44, [0–2.33]) | |
| Glaucoma | 366 (%) | 10.84 ± 13.25 (6, [0–117]) | | 0.55 ± 0.46 (0.40, [0–3.00]) | |
| Pediatrics | 595 (%) | 9.02 ± 9.45 (5, [0–62]) | | 0.48 ± 0.34 (0.38, [0–2.83]) | |
| Oculoplastics | 221 (%) | 7.91 ± 7.10 (6, [0–34]) | | 0.44 ± 0.30 (0.39, [0–1.81]) | |
| Anterior segment | 41 (%) | 7.61 ± 9.90 (4, [0–38]) | | 0.48 ± 0.47 (0.27, [0–2.14]) | |
| Comprehensive | 284 (%) | 5.05 ± 7.90 (2, [0–56]) | | 0.32 ± 0.42 (0.19, [0–3.44]) | |
| Region | | | | | |
| West | 371 (15%) | 12.49 ± 12.50 (8, [0–74]) | 0.001 | 0.61 ± 0.45 (0.50, [0–3.0]) | <0.001 |
| Midwest | 616 (25%) | 11.23 ± 13.0 (7, [0–127]) | | 0.52 ± 0.44 (0.43, [0–3.53]) | |
| Northeast | 730 (29%) | 9.92 ± 11.62 (5, [0–76]) | | 0.50 ± 0.45 (0.37, [0–2.83]) | |
| South | 774 (31%) | 9.72 ± 10.98 (6, [0–84]) | | 0.50 ± 0.42 (0.37, [0–2.45]) | |
Note: All p-values represent Kruskal–Wallis comparisons among each subgroup.
a All results reported as mean ± SD (median, [range]).
b Mann–Whitney test; two-way ANOVA used to correct for academic rank.
There were 1,989 males (70.4%) and 835 females (29.6%) represented. The mean h-index for males was 12.12 ± 12.66 with a median of 8 and a range from 0 to 127, and differed significantly from the mean female h-index of 6.84 ± 9.07 with a median of 4 and a range from 0 to 117 (Mann–Whitney, p < 0.001). When corrected for academic rank, the difference in h-indices remained significant (two-way ANOVA, p < 0.001) ([Fig. 3]). The mean m-quotient was 0.54 ± 0.44 with a median of 0.43 and a range from 0 to 3.53 for males and 0.48 ± 0.43 with a median of 0.35 and a range from 0 to 3.44 for females. Although this difference appears significant (Mann–Whitney, p = 0.001), it was no longer significant after correcting for academic rank (two-way ANOVA, p = 0.955). Females held fewer positions at each academic rank; specifically, 8 of the 110 chair positions and 92 of the 607 professor positions were held by females ([Table 2]).
Table 2 Number of male and female academic ophthalmologists by academic rank

| | Instructor | Assistant | Associate | Professor | Chairman |
|---|---|---|---|---|---|
| Male | 158 | 570 | 343 | 515 | 102 |
| Female | 107 | 390 | 151 | 92 | 8 |
There was a significant increase in h-index and m-quotient among those with fellowship training (Kruskal–Wallis, p < 0.001) ([Table 1]). Those without fellowship training (n = 283) had a mean h-index of 5.07 ± 7.92 with a median of 2.0 and a range from 0 to 56 and a mean m-quotient of 0.32 ± 0.41 with a median of 0.19 and a range from 0 to 3.44. Ophthalmologists completing one fellowship (n = 2,011) had a mean h-index of 10.96 ± 12.20 with a median of 7 and a range from 0 to 127 and a mean m-quotient of 0.54 ± 0.44 with a median of 0.43 and a range from 0 to 3.53. Those completing more than one fellowship (n = 197) had a mean h-index of 14.47 ± 11.76 with a median of 11 and a range from 0 to 61, and a mean m-quotient of 0.69 ± 0.41 with a median of 0.61 and a range from 0 to 2.20. There was a significant difference between mean h-indices and m-quotients when comparing physicians with more than one fellowship to those completing only one fellowship (Mann–Whitney, p < 0.001).
Cornea and external disease and vitreoretinal disease had the largest numbers of faculty by subspecialty ([Fig. 4]). H-indices and m-quotients differed significantly among the subspecialties of academic ophthalmology (Kruskal–Wallis, p < 0.001). Ocular oncology had the highest mean h-index, followed by ocular pathology, vitreoretinal disease, neuro-ophthalmology, uveitis and ocular immunology, cornea and external disease, glaucoma, pediatrics, oculoplastics, anterior segment, and comprehensive ophthalmology ([Fig. 5]).
Departmental Rankings
The 110 departments were ranked based on the summation and the mean of h-indices within the department. The top five programs based on summed h-indices were the Massachusetts Eye and Ear Infirmary, University of Miami, Thomas Jefferson University, Johns Hopkins University, and the University of Wisconsin ([Table 3]). The top five programs based on summed m-quotients were the same except for the University of Michigan replacing the University of Wisconsin in rank position 5. The top five programs based on mean h-indices were the University of Wisconsin, University of California San Diego, Johns Hopkins University, Mayo Clinic, and the University of Iowa.
When comparing regions of the United States, the west had the highest mean h-index, 12.45 ± 12.50 with a median of 8 and a range from 0 to 74, and the highest mean m-quotient, 0.61 ± 0.45 with a median of 0.50 and a range from 0 to 3.0 ([Table 1]). Both parameters differed significantly across regions (Kruskal–Wallis, p = 0.001 and p < 0.001, respectively).
Discussion
Bibliometrics is a simple yet powerful tool that can yield information about an individual's or a department's scientific influence, which is an important measure of academic success. This approach has gained favor in several medical fields, and the h-index has been used in ophthalmology on a limited basis.[15] [23]
In February 2013, Svider et al compared h-indices among different surgical specialties and found that ophthalmology had lower h-indices than general surgery, neurosurgery, orthopedics, and urology, but higher h-indices than obstetrics and gynecology, otolaryngology, and plastic surgery. That study drew on a sample of 20 randomly selected departments and included 2,429 surgeons across the various fields.[24]
In 2014, Svider et al found that a higher h-index is strongly associated with NIH funding within ophthalmology[21] and also found a statistical difference in NIH funding between genders.[22] Lopez et al demonstrated gender differences in a review of 1,460 academic ophthalmologists: females were underrepresented at higher academic positions and had significantly lower productivity than males early in their careers. However, when publication productivity at the end of their careers was compared, male and female scholarly output became equivalent, and females may have even surpassed their male counterparts.[25] Gender differences in our study were similar to those found by Lopez et al, with females making up ∼30% of academic ophthalmology and being underrepresented in higher academic ranks.[25]
Huang et al found that fellowship training correlated with higher publication productivity in 1,440 ophthalmologists.[23] Our study found comparable results and additionally showed that the h-indices of physicians with multiple fellowships were higher than those with one fellowship. Compared with the fellowship training report by Huang et al, our results were similar, although the ordering differed slightly.[23] Of note, in our results there was a trend for smaller-volume fields such as ocular oncology, pathology, and uveitis and ocular immunology to have higher h-indices. This may reflect the effect of a few extremely productive ophthalmologists in a relatively small pool.
As in several studies, we found productivity positively correlated with academic rank.[11] [15] [16] [20] [21] [23] [24] [25] There were significant differences between h-indices and m-quotients when comparing academic rank, supporting previous studies linking higher productivity to promotion and tenure.[1] [2]
Departments were ranked to benchmark the publishing record of each department, as has been done in other specialties.[15] By measuring the success of the individuals within a department, we can gauge the accomplishment of the department as a whole. Summing the h-indices of each member favors departments with large numbers of faculty, so the mean h-index may be a better benchmark for smaller departments. Within smaller departments, however, the mean h-index can be unduly influenced by one or a few outliers, and ranking by mean h-index may therefore not accurately reflect the overall productivity or the general support for scholarly activity within a department. We therefore included both the sum and the mean in the rank lists to allow for meaningful use of the analysis. Furthermore, while the h-index reflects the activity a researcher has accumulated over a career, the m-quotient helps identify those producing consistent literature independent of career length. Departments with a high m-quotient ranking could be viewed as supporting newer academic ophthalmologists and encouraging those advanced in the field to maintain productivity.
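The two rank lists can be reproduced with a simple aggregation once faculty-level h-indices are assembled. The pandas sketch below is our illustration with hypothetical data; it shows how ranking by the sum favors large departments while ranking by the mean can elevate a small department with a few productive members.

```python
import pandas as pd

# Hypothetical faculty roster: one row per academic ophthalmologist.
faculty = pd.DataFrame({
    "department": ["A"] * 3 + ["B"] * 5 + ["C"] * 2,
    "h_index": [30, 5, 4, 12, 11, 10, 9, 8, 28, 2],
})

# Aggregate each department's member h-indices.
departments = faculty.groupby("department")["h_index"].agg(["sum", "mean", "count"])

print(departments.sort_values("sum", ascending=False))   # B, A, C: summing favors larger departments
print(departments.sort_values("mean", ascending=False))  # C, A, B: one outlier lifts the small department
```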
This list can be used by deans and chairmen to evaluate their programs since those with higher scholarly impact may be more able to recruit and retain high-quality faculty and residents, as well as procure NIH funding.[3] [6] [21] Also, prospective faculty and trainees who desire a scholarly program can view this list for comparison purposes when making career choices.
Bibliometric studies assessing productivity in academic medicine are becoming more common. This analysis could be repeated serially to identify trends in academic activity and scholarly impact. Future studies could also evaluate departments within a specific subspecialty rather than the department as a whole; this information might be useful for residency and fellowship applicants.
As new metrics and reports are developed, caution is warranted when interpreting these parameters. Hirsch himself stated, "… a single number can never give more than a rough approximation to an individual's multifaceted profile, and many other factors should be considered in combination in evaluating an individual."[8] We look forward to future assessments that utilize robust bibliometric models, such as the h-index, to foster and improve scholarly activity in academic departments.
Limitations
While the h-index is the most recognized parameter, it is not without limitations. First, the size of the studied field greatly influences the h-index.[11] [15] Since ophthalmology is relatively small compared with, for example, general surgery, h-indices will tend to be lower for ophthalmology on the basis of its smaller pool of researchers and potential readership. Thus, caution must be used when applying the h-index to compare different specialties, as in the study previously mentioned.[24] This limitation also accounts for some of the differing h-indices in the ophthalmological subspecialties found in our results. For instance, the higher h-indices of ocular pathology and neuro-ophthalmology could be influenced by wider readership (beyond ophthalmology) of pathology and neurology journals. Differences between subspecialties may also be explained by the relative number of ophthalmologists within a subspecialty.
A major criticism of the h-index is that it can be falsely inflated by self-citation.[26] An author might self-cite papers to increase his or her h-index, especially in the beginning of a publishing career, because fewer citations are needed to increase the h-index. Engqvist and Frommen analyzed this problem in a study of 40 evolutionary biologists and ecologists by removing all self-citation counts, and they found the impact of self-citation to be minimal.[26]
Another drawback is that the h-index is positively correlated with time spent publishing within a field, because citation counts continue to mature over time.[11] Additionally, some argue that it favors quantity over quality, as very highly cited papers are not adequately rewarded. Other parameters such as the g-index[27] and e-index[28] reward publications that are very highly cited, something the h-index does not. However, in prior studies analyzing the productivity of neurosurgery departments, the g-index and e-index were found to be multicollinear with the h-index, resulting in similar rankings regardless of the citation metric.[15] [29]
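For reference, both alternatives can be computed from the same citation list used for the h-index. The Python sketch below is our illustration of the published definitions: the g-index is the largest g for which the g most-cited papers together have at least g² citations, and the e-index is the square root of the citations in the h-core that exceed the h² already counted by the h-index.

```python
import math

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)

def g_index(citations):
    """Largest g such that the g most-cited papers have at least g**2 citations in total."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(ranked, start=1):
        total += count
        if total >= rank * rank:
            g = rank
    return g

def e_index(citations):
    """Square root of the excess citations in the h-core (Zhang's e-index)."""
    ranked = sorted(citations, reverse=True)
    h = h_index(citations)
    return math.sqrt(sum(ranked[:h]) - h * h)

papers = [100, 50, 20, 10, 5, 4, 3, 2, 1, 0]  # hypothetical citation counts
print(h_index(papers), g_index(papers), round(e_index(papers), 2))  # 5 10 12.65
```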
Our methodology follows protocols utilized in several previous studies.[15] [21] [23] The database Scopus was chosen because of its unique identification and search capabilities.[15] For example, common names are difficult to identify, but Scopus assigns departments and specialties to each author, making analysis more accurate. Despite these advantages, our study is only as accurate as the accessible data. Data obtained from department Web sites and Scopus may produce erroneous results if outdated, and Scopus itself does not count citations prior to 1996. One might assume this limitation would affect all departments' analyses equally, but that assumption may not be true, and we have no way of determining the effect of this limitation. Our data collection period was 2 months and could have minimally influenced results, as those at the end of the collection period potentially had as much as two additional months to publish or acquire additional citations. Furthermore, as important as it is to review the most actively productive members of the field, benefits can also be gleaned from evaluating idleness. Individuals without h-indices (n = 333) were sought out. If found, their publications were then searched in Scopus to ensure internal validity. However, if the individuals could not be located in Scopus, they were considered to be inactive members of the community. These assumptions may have resulted in unintentional, unquantifiable errors in analysis.
Conclusion
This report includes detailed information about publication productivity in academic ophthalmology across academic rank, departmental rankings, gender, region, and subspecialty. This analysis can be used for comparing effectiveness in promoting scholarly activity among academic departments of ophthalmology. We hope that this information provides data that will guide program development and be useful to prospective or current trainees and faculty interested in scholarly productivity. Benchmarks generated by robust bibliometric profiling have the potential to drive improvements needed for the growth of scholarly output within academic departments and the advancement of ophthalmology.
Financial Support
This study was supported in part by an unrestricted grant from Research to Prevent Blindness.
Note
This article was presented in part as a poster at the American Academy of Ophthalmology Annual Meeting, November 14 to 17, 2015.
References
- 1 Atasoylu AA, Wright SM, Beasley BW, et al. Promotion criteria for clinician-educators. J Gen Intern Med 2003; 18 (9) 711-716
- 2 Bligh J, Brice J. Further insights into the roles of the medical educator: the importance of scholarly management. Acad Med 2009; 84 (8) 1161-1165
- 3 Rezek I, McDonald RJ, Kallmes DF. Is the h-index predictive of greater NIH funding success among academic radiologists? Acad Radiol 2011; 18 (11) 1337-1340
- 4 Svider PF, Mauro KM, Sanghvi S, Setzen M, Baredes S, Eloy JA. Is NIH funding predictive of greater research productivity and impact among academic otolaryngologists? Laryngoscope 2013; 123 (1) 118-122
- 5 Carpenter CR, Cone DC, Sarli CC. Using publication metrics to highlight academic productivity and research impact. Acad Emerg Med 2014; 21 (10) 1160-1172
- 6 Venable GT, Khan NR, Taylor DR, Thompson CJ, Michael LM, Klimo Jr P. A correlation between National Institutes of Health funding and bibliometrics in neurosurgery. World Neurosurg 2014; 81 (3–4) 468-472
- 7 Ball P. Achievement index climbs the ranks. Nature 2007; 448 (7155) 737
- 8 Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A 2005; 102 (46) 16569-16572
- 9 Pagel PS, Hudetz JA. An analysis of scholarly productivity in United States academic anaesthesiologists by citation bibliometrics. Anaesthesia 2011; 66 (10) 873-878
- 10 Poynard T, Thabut D, Munteanu M, Ratziu V, Benhamou Y, Deckmyn O. Hirsch index and truth survival in clinical research. PLoS ONE 2010; 5 (8) e12044
- 11 Lee J, Kraus KL, Couldwell WT. Use of the h index in neurosurgery. Clinical article. J Neurosurg 2009; 111 (2) 387-392
- 12 Ponce FA, Lozano AM. Academic impact and rankings of American and Canadian neurosurgical departments as assessed using the h index. J Neurosurg 2010; 113 (3) 447-457
- 13 Spearman CM, Quigley MJ, Quigley MR, Wilberger JE. Survey of the h index for all of academic neurosurgery: another power-law phenomenon? J Neurosurg 2010; 113 (5) 929-933
- 14 Aoun SG, Bendok BR, Rahme RJ, Dacey Jr RG, Batjer HH. Standardizing the evaluation of scientific and academic performance in neurosurgery—critical review of the “h” index and its variants. World Neurosurg 2013; 80 (5) e85-e90
- 15 Khan NR, Thompson CJ, Taylor DR, et al. An analysis of publication productivity for 1225 academic neurosurgeons and 99 departments in the United States. J Neurosurg 2014; 120 (3) 746-755
- 16 Svider PF, Choudhry ZA, Choudhry OJ, Baredes S, Liu JK, Eloy JA. The use of the h-index in academic otolaryngology. Laryngoscope 2013; 123 (1) 103-106
- 17 Quigley MR, Holliday EB, Fuller CD, Choi M, Thomas Jr CR. Distribution of the h-index in radiation oncology conforms to a variation of power law: implications for assessing academic productivity. J Cancer Educ 2012; 27 (3) 463-466
- 18 Bakkalbasi N, Bauer K, Glover J, Wang L. Three options for citation tracking: Google Scholar, Scopus and Web of Science. Biomed Digit Libr 2006; 3: 7
- 19 Turaga KK, Gamblin TC. Measuring the surgical academic output of an institution: the “institutional” H-index. J Surg Educ 2012; 69 (4) 499-503
- 20 Benway BM, Kalidas P, Cabello JM, Bhayani SB. Does citation analysis reveal association between h-index and academic rank in urology? Urology 2009; 74 (1) 30-33
- 21 Svider PF, Lopez SA, Husain Q, Bhagat N, Eloy JA, Langer PD. The association between scholarly impact and National Institutes of Health funding in ophthalmology. Ophthalmology 2014; 121 (1) 423-428
- 22 Svider PF, D'Aguillo CM, White PE, et al. Gender differences in successful National Institutes of Health funding in ophthalmology. J Surg Educ 2014; 71 (5) 680-688
- 23 Huang G, Fang CH, Lopez SA, Bhagat N, Langer PD, Eloy JA. Impact of fellowship training on research productivity in academic ophthalmology. J Surg Educ 2015; 72 (3) 410-417
- 24 Svider PF, Pashkova AA, Choudhry Z, et al. Comparison of scholarly impact among surgical specialties: an examination of 2429 academic surgeons. Laryngoscope 2013; 123 (4) 884-889
- 25 Lopez SA, Svider PF, Misra P, Bhagat N, Langer PD, Eloy JA. Gender differences in promotion and scholarly impact: an analysis of 1460 academic ophthalmologists. J Surg Educ 2014; 71 (6) 851-859
- 26 Engqvist L, Frommen JG. The h-index and self-citations. Trends Ecol Evol 2008; 23 (5) 250-252
- 27 Egghe L. Theory and practise of the g-index. Scientometrics 2006; 69: 131-152
- 28 Zhang CT. The e-index, complementing the h-index for excess citations. PLoS ONE 2009; 4 (5) e5429
- 29 Taylor DR, Venable GT, Jones GM, et al. Five-year institutional bibliometric profiles for 103 US neurosurgical residency programs. J Neurosurg 2015; 123 (3) 547-560