Search (22 results, page 1 of 2)

  • author_ss:"Bornmann, L."
  1. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.02
    0.017047867 = product of:
      0.085239336 = sum of:
        0.085239336 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
          0.085239336 = score(doc=1239,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.46428138 = fieldWeight in 1239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=1239)
      0.2 = coord(1/5)
    
    Date
    18. 3.2014 19:13:22
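    The explain tree for result 1 is Lucene's ClassicSimilarity (TF-IDF) breakdown: the term score is tf × idf, multiplied by the query weight (idf × queryNorm) and the fieldNorm, and the coord(1/5) factor means that one of five query terms matched this record. Below is a minimal Python sketch of that arithmetic, assuming Lucene's standard ClassicSimilarity formulas and simply reusing the queryNorm reported above (queryNorm depends on the whole query and cannot be recomputed from this listing alone):

      import math

      # Values copied from the explain tree for doc 1239 / term "22" above.
      max_docs, doc_freq = 44218, 3622
      freq, field_norm = 2.0, 0.09375
      query_norm, coord = 0.052428056, 1.0 / 5.0

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))      # 3.5018296
      tf = math.sqrt(freq)                                 # 1.4142135
      query_weight = idf * query_norm                      # 0.18359412 = queryWeight
      field_weight = tf * idf * field_norm                 # 0.46428138 = fieldWeight
      print(coord * query_weight * field_weight)           # ~0.017047867

    The remaining entries follow the same arithmetic with smaller fieldNorm values (0.0625, 0.046875, 0.0390625, ...) and, for the term "1", a lower idf, which is why the scores shrink down the list.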
  2. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    0.011365245 = product of:
      0.056826223 = sum of:
        0.056826223 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
          0.056826223 = score(doc=1431,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.30952093 = fieldWeight in 1431, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=1431)
      0.2 = coord(1/5)
    
    Date
    22. 8.2014 17:05:18
  3. Collins, H.; Bornmann, L.: On scientific misconduct (2014) 0.01
    0.009787264 = product of:
      0.04893632 = sum of:
        0.04893632 = weight(_text_:1 in 1247) [ClassicSimilarity], result of:
          0.04893632 = score(doc=1247,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.37997085 = fieldWeight in 1247, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.109375 = fieldNorm(doc=1247)
      0.2 = coord(1/5)
    
    Date
    1. 5.2014 18:21:46
  4. Bornmann, L.: Scientific peer review (2011) 0.01
    0.009787264 = product of:
      0.04893632 = sum of:
        0.04893632 = weight(_text_:1 in 1600) [ClassicSimilarity], result of:
          0.04893632 = score(doc=1600,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.37997085 = fieldWeight in 1600, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.109375 = fieldNorm(doc=1600)
      0.2 = coord(1/5)
    
    Source
    Annual review of information science and technology. 45(2011) no.1, S.197-245
  5. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.01
    0.008523934 = product of:
      0.042619668 = sum of:
        0.042619668 = weight(_text_:22 in 656) [ClassicSimilarity], result of:
          0.042619668 = score(doc=656,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.23214069 = fieldWeight in 656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=656)
      0.2 = coord(1/5)
    
    Date
    22. 3.2013 19:44:17
  6. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.01
    0.008523934 = product of:
      0.042619668 = sum of:
        0.042619668 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
          0.042619668 = score(doc=4681,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.23214069 = fieldWeight in 4681, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=4681)
      0.2 = coord(1/5)
    
    Date
    8. 1.2019 18:22:45
  7. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.01
    0.007103278 = product of:
      0.03551639 = sum of:
        0.03551639 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
          0.03551639 = score(doc=4186,freq=2.0), product of:
            0.18359412 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052428056 = queryNorm
            0.19345059 = fieldWeight in 4186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4186)
      0.2 = coord(1/5)
    
    Date
    22. 1.2011 12:51:07
  8. Leydesdorff, L.; Zhou, P.; Bornmann, L.: How can journal impact factors be normalized across fields of science? : An assessment in terms of percentile ranks and fractional counts (2013) 0.00
    0.004943315 = product of:
      0.024716575 = sum of:
        0.024716575 = weight(_text_:1 in 532) [ClassicSimilarity], result of:
          0.024716575 = score(doc=532,freq=4.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.19191428 = fieldWeight in 532, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=532)
      0.2 = coord(1/5)
    
    Abstract
    Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (a) fractional counting on the impact factor (IF) and (b) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the 2-year IF (IF2), we consider the 5-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/NSF (National Science Foundation) provides statistically significant between-field effects. Using fractional counting the between-field variance is reduced by 91.7% in the case of IF5, and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to significant improvement in the normalization across fields.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.1, S.96-107
  9. Bornmann, L.; Moya Anegón, F.de: What proportion of excellent papers makes an institution one of the best worldwide? : Specifying thresholds for the interpretation of the results of the SCImago Institutions Ranking and the Leiden Ranking (2014) 0.00
    0.004943315 = product of:
      0.024716575 = sum of:
        0.024716575 = weight(_text_:1 in 1235) [ClassicSimilarity], result of:
          0.024716575 = score(doc=1235,freq=4.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.19191428 = fieldWeight in 1235, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1235)
      0.2 = coord(1/5)
    
    Abstract
    University rankings generally present users with the problem of placing the results given for an institution in context. Only a comparison with the performance of all other institutions makes it possible to say exactly where an institution stands. In order to interpret the results of the SCImago Institutions Ranking (based on Scopus data) and the Leiden Ranking (based on Web of Science data), in this study we offer thresholds with which it is possible to assess whether an institution belongs to the top 1%, top 5%, top 10%, top 25%, or top 50% of institutions in the world. The thresholds are based on the excellence rate or PPtop 10%. Both indicators measure the proportion of an institution's publications which belong to the 10% most frequently cited publications and are the most important indicators for measuring institutional impact. For example, while an institution must achieve a value of 24.63% in the Leiden Ranking 2013 to be considered one of the top 1% of institutions worldwide, the SCImago Institutions Ranking requires 30.2%.
  10. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016) 0.00
    0.004943315 = product of:
      0.024716575 = sum of:
        0.024716575 = weight(_text_:1 in 3231) [ClassicSimilarity], result of:
          0.024716575 = score(doc=3231,freq=4.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.19191428 = fieldWeight in 3231, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3231)
      0.2 = coord(1/5)
    
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
  11. Bornmann, L.; Marx, W.: Distributions instead of single numbers : percentiles and beam plots for the assessment of single researchers (2014) 0.00
    0.004893632 = product of:
      0.02446816 = sum of:
        0.02446816 = weight(_text_:1 in 1190) [ClassicSimilarity], result of:
          0.02446816 = score(doc=1190,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.18998542 = fieldWeight in 1190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1190)
      0.2 = coord(1/5)
    
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.1, S.206-208
  12. Bornmann, L.: Lässt sich die Qualität von Forschung messen? (2013) 0.00
    0.004194542 = product of:
      0.020972708 = sum of:
        0.020972708 = weight(_text_:1 in 928) [ClassicSimilarity], result of:
          0.020972708 = score(doc=928,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.16284466 = fieldWeight in 928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.046875 = fieldNorm(doc=928)
      0.2 = coord(1/5)
    
    Abstract
    In principle, evaluation in science can take a 'qualitative' form, the assessment of a piece of scientific work (e.g., a manuscript or a grant proposal) by competent peers, or a 'quantitative' form, the assessment of scientific work by means of bibliometric indicators. Neither form of evaluation is uncontroversial. Critics of peer review point to two main weaknesses of the procedure: (1) different reviewers rarely agree in their assessment of one and the same piece of work, and (2) reviewers' recommendations exhibit systematic judgment biases. A wide range of objections has likewise been raised for years against the use of citation counts as an indicator of the quality of scientific work: citation counts are said not to be 'objective' measurements of scientific quality but a contestable measurement construct. Among other things, it is argued that scientific quality is a complex phenomenon that cannot be measured on a one-dimensional scale (i.e., by citation counts). The paper presents empirical findings on the reliability and fairness of the peer review process as well as research results on the validity of citation counts as an indicator of scientific quality.
  13. Leydesdorff, L.; Radicchi, F.; Bornmann, L.; Castellano, C.; Nooy, W. de: Field-normalized impact factors (IFs) : a comparison of rescaling and fractionally counted IFs (2013) 0.00
    0.004194542 = product of:
      0.020972708 = sum of:
        0.020972708 = weight(_text_:1 in 1108) [ClassicSimilarity], result of:
          0.020972708 = score(doc=1108,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.16284466 = fieldWeight in 1108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.046875 = fieldNorm(doc=1108)
      0.2 = coord(1/5)
    
    Abstract
    Two methods for comparing impact factors and citation rates across fields of science are tested against each other using citations to the 3,705 journals in the Science Citation Index 2010 (CD-ROM version of SCI) and the 13 field categories used for the Science and Engineering Indicators of the U.S. National Science Board. We compare (a) normalization by counting citations in proportion to the length of the reference list (1/N of references) with (b) rescaling by dividing citation scores by the arithmetic mean of the citation rate of the cluster. Rescaling is analytical and therefore independent of the quality of the attribution to the sets, whereas fractional counting provides an empirical strategy for normalization among sets (by evaluating the between-group variance). By the fairness test of Radicchi and Castellano, rescaling outperforms fractional counting of citations for reasons that we consider.
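    The two normalization strategies contrasted in this abstract reduce to a few lines of arithmetic. The sketch below uses invented toy numbers (the reference-list lengths and citation rates are placeholders, not data from the study):

      # (a) Fractional counting: each citation is weighted by 1/N, where N is the
      #     length of the citing paper's reference list.
      citing_reference_lists = [10, 25, 50]                       # hypothetical citing papers
      fractional_citations = sum(1.0 / n for n in citing_reference_lists)   # 0.16

      # (b) Rescaling: divide a journal's citation score by the arithmetic mean
      #     citation rate of its field (cluster).
      journal_citation_rate = 4.2                                 # hypothetical
      field_mean_rate = 2.8                                       # hypothetical
      rescaled_score = journal_citation_rate / field_mean_rate    # 1.5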
  14. Dobrota, M.; Bulajic, M.; Bornmann, L.; Jeremic, V.: A new approach to the QS university ranking using the composite I-distance indicator : uncertainty and sensitivity analyses (2016) 0.00
    0.004194542 = product of:
      0.020972708 = sum of:
        0.020972708 = weight(_text_:1 in 2500) [ClassicSimilarity], result of:
          0.020972708 = score(doc=2500,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.16284466 = fieldWeight in 2500, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.046875 = fieldNorm(doc=2500)
      0.2 = coord(1/5)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.200-211
  15. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016) 0.00
    0.004194542 = product of:
      0.020972708 = sum of:
        0.020972708 = weight(_text_:1 in 3230) [ClassicSimilarity], result of:
          0.020972708 = score(doc=3230,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.16284466 = fieldWeight in 3230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.046875 = fieldNorm(doc=3230)
      0.2 = coord(1/5)
    
    Abstract
    Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader count data using the publication year 2012 from the Web of Science data. Example networks are shown and explained. The reader can use our base maps to visualize other results with VOSviewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of using our base maps is that it is not necessary for the user to produce a network based on all data (e.g., from 1 year), but can collect the Mendeley data for a single institution (or journals, topics) and can match them with our already produced information. Generation of such large-scale networks is still a demanding task despite the available computer power and digital data availability. Therefore, it is very useful to have base maps and create the network with the overlay technique.
  16. Bornmann, L.; Daniel, H.-D.: Multiple publication on a single research study: does it pay? : The influence of number of research articles on total citation counts in biomedicine (2007) 0.00
    0.0034954515 = product of:
      0.017477257 = sum of:
        0.017477257 = weight(_text_:1 in 444) [ClassicSimilarity], result of:
          0.017477257 = score(doc=444,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.13570388 = fieldWeight in 444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=444)
      0.2 = coord(1/5)
    
    Abstract
    Scientists may seek to report a single definable body of research in more than one publication, that is, in repeated reports of the same work or in fractional reports, in order to disseminate their research as widely as possible in the scientific community. Up to now, however, it has not been examined whether this strategy of "multiple publication" in fact leads to greater reception of the research. In the present study, we investigate the influence of number of articles reporting the results of a single study on reception in the scientific community (total citation counts of an article on a single study). Our data set consists of 96 applicants for a research fellowship from the Boehringer Ingelheim Fonds (BIF), an international foundation for the promotion of basic research in biomedicine. The applicants reported to us all articles that they had published within the framework of their doctoral research projects. On this single project, the applicants had published from 1 to 16 articles (M = 4; Mdn = 3). The results of a regression model with an interaction term show that the practice of multiple publication of research study results does in fact lead to greater reception of the research (higher total citation counts) in the scientific community. However, reception is dependent upon length of article: the longer the article, the more total citation counts increase with the number of articles. Thus, it pays for scientists to practice multiple publication of study results in the form of sizable reports.
  17. Bornmann, L.; Daniel, H.D.: What do citation counts measure? : a review of studies on citing behavior (2008) 0.00
    0.0034954515 = product of:
      0.017477257 = sum of:
        0.017477257 = weight(_text_:1 in 1729) [ClassicSimilarity], result of:
          0.017477257 = score(doc=1729,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.13570388 = fieldWeight in 1729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1729)
      0.2 = coord(1/5)
    
    Source
    Journal of documentation. 64(2008) no.1, S.45-80
  18. Bornmann, L.; Daniel, H.-D.: Universality of citation distributions : a validation of Radicchi et al.'s relative indicator cf = c/c0 at the micro level using data from chemistry (2009) 0.00
    0.0034954515 = product of:
      0.017477257 = sum of:
        0.017477257 = weight(_text_:1 in 2954) [ClassicSimilarity], result of:
          0.017477257 = score(doc=2954,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.13570388 = fieldWeight in 2954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2954)
      0.2 = coord(1/5)
    
    Abstract
    In a recently published PNAS paper, Radicchi, Fortunato, and Castellano (2008) propose the relative indicator cf as an unbiased indicator for citation performance across disciplines (fields, subject areas). To calculate cf, the citation rate for a single paper is divided by the average number of citations for all papers in the discipline in which the single paper has been categorized. cf values are said to lead to a universality of discipline-specific citation distributions. Using a comprehensive dataset of an evaluation study on Angewandte Chemie International Edition (AC-IE), we tested the advantage of using this indicator in practical application at the micro level, as compared with (1) simple citation rates, and (2) z-scores, which have been used in psychological testing for many years for normalization of test scores. To calculate z-scores, the mean number of citations of the papers within a discipline is subtracted from the citation rate of a single paper, and the difference is then divided by the citations' standard deviation for a discipline. Our results indicate that z-scores are better suited than cf values to produce universality of discipline-specific citation distributions.
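    As a rough illustration of the comparison described in this abstract, the sketch below computes Radicchi et al.'s relative indicator cf = c/c0 and the z-score for one paper against a hypothetical discipline; the citation counts are invented for illustration only:

      import statistics

      discipline_citations = [0, 1, 2, 3, 5, 8, 13, 40]   # hypothetical counts for one discipline
      c = 13                                               # citations of the paper under study
      c0 = statistics.mean(discipline_citations)           # discipline mean (9.0)

      cf = c / c0                                          # relative indicator cf = c/c0
      z = (c - c0) / statistics.stdev(discipline_citations)   # z-score normalization
      print(cf, z)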
  19. Bornmann, L.; Schier, H.; Marx, W.; Daniel, H.-D.: Is interactive open access publishing able to identify high-impact submissions? : a study on the predictive validity of Atmospheric Chemistry and Physics by using percentile rank classes (2011) 0.00
    0.0034954515 = product of:
      0.017477257 = sum of:
        0.017477257 = weight(_text_:1 in 4132) [ClassicSimilarity], result of:
          0.017477257 = score(doc=4132,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.13570388 = fieldWeight in 4132, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4132)
      0.2 = coord(1/5)
    
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.1, S.61-71
  20. Leydesdorff, L.; Bornmann, L.: Integrated impact indicators compared with impact factors : an alternative research design with policy implications (2011) 0.00
    0.0034954515 = product of:
      0.017477257 = sum of:
        0.017477257 = weight(_text_:1 in 4919) [ClassicSimilarity], result of:
          0.017477257 = score(doc=4919,freq=2.0), product of:
            0.12878966 = queryWeight, product of:
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.052428056 = queryNorm
            0.13570388 = fieldWeight in 4919, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4565027 = idf(docFreq=10304, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4919)
      0.2 = coord(1/5)
    
    Abstract
    In bibliometrics, the association of "impact" with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories ("Information Science & Library Science" and "Multidisciplinary Sciences"). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
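    A minimal sketch of the integration step described in this abstract, using paper-level percentile ranks and invented citation counts; it illustrates the idea of summing percentile ranks over a document set and reading the result as a share of a reference set's total impact, not the authors' exact I3 specification (which works with percentile rank classes such as top-1% and top-10%):

      # Hypothetical citation counts: a reference set and a document set drawn from it.
      reference_set = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
      document_set = [2, 8, 21]

      def percentile_rank(c, ref):
          # share of the reference set cited at most c times, in percent
          return 100.0 * sum(1 for r in ref if r <= c) / len(ref)

      i3_documents = sum(percentile_rank(c, reference_set) for c in document_set)   # 200.0
      i3_reference = sum(percentile_rank(c, reference_set) for c in reference_set)  # 560.0
      print(i3_documents / i3_reference)   # document set's share of total impact, ~0.36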