Search (60 results, page 1 of 3)

  • author_ss:"Bornmann, L."
  1. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.03
    Date
    18. 3.2014 19:13:22
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.4, S.866-867
  2. Leydesdorff, L.; Bornmann, L.: Integrated impact indicators compared with impact factors : an alternative research design with policy implications (2011) 0.02
    Abstract
    In bibliometrics, the association of "impact" with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories ("Information Science & Library Science" and "Multidisciplinary Sciences"). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.11, S.2133-2146
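    The core move in this abstract, summing percentile ranks at the paper level instead of averaging citation counts, can be sketched in a few lines. This is an illustrative sketch only, not the authors' code; the reference set and the two journals' citation counts are invented, and ties are handled in the simplest possible way (strictly-below counting).

```python
# Sketch (not the authors' code): an Integrated Impact Indicator (I3)
# computed by summing each paper's percentile rank within a shared
# reference set, so that impacts add up rather than being averaged.

def percentile_rank(value, reference):
    """Percentage of reference values strictly below `value` (0-100)."""
    below = sum(1 for r in reference if r < value)
    return 100.0 * below / len(reference)

def i3(document_set, reference_set):
    """Sum of percentile ranks of a document set's citation counts."""
    return sum(percentile_rank(c, reference_set) for c in document_set)

# Invented data: a shared reference set and two hypothetical journals.
reference = [0, 0, 1, 1, 2, 3, 5, 8, 13, 40]
journal_a = [1, 2, 40]      # fewer papers, one highly cited
journal_b = [3, 5, 8, 13]   # more papers, moderately cited
print(i3(journal_a, reference), i3(journal_b, reference))
```

Because percentile ranks are determined at the paper level, the sums decompose over any grouping of the papers (journals, nations, universities), which is the decomposability property the abstract emphasizes.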
  3. Bornmann, L.; Moya Anegón, F. de; Mutz, R.: Do universities or research institutions with a specific subject profile have an advantage or a disadvantage in institutional rankings? (2013) 0.01
    Abstract
    Using data compiled for the SCImago Institutions Ranking, we look at whether the subject area type an institution (university or research-focused institution) belongs to (in terms of the fields researched) has an influence on its ranking position. We used latent class analysis to categorize institutions based on their publications in certain subject areas. Even though this categorization does not relate directly to scientific performance, our results show that it exercises an important influence on the outcome of a performance measurement: Certain subject area types of institutions have an advantage in the ranking positions when compared with others. This advantage manifests itself not only when performance is measured with an indicator that is not field-normalized but also for indicators that are field-normalized.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2310-2316
  4. Leydesdorff, L.; Zhou, P.; Bornmann, L.: How can journal impact factors be normalized across fields of science? : An assessment in terms of percentile ranks and fractional counts (2013) 0.01
    Abstract
    Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (a) fractional counting on the impact factor (IF) and (b) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the 2-year IF (IF2), we consider the 5-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/NSF (National Science Foundation) provides statistically significant between-field effects. Using fractional counting the between-field variance is reduced by 91.7% in the case of IF5, and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to significant improvement in the normalization across fields.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.1, S.96-107
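    The transformation into six percentile rank classes mentioned in the abstract can be sketched as a simple threshold lookup. The class labels follow the abstract (top-1%, top-5%, etc.), but the exact thresholds below are assumed for illustration, not taken from the paper:

```python
# Sketch with assumed thresholds: assign a paper's citation percentile
# to one of six percentile rank classes (top-1%, top-5%, top-10%,
# top-25%, top-50%, bottom-50%), scanned from most to least selective.

CLASSES = [(99, "top-1%"), (95, "top-5%"), (90, "top-10%"),
           (75, "top-25%"), (50, "top-50%"), (0, "bottom-50%")]

def rank_class(percentile):
    for threshold, label in CLASSES:
        if percentile >= threshold:
            return label

print(rank_class(99.5), rank_class(91.0), rank_class(40.0))
```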
  5. Leydesdorff, L.; Bornmann, L.: The operationalization of "fields" as WoS subject categories (WCs) in evaluative bibliometrics : the cases of "library and information science" and "science & technology studies" (2016) 0.01
    Abstract
    Normalization of citation scores using reference sets based on Web of Science subject categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to carry bibliometric normalization in evaluation practices because of "indexer effects." Can the compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
    Aid
    Web of Science
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.3, S.707-714
  6. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016) 0.01
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.12, S.3095-3100
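    The whole-number versus fractional counting of addresses, which the abstract reports flipping the Harvard/Leiden ranking, can be sketched as follows. The papers and their address lists below are made up for illustration:

```python
# Sketch: under whole-number counting each institution on a paper gets
# credit 1; under fractional counting each of the n institutions on a
# paper gets credit 1/n. Invented data, not the study's.
from collections import Counter

papers = [
    ["Harvard", "Leiden"],
    ["Harvard"],
    ["Leiden", "Amsterdam", "Harvard"],
]

whole = Counter()
fractional = Counter()
for addresses in papers:
    for inst in addresses:
        whole[inst] += 1
        fractional[inst] += 1 / len(addresses)

print(whole["Harvard"], round(fractional["Harvard"], 2))
```

Because multi-address papers are diluted under fractional counting, the two schemes can rank institutions differently, as the abstract observes.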
  7. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.01
    Abstract
    A recent publication in Nature reports that public R&D funding is only weakly correlated with the citation impact of a nation's articles as measured by the field-weighted citation index (FWCI; defined by Scopus). On the basis of the supplementary data, we up-scaled the design using Web of Science data for the decade 2003-2013 and OECD funding data for the corresponding decade assuming a 2-year delay (2001-2011). Using negative binomial regression analysis, we found very small coefficients, but the effects of international collaboration are positive and statistically significant, whereas the effects of government funding are negative, an order of magnitude smaller, and statistically nonsignificant (in two of three analyses). In other words, international collaboration improves the impact of research articles, whereas more government funding tends to have a small adverse effect when comparing OECD countries.
    Date
    8. 1.2019 18:22:45
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.2, S.198-201
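    The lag structure of the design described above (publication years 2003-2013 paired with funding two years earlier, 2001-2011) can be sketched as a simple year pairing; this illustrates only the assumed 2-year delay, not the regression analysis itself:

```python
# Sketch of the lagged design: each publication year in 2003-2013 is
# matched with the government funding year two years before it.
pairs = [(pub_year, pub_year - 2) for pub_year in range(2003, 2014)]
print(pairs[0], pairs[-1], len(pairs))
```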
  8. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    Abstract
    Properties of a percentile-based rating scale needed in bibliometrics are formulated. Based on these properties, P100 was recently introduced as a new citation-rank approach (Bornmann, Leydesdorff, & Wang, 2013). In this paper, we conceptualize P100 and propose an improvement which we call P100'. Advantages and disadvantages of citation-rank indicators are noted.
    Date
    22. 8.2014 17:05:18
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.9, S.1939-1943
  9. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.01
    Abstract
    The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls. The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs largely between disciplines, the reflection of these differences in fractionally counted citation distributions cannot be used as a reliable instrument for the classification.
    Date
    22. 1.2011 12:51:07
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.2, S.217-229
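    Fractional counting in the citing dimension, as described in this abstract, weights each citation by the reciprocal of the citing paper's number of references. A minimal sketch with invented citing-paper data:

```python
# Sketch: whole versus fractional citation counts for one target paper.
# Under fractional counting, a citation from a paper with a long
# reference list carries proportionally less weight (1/n_refs),
# normalizing for field differences in citation behavior.
citing_papers = [  # invented data
    {"n_refs": 10, "cites_target": True},
    {"n_refs": 50, "cites_target": True},
    {"n_refs": 40, "cites_target": False},
]

whole_count = sum(p["cites_target"] for p in citing_papers)
fractional_count = sum(1 / p["n_refs"]
                       for p in citing_papers if p["cites_target"])
print(whole_count, fractional_count)
```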
  10. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.01
    Abstract
    According to current research in bibliometrics, percentiles (or percentile rank classes) are the most suitable method for normalizing the citation counts of individual publications in terms of the subject area, the document type, and the publication year. Up to now, bibliometric research has concerned itself primarily with the calculation of percentiles. This study suggests how percentiles (and percentile rank classes) can be analyzed meaningfully for an evaluation study. Publication sets from four universities are compared with each other to provide sample data. These suggestions take into account on the one hand the distribution of percentiles over the publications in the sets (universities here) and on the other hand concentrate on the range of publications with the highest citation impact-that is, the range that is usually of most interest in the evaluation of scientific performance.
    Date
    22. 3.2013 19:44:17
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.3, S.587-595
  11. Bornmann, L.; Haunschild, R.: ¬An empirical look at the nature index (2017) 0.01
    Abstract
    In November 2014, the Nature Index (NI) was introduced (see http://www.natureindex.com) by the Nature Publishing Group (NPG). The NI comprises the primary research articles published in the past 12 months in a selection of reputable journals. Starting from two short comments on the NI (Haunschild & Bornmann, 2015a, 2015b), we undertake an empirical analysis of the NI using comprehensive country data. We investigate whether the huge efforts of computing the NI are justified and whether the size-dependent NI indicators should be complemented by size-independent variants. The analysis uses data from the Max Planck Digital Library in-house database (which is based on Web of Science data) and from the NPG. In the first step of the analysis, we correlate the NI with other metrics that are simpler to generate than the NI. The resulting large correlation coefficients point out that the NI produces similar results as simpler solutions. In the second step of the analysis, relative and size-independent variants of the NI are generated that should be additionally presented by the NPG. The size-dependent NI indicators favor large countries (or institutions) and the top-performing small countries (or institutions) do not come into the picture.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.653-659
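    The contrast the abstract draws between size-dependent and size-independent NI indicators can be sketched with invented numbers: absolute article counts favor the large country, while dividing by total output can favor the small one.

```python
# Sketch (invented figures, not NPG data): a size-dependent indicator
# is the raw NI article count; a size-independent variant divides it
# by the country's total article output.
countries = {
    "large": {"ni_articles": 900, "all_articles": 90000},
    "small": {"ni_articles": 45, "all_articles": 900},
}

size_dependent = {k: v["ni_articles"] for k, v in countries.items()}
size_independent = {k: v["ni_articles"] / v["all_articles"]
                    for k, v in countries.items()}
print(size_dependent, size_independent)
```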
  12. Bornmann, L.; Bauer, J.: Which of the world's institutions employ the most highly cited researchers : an analysis of the data from highlycited.com (2015) 0.00
    Abstract
    In 2014, Thomson Reuters published a list of the most highly cited researchers worldwide (highlycited.com). Because the data are freely available for downloading and include the names of the researchers' institutions, we produced a ranking of the institutions on the basis of the number of highly cited researchers per institution. This ranking is intended to be a helpful amendment of other available institutional rankings.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.10, S.2146-2148
  13. Bornmann, L.; Daniel, H.-D.: Multiple publication on a single research study: does it pay? : The influence of number of research articles on total citation counts in biomedicine (2007) 0.00
    0.0023220677 = product of:
      0.02089861 = sum of:
        0.02089861 = weight(_text_:of in 444) [ClassicSimilarity], result of:
          0.02089861 = score(doc=444,freq=44.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.40518725 = fieldWeight in 444, product of:
              6.6332498 = tf(freq=44.0), with freq of:
                44.0 = termFreq=44.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=444)
      0.11111111 = coord(1/9)
    
    Abstract
    Scientists may seek to report a single definable body of research in more than one publication, that is, in repeated reports of the same work or in fractional reports, in order to disseminate their research as widely as possible in the scientific community. Up to now, however, it has not been examined whether this strategy of "multiple publication" in fact leads to greater reception of the research. In the present study, we investigate the influence of number of articles reporting the results of a single study on reception in the scientific community (total citation counts of an article on a single study). Our data set consists of 96 applicants for a research fellowship from the Boehringer Ingelheim Fonds (BIF), an international foundation for the promotion of basic research in biomedicine. The applicants reported to us all articles that they had published within the framework of their doctoral research projects. On this single project, the applicants had published from 1 to 16 articles (M = 4; Mdn = 3). The results of a regression model with an interaction term show that the practice of multiple publication of research study results does in fact lead to greater reception of the research (higher total citation counts) in the scientific community. However, reception is dependent upon length of article: the longer the article, the more total citation counts increase with the number of articles. Thus, it pays for scientists to practice multiple publication of study results in the form of sizable reports.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.8, S.1100-1107
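    The kind of interaction model used in the study above can be illustrated on synthetic, exactly linear data (not the BIF applicant dataset). The sketch below fits citations on number of articles, article length, and their product via the normal equations; the positive interaction coefficient encodes the reported pattern that longer articles amplify the citation payoff of publishing more articles:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination and partial pivoting."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic data generated from
#   citations = 1 + 2*articles + 0.5*length + 0.3*(articles*length);
# the interaction column articles*length carries the "the longer the
# article, the more citations increase with the number of articles" effect.
rows, cites = [], []
for n_art in range(1, 5):
    for length in range(1, 5):
        rows.append([1.0, n_art, length, n_art * length])
        cites.append(1.0 + 2.0 * n_art + 0.5 * length
                     + 0.3 * n_art * length)

beta = ols(rows, cites)  # recovers [intercept, articles, length, interaction]
```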
  15. Leydesdorff, L.; Bornmann, L.; Mutz, R.; Opthof, T.: Turning the tables on citation analysis one more time : principles for comparing sets of documents (2011) 0.00
    0.0023008613 = product of:
      0.02070775 = sum of:
        0.02070775 = weight(_text_:of in 4485) [ClassicSimilarity], result of:
          0.02070775 = score(doc=4485,freq=30.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.4014868 = fieldWeight in 4485, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4485)
      0.11111111 = coord(1/9)
    
    Abstract
    We submit newly developed citation impact indicators based not on arithmetic averages of citations but on percentile ranks. Citation distributions are, as a rule, highly skewed and should not be arithmetically averaged. With percentile ranks, the citation score of each paper is rated in terms of its percentile in the citation distribution. The percentile ranks approach allows for the formulation of a more abstract indicator scheme that can be used to organize and/or schematize different impact indicators according to three degrees of freedom: the selection of the reference sets, the evaluation criteria, and the choice of whether or not to define the publication sets as independent. Bibliometric data of seven principal investigators (PIs) of the Academic Medical Center of the University of Amsterdam are used as an exemplary dataset. We demonstrate that the proposed family of indicators [R(6), R(100), R(6, k), R(100, k)] is an improvement on averages-based indicators because one can account for the shape of the distributions of citations over papers.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.7, S.1370-1381
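    The percentile-rank approach described in the abstract above can be illustrated with a short sketch. This uses one common tie-handling convention (count the papers cited less often, plus half of the other papers tied at the same value) and is not the authors' exact R(6)/R(100) implementation:

```python
def percentile_ranks(citations):
    """Percentile rank (0-100) of each paper's citation count within
    its reference set: the share of papers cited less often, counting
    half of the other papers tied at the same value."""
    n = len(citations)
    ranks = []
    for c in citations:
        below = sum(1 for x in citations if x < c)
        tied_others = sum(1 for x in citations if x == c) - 1
        ranks.append(100.0 * (below + 0.5 * tied_others) / n)
    return ranks

# A highly skewed citation distribution: the top paper is rated at the
# 90th percentile of this 10-paper reference set, regardless of how
# extreme its raw count is - which is the point of avoiding averages.
print(percentile_ranks([0, 0, 1, 1, 2, 3, 5, 8, 13, 40]))
```

    Because each score is a rank rather than a raw count, a single extreme outlier cannot dominate the indicator the way it dominates an arithmetic mean.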
  16. Bornmann, L.: What is societal impact of research and how can it be assessed? : a literature survey (2013) 0.00
    0.0022228428 = product of:
      0.020005586 = sum of:
        0.020005586 = weight(_text_:of in 606) [ClassicSimilarity], result of:
          0.020005586 = score(doc=606,freq=28.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.38787308 = fieldWeight in 606, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=606)
      0.11111111 = coord(1/9)
    
    Abstract
    Since the 1990s, the scope of research evaluations has become broader, as the societal products (outputs), societal use (societal references), and societal benefits (changes in society) of research come into scope. Society can reap the benefits of successful research studies only if the results are converted into marketable and consumable products (e.g., medicaments, diagnostic tools, machines, and devices) or services. A series of different names has been introduced to refer to the societal impact of research: third-stream activities, societal benefits, societal quality, usefulness, public values, knowledge transfer, and societal relevance. What most of these names are concerned with is the assessment of social, cultural, environmental, and economic returns (impact and effects) from results (research output) or products (research outcome) of publicly funded research. This review presents existing research on, and practices employed in, the assessment of societal impact in the form of a literature survey. The objective is for this review to serve as a basis for the development of robust and reliable methods of societal impact measurement.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.2, S.217-233
  17. Bornmann, L.; Daniel, H.-D.: What do we know about the h index? (2007) 0.00
    0.0021917527 = product of:
      0.019725773 = sum of:
        0.019725773 = weight(_text_:of in 477) [ClassicSimilarity], result of:
          0.019725773 = score(doc=477,freq=20.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.38244802 = fieldWeight in 477, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=477)
      0.11111111 = coord(1/9)
    
    Abstract
    Jorge Hirsch recently proposed the h index to quantify the research output of individual scientists. The new index has attracted a lot of attention in the scientific community. The claim that the h index provides, in a single number, a good representation of the scientific lifetime achievement of a scientist, as well as the (supposedly) simple calculation of the h index using common literature databases, leads to the danger of improper use of the index. We describe the advantages and disadvantages of the h index and summarize the studies on the convergent validity of this index. We also introduce corrections and complements as well as single-number alternatives to the h index.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.9, S.1381-1385
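    The h index discussed above has a direct definition: the largest h such that the scientist has h papers cited at least h times each. A minimal sketch of the calculation:

```python
def h_index(citation_counts):
    """Hirsch's h index: the largest h such that h of the papers
    have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citation_counts, reverse=True),
                             start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example: of these six papers, four are cited at least four times
# each, but no five papers reach five citations, so h = 4.
print(h_index([10, 8, 5, 4, 3, 0]))  # → 4
```

    The simplicity of this calculation is part of what the abstract warns about: the single number hides how the citations are distributed beyond the h core.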
  18. Leydesdorff, L.; Radicchi, F.; Bornmann, L.; Castellano, C.; Nooy, W. de: Field-normalized impact factors (IFs) : a comparison of rescaling and fractionally counted IFs (2013) 0.00
    0.002141985 = product of:
      0.019277865 = sum of:
        0.019277865 = weight(_text_:of in 1108) [ClassicSimilarity], result of:
          0.019277865 = score(doc=1108,freq=26.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.37376386 = fieldWeight in 1108, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1108)
      0.11111111 = coord(1/9)
    
    Abstract
    Two methods for comparing impact factors and citation rates across fields of science are tested against each other using citations to the 3,705 journals in the Science Citation Index 2010 (CD-Rom version of SCI) and the 13 field categories used for the Science and Engineering Indicators of the U.S. National Science Board. We compare (a) normalization by counting citations in proportion to the length of the reference list (1/N of references) with (b) rescaling by dividing citation scores by the arithmetic mean of the citation rate of the cluster. Rescaling is analytical and therefore independent of the quality of the attribution to the sets, whereas fractional counting provides an empirical strategy for normalization among sets (by evaluating the between-group variance). By the fairness test of Radicchi and Castellano (), rescaling outperforms fractional counting of citations for reasons that we consider.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2299-2309
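    The two normalization strategies compared in the abstract above can be sketched minimally (the numbers are illustrative, not drawn from the SCI data):

```python
def rescale(citation_scores):
    """Rescaling: divide each paper's citation score by the arithmetic
    mean of its cluster, so the cluster mean becomes 1 and scores
    become comparable across fields."""
    mean = sum(citation_scores) / len(citation_scores)
    return [c / mean for c in citation_scores]

def fractional_weight(n_references):
    """Fractional counting: a citation from a paper whose reference
    list holds N items contributes 1/N instead of 1."""
    return 1.0 / n_references

print(rescale([2.0, 4.0, 6.0]))  # → [0.5, 1.0, 1.5]

# A paper cited by two papers with 10 and 40 references receives
# 0.1 + 0.025 = 0.125 fractionally counted citations.
print(fractional_weight(10) + fractional_weight(40))
```

    Rescaling needs a predefined cluster (field) whose mean can be computed, which is why the abstract calls it dependent on the quality of the attribution to the sets, whereas fractional counting needs only each citing paper's reference-list length.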
  19. Bornmann, L.; Daniel, H.D.: What do citation counts measure? : a review of studies on citing behavior (2008) 0.00
    0.0021003892 = product of:
      0.018903503 = sum of:
        0.018903503 = weight(_text_:of in 1729) [ClassicSimilarity], result of:
          0.018903503 = score(doc=1729,freq=36.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.36650562 = fieldWeight in 1729, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1729)
      0.11111111 = coord(1/9)
    
    Abstract
    Purpose - The purpose of this paper is to present a narrative review of studies on the citing behavior of scientists, covering mainly research published in the last 15 years. Based on the results of these studies, the paper seeks to answer the question of the extent to which scientists are motivated to cite a publication not only to acknowledge intellectual and cognitive influences of scientific peers, but also for other, possibly non-scientific, reasons.
    Design/methodology/approach - The review covers research published from the early 1960s up to mid-2005 (approximately 30 studies on citing behavior, reporting results in about 40 publications).
    Findings - The general tendency of the results of the empirical studies makes it clear that citing behavior is not motivated solely by the wish to acknowledge intellectual and cognitive influences of colleague scientists, since the individual studies reveal also other, in part non-scientific, factors that play a part in the decision to cite. However, the results of the studies must also be deemed scarcely reliable: the studies vary widely in design, and their results can hardly be replicated. Many of the studies have methodological weaknesses. Furthermore, there is evidence that the different motivations of citers are "not so different or 'randomly given' to such an extent that the phenomenon of citation would lose its role as a reliable measure of impact".
    Originality/value - Given the increasing importance of evaluative bibliometrics in the world of scholarship, the question "What do citation counts measure?" is a particularly relevant and topical issue.
    Source
    Journal of documentation. 64(2008) no.1, S.45-80
  20. Bornmann, L.; Marx, W.: ¬The wisdom of citing scientists (2014) 0.00
    0.0020792792 = product of:
      0.018713512 = sum of:
        0.018713512 = weight(_text_:of in 1293) [ClassicSimilarity], result of:
          0.018713512 = score(doc=1293,freq=18.0), product of:
            0.05157766 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03298316 = queryNorm
            0.36282203 = fieldWeight in 1293, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1293)
      0.11111111 = coord(1/9)
    
    Abstract
    This Brief Communication discusses the benefits of citation analysis in research evaluation based on Galton's "Wisdom of Crowds" (1907). Citations are based on the assessments of many, which is why they can be considered to have some credibility. However, we show that citations are incomplete assessments and that one cannot assume that a high number of citations correlates with a high level of usefulness. Only when one knows that a rarely cited paper has been widely read is it possible to say, strictly speaking, that it was obviously of little use for further research. Using a comparison with "like" data, we try to show that cited reference analysis allows for a more meaningful analysis of bibliometric data than times-cited analysis.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.6, S.1288-1292