Search (61 results, page 1 of 4)

  • Active filter: author_ss:"Bornmann, L."
  1. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.05
    Date
    18. 3.2014 19:13:22
    Type
    a
  2. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016) 0.05
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
    Type
    a
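     The whole-number versus fractional counting of institutional addresses mentioned in the abstract can be illustrated with a small Python sketch; the papers and institution names below are invented for illustration, not data from the study.

       from collections import defaultdict

       # Each paper is represented by the set of institutions in its address field
       # (invented example data).
       papers = [
           {"Harvard University", "Leiden University"},
           {"Leiden University"},
           {"Harvard University", "Indiana University", "Leiden University"},
       ]

       whole = defaultdict(float)       # whole-number counting: one full credit per address
       fractional = defaultdict(float)  # fractional counting: 1/n credit if n institutions share a paper

       for institutions in papers:
           for inst in institutions:
               whole[inst] += 1.0
               fractional[inst] += 1.0 / len(institutions)

       for inst in sorted(whole, key=whole.get, reverse=True):
           print(f"{inst:25s} whole={whole[inst]:.1f} fractional={fractional[inst]:.2f}")

     Depending on the counting variant, a different institution can end up on top, which is exactly the Harvard/Leiden contrast described in the abstract.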
  3. Egghe, L.; Bornmann, L.: Fallout and miss in journal peer review (2013) 0.04
    Abstract
    Purpose - The authors exploit the analogy between journal peer review and information retrieval in order to quantify some imperfections of journal peer review. Design/methodology/approach - The authors define fallout rate and missing rate in order to describe quantitatively the weak papers that were accepted and the strong papers that were missed, respectively. To assess the quality of manuscripts the authors use bibliometric measures. Findings - Fallout rate and missing rate are put in relation with the hitting rate and success rate. Conclusions are drawn on what fraction of weak papers will be accepted in order to have a certain fraction of strong accepted papers. Originality/value - The paper illustrates that these curves are new in peer review research when interpreted in the information retrieval terminology.
    Type
    a
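     Read with the usual information-retrieval definitions (accepted = retrieved, strong = relevant), the fallout and miss rates from the abstract reduce to simple ratios over a 2x2 outcome table. The sketch below uses invented counts; the paper's exact definitions may differ in detail.

       # Hypothetical outcome of one peer-review round (invented numbers).
       strong_accepted = 40    # strong papers that were accepted
       strong_rejected = 10    # strong papers that were missed
       weak_accepted = 15      # weak papers that slipped through
       weak_rejected = 85      # weak papers that were (correctly) rejected

       fallout = weak_accepted / (weak_accepted + weak_rejected)        # weak accepted / all weak
       miss = strong_rejected / (strong_accepted + strong_rejected)     # strong rejected / all strong
       hitting = strong_accepted / (strong_accepted + strong_rejected)  # recall-like hitting rate
       success = strong_accepted / (strong_accepted + weak_accepted)    # precision-like success rate

       print(f"fallout={fallout:.2f} miss={miss:.2f} hitting={hitting:.2f} success={success:.2f}")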
  4. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.03
    Abstract
    Properties of a percentile-based rating scale needed in bibliometrics are formulated. Based on these properties, P100 was recently introduced as a new citation-rank approach (Bornmann, Leydesdorff, & Wang, 2013). In this paper, we conceptualize P100 and propose an improvement which we call P100'. Advantages and disadvantages of citation-rank indicators are noted.
    Date
    22. 8.2014 17:05:18
    Type
    a
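     The basic ingredient of percentile-based indicators is the percentile rank of a paper's citation count within its reference set (same field and publication year). The sketch below shows only this generic rank; the specific P100 and P100' definitions follow Bornmann, Leydesdorff, and Wang (2013) and are not reproduced here.

       def percentile_rank(reference_set, value):
           """Share of papers in the reference set cited at most `value`, on a 0-100 scale."""
           below_or_equal = sum(1 for c in reference_set if c <= value)
           return 100.0 * below_or_equal / len(reference_set)

       # Invented citation counts for one field and publication year.
       reference_set = [0, 1, 1, 2, 3, 5, 8, 13, 21, 55]
       print(percentile_rank(reference_set, 13))  # 80.0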
  5. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.03
    Abstract
    A recent publication in Nature reports that public R&D funding is only weakly correlated with the citation impact of a nation's articles as measured by the field-weighted citation index (FWCI; defined by Scopus). On the basis of the supplementary data, we up-scaled the design using Web of Science data for the decade 2003-2013 and OECD funding data for the corresponding decade assuming a 2-year delay (2001-2011). Using negative binomial regression analysis, we found very small coefficients, but the effects of international collaboration are positive and statistically significant, whereas the effects of government funding are negative, an order of magnitude smaller, and statistically nonsignificant (in two of three analyses). In other words, international collaboration improves the impact of research articles, whereas more government funding tends to have a small adverse effect when comparing OECD countries.
    Date
    8. 1.2019 18:22:45
    Type
    a
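     The regression design described in the abstract can be sketched with a negative binomial model in statsmodels; all data below are simulated and the two predictors are placeholders for the paper's funding and collaboration variables, so this is only indicative of the approach.

       import numpy as np
       import statsmodels.api as sm

       rng = np.random.default_rng(0)
       n = 500

       gov_funding = rng.uniform(0.2, 0.8, n)   # share of R&D funded by government (simulated)
       intl_collab = rng.binomial(1, 0.3, n)    # international co-authorship indicator (simulated)
       mu = np.exp(0.5 + 0.4 * intl_collab - 0.1 * gov_funding)
       citations = rng.poisson(mu)              # stand-in for citation counts

       X = sm.add_constant(np.column_stack([gov_funding, intl_collab]))
       result = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()
       print(result.summary())
       print(np.exp(result.params))  # exp(coefficient) = multiplicative change in expected citations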
  6. Bornmann, L.; Leydesdorff, L.: Which cities produce more excellent papers than can be expected? : a new mapping approach, using Google Maps, based on statistical significance testing (2011) 0.02
    Abstract
    The methods presented in this paper allow for a statistical analysis revealing centers of excellence around the world using programs that are freely available. Based on Web of Science data (a fee-based database), field-specific excellence can be identified in cities where highly cited papers were published more frequently than can be expected. Compared to the mapping approaches published hitherto, our approach is more analytically oriented by allowing the assessment of an observed number of excellent papers for a city against the expected number. Top performers in output are cities in which authors are located who publish a statistically significant higher number of highly cited papers than can be expected for these cities. As sample data for physics, chemistry, and psychology show, these cities do not necessarily have a high output of highly cited papers.
    Type
    a
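     The observed-versus-expected comparison for a single city can be phrased as a one-sided binomial test, as sketched below with invented counts; the paper's exact test procedure may differ.

       from scipy.stats import binomtest

       papers_from_city = 420    # papers with at least one address in the city (invented)
       top_papers_observed = 63  # of these, papers in the top-10% citation class (invented)
       expected_share = 0.10     # null hypothesis: every paper has a 10% chance of being top-10%

       test = binomtest(top_papers_observed, n=papers_from_city, p=expected_share, alternative="greater")
       print(f"observed={top_papers_observed}, expected={expected_share * papers_from_city:.0f}, "
             f"p-value={test.pvalue:.4f}")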
  7. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.02
    Date
    22. 3.2013 19:44:17
    Type
    a
  8. Bornmann, L.: Is collaboration among scientists related to the citation impact of papers because their quality increases with collaboration? : an analysis based on data from F1000Prime and normalized citation scores (2017) 0.02
    Abstract
     In recent years, the relationship between collaboration among scientists and the citation impact of papers has been frequently investigated. Most of the studies show that the two variables are closely related: an increasing collaboration activity (measured in terms of number of authors, number of affiliations, and number of countries) is associated with an increased citation impact. However, it is not clear whether the increased citation impact is based on the higher quality of papers that profit from more than one scientist giving expert input, or on other (citation-specific) factors. Thus, the current study addresses this question by using two comprehensive data sets with publications (in the biomedical area) including quality assessments by experts (F1000Prime member scores) and citation data for the publications. The study is based on more than 15,000 papers. Robust regression models are used to investigate the relationship between number of authors, number of affiliations, and number of countries, respectively, and citation impact, controlling for the papers' quality (measured by F1000Prime expert ratings). The results indicate that the effect of collaboration activities on impact is largely independent of the papers' quality. The citation advantage is apparently not quality related; citation-specific factors (e.g., self-citations) seem to be important here.
    Type
    a
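     The control-for-quality logic can be sketched as a robust regression of citation impact on a collaboration measure plus an expert quality score; the data below are simulated and the specification is only indicative, not the paper's exact model.

       import numpy as np
       import statsmodels.api as sm

       rng = np.random.default_rng(1)
       n = 1000

       n_authors = rng.integers(1, 15, n)   # collaboration proxy (simulated)
       quality = rng.normal(1.5, 0.5, n)    # stand-in for F1000Prime member scores
       log_impact = 0.3 + 0.05 * n_authors + 0.6 * quality + rng.normal(0, 0.5, n)

       X = sm.add_constant(np.column_stack([n_authors, quality]))
       fit = sm.RLM(log_impact, X, M=sm.robust.norms.HuberT()).fit()
       # The n_authors coefficient estimates the collaboration effect net of the quality score.
       print(fit.params)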
  9. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.02
    Abstract
     The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set, including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls. The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs largely between disciplines, the reflection of these differences in fractionally counted citation distributions cannot be used as a reliable instrument for the classification.
    Date
    22. 1.2011 12:51:07
    Type
    a
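     Fractional counting in the sense described above weights every citation by the inverse of the number of cited references in the citing paper, instead of counting it as a whole number; a minimal sketch with invented citing papers follows.

       from collections import defaultdict

       # Each citing paper: which journals it cites and how many cited references it has in total
       # (invented data).
       citing_papers = [
           {"cites": ["Journal A", "Journal A", "Journal B"], "n_references": 30},
           {"cites": ["Journal B"], "n_references": 10},
           {"cites": ["Journal A", "Journal C"], "n_references": 50},
       ]

       whole_counts = defaultdict(float)
       fractional_counts = defaultdict(float)

       for paper in citing_papers:
           for journal in paper["cites"]:
               whole_counts[journal] += 1.0
               fractional_counts[journal] += 1.0 / paper["n_references"]

       for journal in sorted(whole_counts):
           print(f"{journal}: whole={whole_counts[journal]:.0f}, fractional={fractional_counts[journal]:.3f}")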
  10. Bornmann, L.: How much does the expected number of citations for a publication change if it contains the address of a specific scientific institute? : a new approach for the analysis of citation data on the institutional level based on regression models (2016) 0.02
    Abstract
     Citation data for institutes are generally provided as numbers of citations or as relative citation rates (as, for example, in the Leiden Ranking). These numbers can then be compared between the institutes. This study aims to present a new approach for the evaluation of citation data at the institutional level, based on regression models. As example data, the study includes all articles and reviews from the Web of Science for the publication year 2003 (n = 886,416 papers). The study is based on an in-house database of the Max Planck Society. The study investigates how much the expected number of citations for a publication changes if it contains the address of an institute. The calculation of the expected values allows, on the one hand, investigating how the citation impact of the papers of an institute appears in comparison with the total of all papers. On the other hand, the expected values for several institutes can be compared with one another or with a set of randomly selected publications. Besides the institutes, the regression models include factors which can be assumed to have a general influence on citation counts (e.g., the number of authors).
    Type
    a
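     In such a regression the institute enters as a dummy variable, and exponentiating its coefficient gives the factor by which the expected number of citations changes when the institute's address is present; the coefficient below is invented for illustration.

       import math

       beta_institute = 0.18  # invented coefficient of the institute dummy in a log-link citation model
       multiplier = math.exp(beta_institute)
       print(f"Expected citations change by a factor of {multiplier:.2f} "
             f"(about {(multiplier - 1) * 100:.0f}% more) when the address is present.")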
  11. Bornmann, L.; Ye, A.; Ye, F.: Identifying landmark publications in the long run using field-normalized citation data (2018) 0.02
    Abstract
     The purpose of this paper is to propose an approach for identifying landmark papers in the long run. These publications reach a very high level of citation impact and are able to remain on this level across many citing years. In recent years, several studies have been published which deal with the citation history of publications and try to identify landmark publications. Design/methodology/approach - In contrast to other studies published hitherto, this study is based on a broad data set with papers published between 1980 and 1990 for identifying the landmark papers. The authors analyzed the citation histories of about five million papers across 25 years. Findings - The results of this study reveal that 1,013 papers (less than 0.02 percent) are "outstandingly cited" in the long run. The cluster analyses of the papers show that they received the high impact level very soon after publication and remained on this level over decades. Only a slight impact decline is visible over the years. Originality/value - For practical reasons, approaches for identifying landmark papers should be as simple as possible. The approach proposed in this study is based on standard methods in bibliometrics.
    Type
    a
  12. Bornmann, L.; Wagner, C.; Leydesdorff, L.: BRICS countries and scientific excellence : a bibliometric analysis of most frequently cited papers (2015) 0.02
    Abstract
    The BRICS countries (Brazil, Russia, India, China, and South Africa) are notable for their increasing participation in science and technology. The governments of these countries have been boosting their investments in research and development to become part of the group of nations doing research at a world-class level. This study investigates the development of the BRICS countries in the domain of top-cited papers (top 10% and 1% most frequently cited papers) between 1990 and 2010. To assess the extent to which these countries have become important players at the top level, we compare the BRICS countries with the top-performing countries worldwide. As the analyses of the (annual) growth rates show, with the exception of Russia, the BRICS countries have increased their output in terms of most frequently cited papers at a higher rate than the top-cited countries worldwide. By way of additional analysis, we generate coauthorship networks among authors of highly cited papers for 4 time points to view changes in BRICS participation (1995, 2000, 2005, and 2010). Here, the results show that all BRICS countries succeeded in becoming part of this network, whereby the Chinese collaboration activities focus on the US.
    Type
    a
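     The coauthorship networks mentioned in the abstract are built by linking every pair of authors that appears on the same highly cited paper; the sketch below uses networkx and invented author lists rather than the WoS records analyzed in the study.

       from itertools import combinations
       import networkx as nx

       papers = [  # author lists of highly cited papers (invented)
           ["Wang, L.", "Smith, J.", "Silva, R."],
           ["Wang, L.", "Ivanov, P."],
           ["Silva, R.", "Patel, A.", "Wang, L."],
       ]

       G = nx.Graph()
       for authors in papers:
           for a, b in combinations(sorted(set(authors)), 2):
               if G.has_edge(a, b):
                   G[a][b]["weight"] += 1  # one more joint paper
               else:
                   G.add_edge(a, b, weight=1)

       print(G.number_of_nodes(), "authors,", G.number_of_edges(), "coauthorship links")
       print(sorted(G.degree, key=lambda pair: pair[1], reverse=True))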
  13. Marx, W.; Bornmann, L.; Cardona, M.: Reference standards and reference multipliers for the comparison of the citation impact of papers published in different time periods (2010) 0.02
    Abstract
    In this study, reference standards and reference multipliers are suggested as a means to compare the citation impact of earlier research publications in physics (from the period of "Little Science" in the early 20th century) with that of contemporary papers (from the period of "Big Science," beginning around 1960). For the development of time-specific reference standards, the authors determined (a) the mean citation rates of papers in selected physics journals as well as (b) the mean citation rates of all papers in physics published in 1900 (Little Science) and in 2000 (Big Science); this was accomplished by relying on the processes of field-specific standardization in bibliometry. For the sake of developing reference multipliers with which the citation impact of earlier papers can be adjusted to the citation impact of contemporary papers, they combined the reference standards calculated for 1900 and 2000 into their ratio. The use of reference multipliers is demonstrated by means of two examples involving the time adjusted h index values for Max Planck and Albert Einstein.
    Type
    a
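     The reference multiplier reduces to a ratio of two time-specific reference standards; the toy calculation below uses invented mean citation rates, not the values determined in the paper.

       mean_citation_rate_1900 = 1.2   # invented reference standard for physics papers of 1900
       mean_citation_rate_2000 = 18.0  # invented reference standard for physics papers of 2000

       multiplier = mean_citation_rate_2000 / mean_citation_rate_1900

       citations_of_1900_paper = 40    # invented citation count of an early paper
       adjusted = citations_of_1900_paper * multiplier
       print(f"multiplier={multiplier:.1f}, adjusted impact = {adjusted:.0f} 'year-2000 equivalent' citations")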
  14. Bornmann, L.; Marx, W.: ¬The wisdom of citing scientists (2014) 0.00
    Abstract
     This Brief Communication discusses the benefits of citation analysis in research evaluation based on Galton's "Wisdom of Crowds" (1907). Citations are based on the assessment of many, which is why they can be considered to have some credibility. However, we show that citations are incomplete assessments and that one cannot assume that a high number of citations correlates with a high level of usefulness. Only when one knows that a rarely cited paper has been widely read is it possible to say, strictly speaking, that it was obviously of little use for further research. Using a comparison with "like" data, we try to show that cited reference analysis allows for a more meaningful analysis of bibliometric data than times-cited analysis.
    Type
    a
  15. Bornmann, L.; Leydesdorff, L.: Statistical tests and research assessments : a comment on Schneider (2012) (2013) 0.00
    Type
    a
  16. Bornmann, L.: Is there currently a scientific revolution in Scientometrics? (2014) 0.00
    Type
    a
  17. Dobrota, M.; Bulajic, M.; Bornmann, L.; Jeremic, V.: A new approach to the QS university ranking using the composite I-distance indicator : uncertainty and sensitivity analyses (2016) 0.00
    Abstract
    Some major concerns of universities are to provide quality in higher education and enhance global competitiveness, thus ensuring a high global rank and an excellent performance evaluation. This article examines the Quacquarelli Symonds (QS) World University Ranking methodology, pointing to a drawback of using subjective, possibly biased, weightings to build a composite indicator (QS scores). We propose an alternative approach to creating QS scores, which is referred to as the composite I-distance indicator (CIDI) methodology. The main contribution is the proposal of a composite indicator weights correction based on the CIDI methodology. It leads to the improved stability and reduced uncertainty of the QS ranking system. The CIDI methodology is also applicable to other university rankings by proposing a specific statistical approach to creating a composite indicator.
    Type
    a
  18. Bornmann, L.: What do altmetrics counts mean? : a plea for content analyses (2016) 0.00
    Type
    a
  19. Bornmann, L.; Bauer, J.: Which of the world's institutions employ the most highly cited researchers : an analysis of the data from highlycited.com (2015) 0.00
    Abstract
    In 2014, Thomson Reuters published a list of the most highly cited researchers worldwide (highlycited.com). Because the data are freely available for downloading and include the names of the researchers' institutions, we produced a ranking of the institutions on the basis of the number of highly cited researchers per institution. This ranking is intended to be a helpful amendment of other available institutional rankings.
    Type
    a
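     Because the downloadable highlycited.com list pairs every researcher with an institution, the institutional ranking described above is essentially a frequency count; the sketch below uses invented rows, not the 2014 data.

       from collections import Counter

       rows = [  # (researcher, institution) pairs as extracted from the list (invented)
           ("Researcher A", "Stanford University"),
           ("Researcher B", "Harvard University"),
           ("Researcher C", "Stanford University"),
           ("Researcher D", "Max Planck Society"),
       ]

       ranking = Counter(institution for _, institution in rows)
       for rank, (institution, count) in enumerate(ranking.most_common(), start=1):
           print(f"{rank}. {institution}: {count} highly cited researcher(s)")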
  20. Bornmann, L.; Bauer, J.: Which of the world's institutions employ the most highly cited researchers : an analysis of the data from highlycited.com (2015) 0.00
    Abstract
    In 2014, Thomson Reuters published a list of the most highly cited researchers worldwide (highlycited.com). Because the data are freely available for downloading and include the names of the researchers' institutions, we produced a ranking of the institutions on the basis of the number of highly cited researchers per institution. This ranking is intended to be a helpful amendment of other available institutional rankings.
    Type
    a