Search (15 results, page 1 of 1)

  • Filter: author_ss:"Bornmann, L."
  1. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.01
    0.0144263785 = product of:
      0.043279134 = sum of:
        0.029076494 = weight(_text_:b in 4186) [ClassicSimilarity], result of:
          0.029076494 = score(doc=4186,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.19572285 = fieldWeight in 4186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4186)
        0.014202639 = product of:
          0.028405279 = sum of:
            0.028405279 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
              0.028405279 = score(doc=4186,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.19345059 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4186)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
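
    The breakdown above is Lucene's ClassicSimilarity explain output. As a minimal sketch (not the search engine's own code), the Python below recomputes result 1's score from the displayed factors: per term, score = queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = sqrt(termFreq) x idf x fieldNorm; the coord factors then scale the sums. The helper name term_score is ours.

      import math

      def term_score(freq, idf, query_norm, field_norm):
          tf = math.sqrt(freq)                    # 1.4142135 for freq=2.0
          query_weight = idf * query_norm         # 3.542962 * 0.041930884 = 0.14855953
          field_weight = tf * idf * field_norm    # 1.4142135 * 3.542962 * 0.0390625 = 0.19572285
          return query_weight * field_weight

      s_b  = term_score(2.0, 3.542962, 0.041930884, 0.0390625)   # weight(_text_:b)  = 0.029076494
      s_22 = term_score(2.0, 3.5018296, 0.041930884, 0.0390625)  # weight(_text_:22) = 0.028405279
      total = (s_b + 0.5 * s_22) * (2.0 / 6.0)                   # coord(1/2) inside, coord(2/6) outside
      print(total)                                               # ~0.01442638, displayed as 0.01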
    
    Abstract
    The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set, including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls. The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs considerably between disciplines, the reflection of these differences in fractionally counted citation distributions cannot be used as a reliable instrument for the classification.
    Date
    22. 1.2011 12:51:07
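    As a hedged illustration of the fractional counting described in the abstract of result 1, the sketch below weights each citation by 1/N, where N is the length of the citing paper's reference list; the paper identifiers and reference counts are invented.

      citing_papers = {
          "citing_1": {"n_references": 10, "cites": ["paper_A", "paper_B"]},
          "citing_2": {"n_references": 50, "cites": ["paper_A"]},
      }

      integer_counts, fractional_counts = {}, {}
      for meta in citing_papers.values():
          for cited in meta["cites"]:
              integer_counts[cited] = integer_counts.get(cited, 0) + 1
              fractional_counts[cited] = fractional_counts.get(cited, 0.0) + 1.0 / meta["n_references"]

      print(integer_counts)     # {'paper_A': 2, 'paper_B': 1}
      print(fractional_counts)  # {'paper_A': ~0.12, 'paper_B': 0.1}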
  2. Ye, F.Y.; Bornmann, L.: "Smart girls" versus "sleeping beauties" in the sciences : the identification of instant and delayed recognition by using the citation angle (2018) 0.01
    0.006853395 = product of:
      0.04112037 = sum of:
        0.04112037 = weight(_text_:b in 2160) [ClassicSimilarity], result of:
          0.04112037 = score(doc=2160,freq=4.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.2767939 = fieldWeight in 2160, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2160)
      0.16666667 = coord(1/6)
    
    Abstract
    In recent years, a number of studies have introduced methods for identifying papers with delayed recognition (so-called "sleeping beauties," SBs) or have presented single publications as cases of SBs. Most recently, Ke, Ferrara, Radicchi, and Flammini (2015, Proceedings of the National Academy of Sciences of the USA, 112(24), 7426-7431) proposed the so-called "beauty coefficient" (denoted as B) to quantify how much a given paper can be considered as a paper with delayed recognition. In this study, the new term smart girl (SG) is suggested to differentiate instant credit or "flashes in the pan" from SBs. Although SG and SB are qualitatively defined, the dynamic citation angle β is introduced in this study as a simple way of identifying SGs and SBs quantitatively, complementing the beauty coefficient B. The citation angles for all articles from 1980 (n = 166,870) in the natural sciences are calculated for identifying SGs and SBs and their extent. We reveal that about 3% of the articles are typical SGs and about 0.1% typical SBs. The potential advantages of the citation angle approach are explained.
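    The beauty coefficient B mentioned in the abstract is, as we recall Ke et al.'s (2015) definition, the sum over the years up to the citation peak of the gap between a straight line drawn from the first-year citations to the peak and the actual yearly citations, each year divided by max(1, c_t). The sketch below is an illustration under that assumption; the citation series are invented.

      def beauty_coefficient(yearly_citations):
          # B as we understand Ke et al. (2015): deviation of actual citations from a
          # straight line running from year 0 to the peak year, summed up to the peak.
          c0 = yearly_citations[0]
          tm = max(range(len(yearly_citations)), key=lambda t: yearly_citations[t])
          if tm == 0:
              return 0.0
          ctm = yearly_citations[tm]
          b = 0.0
          for t in range(tm + 1):
              expected = (ctm - c0) / tm * t + c0
              b += (expected - yearly_citations[t]) / max(1, yearly_citations[t])
          return b

      sleeping_beauty = [0, 0, 1, 0, 2, 1, 3, 40, 60]   # long dormancy, late peak -> large B
      smart_girl      = [10, 35, 38, 40, 5, 2]          # instant recognition -> B near zero or negative
      print(beauty_coefficient(sleeping_beauty))
      print(beauty_coefficient(smart_girl))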
  3. Bornmann, L.; Mutz, R.: Growth rates of modern science : a bibliometric analysis based on the number of publications and cited references (2015) 0.01
    0.006853395 = product of:
      0.04112037 = sum of:
        0.04112037 = weight(_text_:b in 2261) [ClassicSimilarity], result of:
          0.04112037 = score(doc=2261,freq=4.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.2767939 = fieldWeight in 2261, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2261)
      0.16666667 = coord(1/6)
    
    Abstract
    Many studies (in information science) have looked at the growth of science. In this study, we reexamine the question of the growth of science. To do this we (a) use current data up to publication year 2012 and (b) analyze the data across all disciplines and also separately for the natural sciences and for the medical and health sciences. Furthermore, the data were analyzed with an advanced statistical technique, segmented regression analysis, which can identify specific segments with similar growth rates in the history of science. The study is based on two different sets of bibliometric data: (a) the number of publications held as source items in the Web of Science (WoS, Thomson Reuters) per publication year and (b) the number of cited references in the publications of the source items per cited reference year. We looked at the rate at which science has grown since the mid-1600s. In our analysis of cited references we identified three essential growth phases in the development of science, each of which led to growth rates tripling in comparison with the previous phase: from less than 1% up to the middle of the 18th century, to 2 to 3% up to the period between the two world wars, and 8 to 9% to 2010.
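    As a rough illustration of the growth rates discussed in the abstract (not the study's segmented regression), the sketch below reads an annual growth rate off the slope of a log-linear fit of yearly counts; the counts are invented and chosen to grow at roughly 8% per year.

      import math

      years  = [2000, 2004, 2008, 2012]                       # invented
      counts = [1_000_000, 1_360_000, 1_850_000, 2_520_000]   # roughly 8% growth per year

      n = len(years)
      x_mean = sum(years) / n
      y_mean = sum(math.log(c) for c in counts) / n
      slope = (sum((x - x_mean) * (math.log(c) - y_mean) for x, c in zip(years, counts))
               / sum((x - x_mean) ** 2 for x in years))
      print(f"estimated annual growth rate: {math.exp(slope) - 1:.1%}")   # ~8%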
  4. Bornmann, L.: Lässt sich die Qualität von Forschung messen? (2013) 0.01
    0.0058152988 = product of:
      0.03489179 = sum of:
        0.03489179 = weight(_text_:b in 928) [ClassicSimilarity], result of:
          0.03489179 = score(doc=928,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.23486741 = fieldWeight in 928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=928)
      0.16666667 = coord(1/6)
    
    Abstract
    In principle, evaluations in science can be divided into a 'qualitative' form, the assessment of a piece of scientific work (e.g., a manuscript or grant proposal) by competent peers, and a 'quantitative' form, the assessment of scientific work on the basis of bibliometric indicators. Neither form of evaluation is uncontroversial. Critics of peer review point above all to two weaknesses of the procedure: (1) different reviewers rarely agree in their assessment of one and the same piece of scientific work, and (2) reviewers' recommendations show systematic biases. A wide range of objections has likewise been raised for years against the use of citation counts as an indicator of the quality of a scientific work. Citation counts, it is argued, are not 'objective' measurements of scientific quality but a contestable measurement construct. Among other criticisms, it is argued that scientific quality is a complex phenomenon that cannot be measured on a one-dimensional scale (i.e., by citation counts). The paper presents empirical findings on the reliability and fairness of the peer review process as well as research results on the validity of citation counts as an indicator of scientific quality.
  5. Leydesdorff, L.; Radicchi, F.; Bornmann, L.; Castellano, C.; Nooy, W. de: Field-normalized impact factors (IFs) : a comparison of rescaling and fractionally counted IFs (2013) 0.01
    0.0058152988 = product of:
      0.03489179 = sum of:
        0.03489179 = weight(_text_:b in 1108) [ClassicSimilarity], result of:
          0.03489179 = score(doc=1108,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.23486741 = fieldWeight in 1108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.046875 = fieldNorm(doc=1108)
      0.16666667 = coord(1/6)
    
    Abstract
    Two methods for comparing impact factors and citation rates across fields of science are tested against each other using citations to the 3,705 journals in the Science Citation Index 2010 (CD-ROM version of SCI) and the 13 field categories used for the Science and Engineering Indicators of the U.S. National Science Board. We compare (a) normalization by counting citations in proportion to the length of the reference list (1/N of references) with (b) rescaling by dividing citation scores by the arithmetic mean of the citation rate of the cluster. Rescaling is analytical and therefore independent of the quality of the attribution to the sets, whereas fractional counting provides an empirical strategy for normalization among sets (by evaluating the between-group variance). By the fairness test of Radicchi and Castellano (), rescaling outperforms fractional counting of citations for reasons that we consider.
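    A minimal sketch of the rescaling approach (b) described above: each paper's citation count is divided by the arithmetic mean citation rate of its cluster. The field labels and counts below are invented.

      field_citation_counts = {"chemistry": [12, 3, 0, 5], "mathematics": [2, 0, 1, 1]}   # invented

      for field, counts in field_citation_counts.items():
          mean = sum(counts) / len(counts)                 # arithmetic mean of the cluster
          rescaled = [round(c / mean, 2) for c in counts]  # citation score / cluster mean
          print(field, rescaled)
      # chemistry   [2.4, 0.6, 0.0, 1.0]  (mean 5.0)
      # mathematics [2.0, 0.0, 1.0, 1.0]  (mean 1.0)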
  6. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.01
    0.0056810556 = product of:
      0.03408633 = sum of:
        0.03408633 = product of:
          0.06817266 = sum of:
            0.06817266 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.06817266 = score(doc=1239,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    18. 3.2014 19:13:22
  7. Marx, W.; Bornmann, L.; Cardona, M.: Reference standards and reference multipliers for the comparison of the citation impact of papers published in different time periods (2010) 0.00
    0.0048460825 = product of:
      0.029076494 = sum of:
        0.029076494 = weight(_text_:b in 3998) [ClassicSimilarity], result of:
          0.029076494 = score(doc=3998,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.19572285 = fieldWeight in 3998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3998)
      0.16666667 = coord(1/6)
    
    Abstract
    In this study, reference standards and reference multipliers are suggested as a means to compare the citation impact of earlier research publications in physics (from the period of "Little Science" in the early 20th century) with that of contemporary papers (from the period of "Big Science," beginning around 1960). For the development of time-specific reference standards, the authors determined (a) the mean citation rates of papers in selected physics journals as well as (b) the mean citation rates of all papers in physics published in 1900 (Little Science) and in 2000 (Big Science); this was accomplished by relying on the processes of field-specific standardization in bibliometry. For the sake of developing reference multipliers with which the citation impact of earlier papers can be adjusted to the citation impact of contemporary papers, they combined the reference standards calculated for 1900 and 2000 into their ratio. The use of reference multipliers is demonstrated by means of two examples involving the time adjusted h index values for Max Planck and Albert Einstein.
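    A hedged sketch of the reference-multiplier idea from the abstract: the ratio of the reference standards calculated for 2000 and 1900 is used to put a 1900 paper's citation count on the scale of contemporary papers. The reference standards and citation count below are invented.

      mean_citation_rate_1900 = 2.5    # invented reference standard ("Little Science")
      mean_citation_rate_2000 = 25.0   # invented reference standard ("Big Science")

      multiplier = mean_citation_rate_2000 / mean_citation_rate_1900   # 10.0
      citations_of_1900_paper = 120                                    # invented
      print("time-adjusted citation impact:", citations_of_1900_paper * multiplier)   # 1200.0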
  8. Bornmann, L.; Marx, W.: The Anna Karenina principle : a way of thinking about success in science (2012) 0.00
    0.0048460825 = product of:
      0.029076494 = sum of:
        0.029076494 = weight(_text_:b in 449) [ClassicSimilarity], result of:
          0.029076494 = score(doc=449,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.19572285 = fieldWeight in 449, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=449)
      0.16666667 = coord(1/6)
    
    Abstract
    The first sentence of Leo Tolstoy's (1875-1877/2001) novel Anna Karenina is: "Happy families are all alike; every unhappy family is unhappy in its own way." Here, Tolstoy means that for a family to be happy, several key aspects must be given (e.g., good health of all family members, acceptable financial security, and mutual affection). If there is a deficiency in any one or more of these key aspects, the family will be unhappy. In this article, we introduce the Anna Karenina principle as a way of thinking about success in science in three central areas in (modern) science: (a) peer review of research grant proposals and manuscripts (money and journal space as scarce resources), (b) citation of publications (reception as a scarce resource), and (c) new scientific discoveries (recognition as a scarce resource). If resources are scarce at the highly competitive research front (journal space, funds, reception, and recognition), there can be success only when several key prerequisites for the allocation of the resources are fulfilled. If any one of these prerequisites is not fulfilled, the grant proposal, manuscript submission, the published paper, or the discovery will not be successful.
  9. Leydesdorff, L.; Zhou, P.; Bornmann, L.: How can journal impact factors be normalized across fields of science? : An assessment in terms of percentile ranks and fractional counts (2013) 0.00
    0.0048460825 = product of:
      0.029076494 = sum of:
        0.029076494 = weight(_text_:b in 532) [ClassicSimilarity], result of:
          0.029076494 = score(doc=532,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.19572285 = fieldWeight in 532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=532)
      0.16666667 = coord(1/6)
    
    Abstract
    Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (a) fractional counting on the impact factor (IF) and (b) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the 2-year IF (IF2), we consider the 5-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/NSF (National Science Foundation) provides statistically significant between-field effects. Using fractional counting the between-field variance is reduced by 91.7% in the case of IF5, and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to significant improvement in the normalization across fields.
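    As an illustration of the percentile transformation described above, the sketch below replaces citation counts by their percentile in a (skewed) field distribution, from which rank classes such as top-5% or top-1% can be read off; the distribution is invented and the percentile formula is one common convention, not necessarily the one used in the paper.

      def percentile_rank(value, distribution):
          below = sum(1 for v in distribution if v < value)
          ties = sum(1 for v in distribution if v == value)
          return 100.0 * (below + 0.5 * ties) / len(distribution)

      field_counts = [0, 0, 1, 1, 2, 3, 3, 5, 8, 40]   # invented, skewed like citation data
      for c in (0, 3, 40):
          p = percentile_rank(c, field_counts)
          print(f"{c:>3} citations -> percentile {p:5.1f} -> top-{100 - p:.0f}%")
      # 0 -> percentile 10.0 (top-90%), 3 -> 60.0 (top-40%), 40 -> 95.0 (top-5%)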
  10. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016) 0.00
    0.0048460825 = product of:
      0.029076494 = sum of:
        0.029076494 = weight(_text_:b in 3231) [ClassicSimilarity], result of:
          0.029076494 = score(doc=3231,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.19572285 = fieldWeight in 3231, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3231)
      0.16666667 = coord(1/6)
    
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
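    A small sketch of the whole versus fractional counting contrast mentioned in the abstract: with whole-number counting every participating institution receives 1 per paper, with fractional counting each receives 1/k for a paper with k institutions, which is how the top of a ranking can flip. The papers below are invented.

      papers = [   # invented institutional addresses
          ["Harvard University", "Leiden University", "University of Oxford"],
          ["Leiden University"],
          ["Harvard University", "University of Oxford"],
      ]

      whole, fractional = {}, {}
      for institutions in papers:
          for inst in institutions:
              whole[inst] = whole.get(inst, 0) + 1
              fractional[inst] = fractional.get(inst, 0.0) + 1.0 / len(institutions)

      print(whole)        # every institution has 2 papers under whole counting
      print(fractional)   # Leiden ~1.33 leads; Harvard and Oxford ~0.83 each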
  11. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019) 0.00
    0.0048460825 = product of:
      0.029076494 = sum of:
        0.029076494 = weight(_text_:b in 5225) [ClassicSimilarity], result of:
          0.029076494 = score(doc=5225,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.19572285 = fieldWeight in 5225, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5225)
      0.16666667 = coord(1/6)
    
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10%, the proportion of the top-10% most-highly-cited papers assigned to a university, we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each ecosystem of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.
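    A hedged sketch of the PPtop-10% indicator described above: the share of a university's papers that reach the global top-10% citation threshold. The threshold rule and all numbers below are simplifications over invented data; stability intervals and effect sizes are not modeled.

      def pp_top10(university_citations, global_citations):
          # citation count of the paper at the global top-10% boundary (simplified)
          threshold = sorted(global_citations, reverse=True)[len(global_citations) // 10 - 1]
          top = sum(1 for c in university_citations if c >= threshold)
          return 100.0 * top / len(university_citations)

      global_citations = list(range(100))                      # invented field-wide distribution
      university = [95, 93, 91, 88, 70, 60, 40, 12, 5, 3]      # invented
      print(f"PPtop-10% = {pp_top10(university, global_citations):.1f}%")   # 30.0%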
  12. Bornmann, L.; Daniel, H.-D.: Selecting manuscripts for a high-impact journal through peer review : a citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere (2008) 0.00
    0.0038768656 = product of:
      0.023261193 = sum of:
        0.023261193 = weight(_text_:b in 2381) [ClassicSimilarity], result of:
          0.023261193 = score(doc=2381,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.15657827 = fieldWeight in 2381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03125 = fieldNorm(doc=2381)
      0.16666667 = coord(1/6)
    
    Abstract
    All journals that use peer review have to deal with the following question: Does the peer review system fulfill its declared objective to select the best scientific work? We investigated the journal peer-review process at Angewandte Chemie International Edition (AC-IE), one of the prime chemistry journals worldwide, and conducted a citation analysis for Communications that were accepted by the journal (n = 878) or rejected but published elsewhere (n = 959). The results of negative binomial regression models show that holding all other model variables constant, being accepted by AC-IE increases the expected number of citations by up to 50%. A comparison of average citation counts (with 95% confidence intervals) of accepted and rejected (but published elsewhere) Communications with international scientific reference standards was undertaken. As reference standards, (a) mean citation counts for the journal set provided by Thomson Reuters corresponding to the field chemistry and (b) specific reference standards that refer to the subject areas of Chemical Abstracts were used. When compared to reference standards, the mean impact on chemical research is for the most part far above average not only for accepted Communications but also for rejected (but published elsewhere) Communications. However, average and below-average scientific impact is to be expected significantly less frequently for accepted Communications than for rejected Communications. All in all, the results of this study confirm that peer review at AC-IE is able to select the best scientific work with the highest impact on chemical research.
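    As a hedged sketch (not the authors' code) of the negative binomial regression mentioned in the abstract, the snippet below regresses citation counts on an acceptance indicator with statsmodels' GLM and a NegativeBinomial family; the data are simulated so that acceptance raises expected citations by about 50%.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      accepted = rng.integers(0, 2, size=500)                 # 0 = rejected but published elsewhere, 1 = accepted
      mu = np.exp(2.0 + 0.4 * accepted)                       # exp(0.4) ~ 1.5, i.e. ~50% more citations
      citations = rng.negative_binomial(2, 2 / (2 + mu))      # overdispersed simulated citation counts

      X = sm.add_constant(accepted)
      fit = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()
      print(np.exp(fit.params[1]))                            # rate ratio for acceptance, ~1.5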
  13. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.00
    0.0037873704 = product of:
      0.022724222 = sum of:
        0.022724222 = product of:
          0.045448445 = sum of:
            0.045448445 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.045448445 = score(doc=1431,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    22. 8.2014 17:05:18
  14. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.00
    0.0028405278 = product of:
      0.017043166 = sum of:
        0.017043166 = product of:
          0.03408633 = sum of:
            0.03408633 = weight(_text_:22 in 656) [ClassicSimilarity], result of:
              0.03408633 = score(doc=656,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.23214069 = fieldWeight in 656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=656)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    22. 3.2013 19:44:17
  15. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.00
    0.0028405278 = product of:
      0.017043166 = sum of:
        0.017043166 = product of:
          0.03408633 = sum of:
            0.03408633 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
              0.03408633 = score(doc=4681,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.23214069 = fieldWeight in 4681, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4681)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    8. 1.2019 18:22:45