Search (11 results, page 1 of 1)

  • author_ss:"Bornmann, L."
  1. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.05
    0.047951292 = product of:
      0.16782951 = sum of:
        0.15130357 = weight(_text_:government in 4681) [ClassicSimilarity], result of:
          0.15130357 = score(doc=4681,freq=6.0), product of:
            0.23146805 = queryWeight, product of:
              5.6930003 = idf(docFreq=404, maxDocs=44218)
              0.04065836 = queryNorm
            0.65366936 = fieldWeight in 4681, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.6930003 = idf(docFreq=404, maxDocs=44218)
              0.046875 = fieldNorm(doc=4681)
        0.01652594 = product of:
          0.03305188 = sum of:
            0.03305188 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
              0.03305188 = score(doc=4681,freq=2.0), product of:
                0.14237864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04065836 = queryNorm
                0.23214069 = fieldWeight in 4681, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4681)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    A recent publication in Nature reports that public R&D funding is only weakly correlated with the citation impact of a nation's articles as measured by the field-weighted citation index (FWCI; defined by Scopus). On the basis of the supplementary data, we up-scaled the design using Web of Science data for the decade 2003-2013 and OECD funding data for the corresponding decade assuming a 2-year delay (2001-2011). Using negative binomial regression analysis, we found very small coefficients, but the effects of international collaboration are positive and statistically significant, whereas the effects of government funding are negative, an order of magnitude smaller, and statistically nonsignificant (in two of three analyses). In other words, international collaboration improves the impact of research articles, whereas more government funding tends to have a small adverse effect when comparing OECD countries.
    Date
    8. 1.2019 18:22:45
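The relevance figures attached to each hit are Lucene ClassicSimilarity "explain" output. As a sketch (assuming the standard ClassicSimilarity formulas, tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); queryNorm and fieldNorm are taken as given from the output), the factors for the "government" clause of hit 1 can be reproduced:

```python
import math

# Inputs read off the explain tree for doc 4681 / term "government"
freq = 6.0              # termFreq
doc_freq = 404          # docFreq from the idf line
max_docs = 44218        # maxDocs from the idf line
query_norm = 0.04065836
field_norm = 0.046875

tf = math.sqrt(freq)                             # tf(freq=6.0) = 2.4494898
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf = 5.6930003
query_weight = idf * query_norm                  # 0.23146805
field_weight = tf * idf * field_norm             # 0.65366936
score = query_weight * field_weight              # 0.15130357

print(tf, idf, query_weight, field_weight, score)
```

The per-clause weights are then summed and multiplied by the coordination factor (coord(2/7) above, since 2 of 7 query clauses matched) to give the final document score.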
  2. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016) 0.02
    0.017228428 = product of:
      0.120598994 = sum of:
        0.120598994 = weight(_text_:networks in 3230) [ClassicSimilarity], result of:
          0.120598994 = score(doc=3230,freq=8.0), product of:
            0.19231078 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.04065836 = queryNorm
            0.6271047 = fieldWeight in 3230, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.046875 = fieldNorm(doc=3230)
      0.14285715 = coord(1/7)
    
    Abstract
    Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader count data using the publication year 2012 from the Web of Science data. Example networks are shown and explained. The reader can use our base maps to visualize other results with VOSviewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of our base maps is that the user need not produce a network based on all data (e.g., from one year) but can instead collect the Mendeley data for a single institution (or for journals or topics) and match them with our already produced information. Generating such large-scale networks remains a demanding task despite the available computing power and digital data. It is therefore very useful to have base maps and to create the network with the overlay technique.
  3. Marx, W.; Bornmann, L.; Cardona, M.: Reference standards and reference multipliers for the comparison of the citation impact of papers published in different time periods (2010) 0.01
    0.012747741 = product of:
      0.08923419 = sum of:
        0.08923419 = weight(_text_:standards in 3998) [ClassicSimilarity], result of:
          0.08923419 = score(doc=3998,freq=8.0), product of:
            0.18121246 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.04065836 = queryNorm
            0.49242854 = fieldWeight in 3998, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3998)
      0.14285715 = coord(1/7)
    
    Abstract
    In this study, reference standards and reference multipliers are suggested as a means to compare the citation impact of earlier research publications in physics (from the period of "Little Science" in the early 20th century) with that of contemporary papers (from the period of "Big Science," beginning around 1960). For the development of time-specific reference standards, the authors determined (a) the mean citation rates of papers in selected physics journals as well as (b) the mean citation rates of all papers in physics published in 1900 (Little Science) and in 2000 (Big Science); this was accomplished by relying on the processes of field-specific standardization in bibliometrics. For the sake of developing reference multipliers with which the citation impact of earlier papers can be adjusted to the citation impact of contemporary papers, they combined the reference standards calculated for 1900 and 2000 into their ratio. The use of reference multipliers is demonstrated by means of two examples involving the time-adjusted h index values for Max Planck and Albert Einstein.
  4. Bornmann, L.; Daniel, H.-D.: Selecting manuscripts for a high-impact journal through peer review : a citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere (2008) 0.01
    0.010198194 = product of:
      0.07138735 = sum of:
        0.07138735 = weight(_text_:standards in 2381) [ClassicSimilarity], result of:
          0.07138735 = score(doc=2381,freq=8.0), product of:
            0.18121246 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.04065836 = queryNorm
            0.39394283 = fieldWeight in 2381, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.03125 = fieldNorm(doc=2381)
      0.14285715 = coord(1/7)
    
    Abstract
    All journals that use peer review have to deal with the following question: Does the peer review system fulfill its declared objective to select the best scientific work? We investigated the journal peer-review process at Angewandte Chemie International Edition (AC-IE), one of the prime chemistry journals worldwide, and conducted a citation analysis for Communications that were accepted by the journal (n = 878) or rejected but published elsewhere (n = 959). The results of negative binomial regression models show that, holding all other model variables constant, being accepted by AC-IE increases the expected number of citations by up to 50%. A comparison of average citation counts (with 95% confidence intervals) of accepted and rejected (but published elsewhere) Communications with international scientific reference standards was undertaken. As reference standards, (a) mean citation counts for the journal set provided by Thomson Reuters corresponding to the field chemistry and (b) specific reference standards that refer to the subject areas of Chemical Abstracts were used. When compared to reference standards, the mean impact on chemical research is for the most part far above average not only for accepted Communications but also for rejected (but published elsewhere) Communications. However, average and below-average scientific impact is to be expected significantly less frequently for accepted Communications than for rejected Communications. All in all, the results of this study confirm that peer review at AC-IE is able to select the best scientific work with the highest impact on chemical research.
  5. Bornmann, L.; Wagner, C.; Leydesdorff, L.: BRICS countries and scientific excellence : a bibliometric analysis of most frequently cited papers (2015) 0.01
    0.007178512 = product of:
      0.05024958 = sum of:
        0.05024958 = weight(_text_:networks in 2047) [ClassicSimilarity], result of:
          0.05024958 = score(doc=2047,freq=2.0), product of:
            0.19231078 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.04065836 = queryNorm
            0.26129362 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2047)
      0.14285715 = coord(1/7)
    
    Abstract
    The BRICS countries (Brazil, Russia, India, China, and South Africa) are notable for their increasing participation in science and technology. The governments of these countries have been boosting their investments in research and development to become part of the group of nations doing research at a world-class level. This study investigates the development of the BRICS countries in the domain of top-cited papers (top 10% and 1% most frequently cited papers) between 1990 and 2010. To assess the extent to which these countries have become important players at the top level, we compare the BRICS countries with the top-performing countries worldwide. As the analyses of the (annual) growth rates show, with the exception of Russia, the BRICS countries have increased their output in terms of most frequently cited papers at a higher rate than the top-cited countries worldwide. By way of additional analysis, we generate coauthorship networks among authors of highly cited papers for 4 time points to view changes in BRICS participation (1995, 2000, 2005, and 2010). Here, the results show that all BRICS countries succeeded in becoming part of this network, whereby the Chinese collaboration activities focus on the US.
  6. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019) 0.01
    0.007178512 = product of:
      0.05024958 = sum of:
        0.05024958 = weight(_text_:networks in 5225) [ClassicSimilarity], result of:
          0.05024958 = score(doc=5225,freq=2.0), product of:
            0.19231078 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.04065836 = queryNorm
            0.26129362 = fieldWeight in 5225, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5225)
      0.14285715 = coord(1/7)
    
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10% (the proportion of the top-10% most highly cited papers assigned to a university), we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each ecosystem of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.
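A PPtop-10% style proportion can be sketched from raw citation counts as follows. This is a hypothetical helper, not the Leiden Rankings method: the function name and the simple 90th-percentile cutoff are assumptions, whereas the actual indicator uses field- and year-normalized counts and careful handling of ties.

```python
def pp_top10(university_citations, world_citations):
    """Proportion of a university's papers at or above the world
    90th-percentile citation count (a rough sketch of PPtop-10%)."""
    ranked = sorted(world_citations)
    threshold = ranked[int(0.9 * len(ranked))]  # approximate 90th-percentile cutoff
    top = sum(1 for c in university_citations if c >= threshold)
    return top / len(university_citations)
```

For example, against a world set with citation counts 0..99 the cutoff is 90, so a university with papers cited 95, 10, 91, and 5 times gets PPtop-10% = 0.5 (twice the expected 0.1).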
  7. Leydesdorff, L.; Bornmann, L.: Integrated impact indicators compared with impact factors : an alternative research design with policy implications (2011) 0.01
    0.006522866 = product of:
      0.04566006 = sum of:
        0.04566006 = product of:
          0.09132012 = sum of:
            0.09132012 = weight(_text_:policy in 4919) [ClassicSimilarity], result of:
              0.09132012 = score(doc=4919,freq=4.0), product of:
                0.21800333 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.04065836 = queryNorm
                0.41889322 = fieldWeight in 4919, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4919)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    In bibliometrics, the association of "impact" with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories ("Information Science & Library Science" and "Multidisciplinary Sciences"). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
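The integration (summation) idea behind I3 can be sketched as follows. This is a minimal illustration under assumed names, not the paper's implementation: the actual I3 works with percentile rank classes and citation curves normalized to a common scale.

```python
from bisect import bisect_right

def percentile_rank(c, reference):
    """Percentile rank (0-100) of citation count c within a reference set."""
    ranked = sorted(reference)
    return 100.0 * bisect_right(ranked, c) / len(ranked)

def i3(document_set, reference):
    """Sum of percentile ranks of a document set, each paper scored
    against the same reference distribution (impacts add up, not average)."""
    return sum(percentile_rank(c, reference) for c in document_set)
```

Because ranks are determined at the paper level and then summed, the result decomposes exactly across journals, nations, or institutions, which is the property the abstract emphasizes.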
  8. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.00
    0.004721697 = product of:
      0.03305188 = sum of:
        0.03305188 = product of:
          0.06610376 = sum of:
            0.06610376 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.06610376 = score(doc=1239,freq=2.0), product of:
                0.14237864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04065836 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    18. 3.2014 19:13:22
  9. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.00
    0.0031477981 = product of:
      0.022034585 = sum of:
        0.022034585 = product of:
          0.04406917 = sum of:
            0.04406917 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.04406917 = score(doc=1431,freq=2.0), product of:
                0.14237864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04065836 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 8.2014 17:05:18
  10. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.00
    0.0023608485 = product of:
      0.01652594 = sum of:
        0.01652594 = product of:
          0.03305188 = sum of:
            0.03305188 = weight(_text_:22 in 656) [ClassicSimilarity], result of:
              0.03305188 = score(doc=656,freq=2.0), product of:
                0.14237864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04065836 = queryNorm
                0.23214069 = fieldWeight in 656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=656)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2013 19:44:17
  11. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.00
    0.0019673738 = product of:
      0.013771616 = sum of:
        0.013771616 = product of:
          0.027543232 = sum of:
            0.027543232 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
              0.027543232 = score(doc=4186,freq=2.0), product of:
                0.14237864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04065836 = queryNorm
                0.19345059 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4186)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 1.2011 12:51:07