Search (30 results, page 1 of 2)

  • author_ss:"Bornmann, L."
  1. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.15
    0.15367146 = product of:
      0.23050718 = sum of:
        0.12642162 = weight(_text_:citation in 4681) [ClassicSimilarity], result of:
          0.12642162 = score(doc=4681,freq=6.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.5384232 = fieldWeight in 4681, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=4681)
        0.104085565 = sum of:
          0.06338157 = weight(_text_:index in 4681) [ClassicSimilarity], result of:
            0.06338157 = score(doc=4681,freq=2.0), product of:
              0.21880072 = queryWeight, product of:
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.050071523 = queryNorm
              0.28967714 = fieldWeight in 4681, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.046875 = fieldNorm(doc=4681)
          0.040703997 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
            0.040703997 = score(doc=4681,freq=2.0), product of:
              0.17534193 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050071523 = queryNorm
              0.23214069 = fieldWeight in 4681, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4681)
      0.6666667 = coord(2/3)
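The indented tree above is standard Lucene ClassicSimilarity "explain" output. Its leaf values can be recomputed from the TF-IDF formula; a minimal sketch (the helper function is ours, all constants are taken from the tree above):

```python
import math

def classic_sim_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one leaf of a Lucene ClassicSimilarity explain tree.

    score = queryWeight * fieldWeight, where
      queryWeight = idf * queryNorm
      fieldWeight = tf * idf * fieldNorm
      tf  = sqrt(freq)
      idf = 1 + ln(maxDocs / (docFreq + 1))
    """
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Leaf for _text_:citation in doc 4681: freq=6, docFreq=1104, maxDocs=44218
citation = classic_sim_term_score(6.0, 1104, 44218, 0.050071523, 0.046875)

# The document score adds the second clause (0.104085565, the index/22
# sub-sum above) and multiplies by coord(2/3): 2 of 3 query clauses match.
total = (citation + 0.104085565) * 2.0 / 3.0

print(round(citation, 8))  # ~0.12642162, as in the tree
print(round(total, 8))     # ~0.15367146, the hit's final score
```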
    
    Abstract
    A recent publication in Nature reports that public R&D funding is only weakly correlated with the citation impact of a nation's articles as measured by the field-weighted citation index (FWCI; defined by Scopus). On the basis of the supplementary data, we up-scaled the design using Web of Science data for the decade 2003-2013 and OECD funding data for the corresponding decade assuming a 2-year delay (2001-2011). Using negative binomial regression analysis, we found very small coefficients, but the effects of international collaboration are positive and statistically significant, whereas the effects of government funding are negative, an order of magnitude smaller, and statistically nonsignificant (in two of three analyses). In other words, international collaboration improves the impact of research articles, whereas more government funding tends to have a small adverse effect when comparing OECD countries.
    Date
    8. 1.2019 18:22:45
  2. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.13
    0.13046543 = product of:
      0.19569814 = sum of:
        0.16856214 = weight(_text_:citation in 1431) [ClassicSimilarity], result of:
          0.16856214 = score(doc=1431,freq=6.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.71789753 = fieldWeight in 1431, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0625 = fieldNorm(doc=1431)
        0.027136 = product of:
          0.054272 = sum of:
            0.054272 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.054272 = score(doc=1431,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Properties of a percentile-based rating scale needed in bibliometrics are formulated. Based on these properties, P100 was recently introduced as a new citation-rank approach (Bornmann, Leydesdorff, & Wang, 2013). In this paper, we conceptualize P100 and propose an improvement which we call P100'. Advantages and disadvantages of citation-rank indicators are noted.
    Date
    22. 8.2014 17:05:18
  3. Marx, W.; Bornmann, L.; Cardona, M.: Reference standards and reference multipliers for the comparison of the citation impact of papers published in different time periods (2010) 0.12
    0.12489053 = product of:
      0.18733579 = sum of:
        0.1609268 = weight(_text_:citation in 3998) [ClassicSimilarity], result of:
          0.1609268 = score(doc=3998,freq=14.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.685379 = fieldWeight in 3998, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3998)
        0.026408987 = product of:
          0.052817974 = sum of:
            0.052817974 = weight(_text_:index in 3998) [ClassicSimilarity], result of:
              0.052817974 = score(doc=3998,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.24139762 = fieldWeight in 3998, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3998)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this study, reference standards and reference multipliers are suggested as a means to compare the citation impact of earlier research publications in physics (from the period of "Little Science" in the early 20th century) with that of contemporary papers (from the period of "Big Science," beginning around 1960). For the development of time-specific reference standards, the authors determined (a) the mean citation rates of papers in selected physics journals as well as (b) the mean citation rates of all papers in physics published in 1900 (Little Science) and in 2000 (Big Science); this was accomplished by relying on the processes of field-specific standardization in bibliometrics. To develop reference multipliers with which the citation impact of earlier papers can be adjusted to the citation impact of contemporary papers, they combined the reference standards calculated for 1900 and 2000 into their ratio. The use of reference multipliers is demonstrated by means of two examples involving the time-adjusted h index values for Max Planck and Albert Einstein.
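The reference-multiplier idea described above reduces to simple arithmetic: take the ratio of the two time-specific reference standards and scale an early paper's citation count by it. A sketch with invented numbers (the real standards come from the paper's field-normalized data, not from these values):

```python
# Hypothetical mean citation rates; the paper derives these from
# field-specific reference standards, not from these made-up values.
mean_citations_1900 = 2.5   # "Little Science" reference standard (assumed)
mean_citations_2000 = 25.0  # "Big Science" reference standard (assumed)

# Reference multiplier: ratio of the two time-specific standards.
multiplier = mean_citations_2000 / mean_citations_1900  # 10.0

# Adjust an early paper's citation count to the contemporary scale.
early_paper_citations = 120
adjusted = early_paper_citations * multiplier
print(adjusted)  # 1200.0
```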
    Theme
    Citation indexing
  4. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.12
    0.118591204 = product of:
      0.1778868 = sum of:
        0.1609268 = weight(_text_:citation in 4186) [ClassicSimilarity], result of:
          0.1609268 = score(doc=4186,freq=14.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.685379 = fieldWeight in 4186, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4186)
        0.01696 = product of:
          0.03392 = sum of:
            0.03392 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
              0.03392 = score(doc=4186,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.19345059 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4186)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set, including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls. The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs largely between disciplines, the reflection of these differences in fractionally counted citation distributions cannot be used as a reliable instrument for the classification.
    Date
    22. 1.2011 12:51:07
  5. Leydesdorff, L.; Radicchi, F.; Bornmann, L.; Castellano, C.; Nooy, W. de: Field-normalized impact factors (IFs) : a comparison of rescaling and fractionally counted IFs (2013) 0.12
    0.11844659 = product of:
      0.17766988 = sum of:
        0.14597909 = weight(_text_:citation in 1108) [ClassicSimilarity], result of:
          0.14597909 = score(doc=1108,freq=8.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.62171745 = fieldWeight in 1108, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=1108)
        0.031690784 = product of:
          0.06338157 = sum of:
            0.06338157 = weight(_text_:index in 1108) [ClassicSimilarity], result of:
              0.06338157 = score(doc=1108,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.28967714 = fieldWeight in 1108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1108)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Two methods for comparing impact factors and citation rates across fields of science are tested against each other using citations to the 3,705 journals in the Science Citation Index 2010 (CD-Rom version of SCI) and the 13 field categories used for the Science and Engineering Indicators of the U.S. National Science Board. We compare (a) normalization by counting citations in proportion to the length of the reference list (1/N of references) with (b) rescaling by dividing citation scores by the arithmetic mean of the citation rate of the cluster. Rescaling is analytical and therefore independent of the quality of the attribution to the sets, whereas fractional counting provides an empirical strategy for normalization among sets (by evaluating the between-group variance). By the fairness test of Radicchi and Castellano, rescaling outperforms fractional counting of citations for reasons that we consider.
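The two normalizations compared in this abstract can be sketched in a few lines. The data values below are invented; only the formulas follow the abstract (fractional counting: each citation counts 1/N of the citing paper's reference list; rescaling: divide by the cluster's mean citation rate):

```python
from statistics import mean

# (a) Fractional counting: each citation to the target counts 1/N,
# where N is the length of the citing paper's reference list.
citing_ref_list_lengths = [10, 20, 50, 25]  # illustrative values only
fractional_count = sum(1.0 / n for n in citing_ref_list_lengths)

# (b) Rescaling: divide a paper's citation score by the arithmetic
# mean of the citation rates in its field/cluster.
field_citation_rates = [1, 3, 4, 8, 14]     # assumed cluster rates
paper_citations = 8
rescaled = paper_citations / mean(field_citation_rates)

print(round(fractional_count, 3))  # 0.21 fractional citations
print(rescaled)                    # 8 / 6 = 1.333...
```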
  6. Leydesdorff, L.; Zhou, P.; Bornmann, L.: How can journal impact factors be normalized across fields of science? : An assessment in terms of percentile ranks and fractional counts (2013) 0.12
    0.11557063 = product of:
      0.17335594 = sum of:
        0.136008 = weight(_text_:citation in 532) [ClassicSimilarity], result of:
          0.136008 = score(doc=532,freq=10.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.57925105 = fieldWeight in 532, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=532)
        0.03734795 = product of:
          0.0746959 = sum of:
            0.0746959 = weight(_text_:index in 532) [ClassicSimilarity], result of:
              0.0746959 = score(doc=532,freq=4.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.3413878 = fieldWeight in 532, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=532)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (a) fractional counting on the impact factor (IF) and (b) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the 2-year IF (IF2), we consider the 5-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/NSF (National Science Foundation) provides statistically significant between-field effects. Using fractional counting, the between-field variance is reduced by 91.7% in the case of IF5, and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to significant improvement in the normalization across fields.
    Aid
    Science Citation Index
  7. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.10
    0.09784907 = product of:
      0.1467736 = sum of:
        0.12642162 = weight(_text_:citation in 656) [ClassicSimilarity], result of:
          0.12642162 = score(doc=656,freq=6.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.5384232 = fieldWeight in 656, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=656)
        0.020351999 = product of:
          0.040703997 = sum of:
            0.040703997 = weight(_text_:22 in 656) [ClassicSimilarity], result of:
              0.040703997 = score(doc=656,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.23214069 = fieldWeight in 656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=656)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    According to current research in bibliometrics, percentiles (or percentile rank classes) are the most suitable method for normalizing the citation counts of individual publications in terms of the subject area, the document type, and the publication year. Up to now, bibliometric research has concerned itself primarily with the calculation of percentiles. This study suggests how percentiles (and percentile rank classes) can be analyzed meaningfully for an evaluation study. Publication sets from four universities are compared with each other to provide sample data. These suggestions take into account, on the one hand, the distribution of percentiles over the publications in the sets (universities here) and, on the other hand, concentrate on the range of publications with the highest citation impact, that is, the range that is usually of most interest in the evaluation of scientific performance.
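A citation percentile as discussed here can be sketched as follows. This uses one common counting convention (share of reference-set papers cited less often); the abstract does not fix a convention, and the reference set below is invented:

```python
def percentile_rank(citations, reference_set):
    """Percentage of papers in the reference set (same field, document
    type, and publication year) with fewer citations -- one common
    convention; the study discusses how such values can be analyzed."""
    below = sum(1 for c in reference_set if c < citations)
    return 100.0 * below / len(reference_set)

# Assumed reference set: citation counts of comparable papers.
reference = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40]

print(percentile_rank(5, reference))   # 50.0 -> middle of the distribution
print(percentile_rank(40, reference))  # 90.0 -> e.g. a "top-10%" rank class
```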
    Date
    22. 3.2013 19:44:17
  8. Bornmann, L.; Mutz, R.; Daniel, H.D.: Do we need the h index and its variants in addition to standard bibliometric measures? (2009) 0.10
    0.0967142 = product of:
      0.1450713 = sum of:
        0.086019 = weight(_text_:citation in 2861) [ClassicSimilarity], result of:
          0.086019 = score(doc=2861,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.36635053 = fieldWeight in 2861, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
        0.059052292 = product of:
          0.118104585 = sum of:
            0.118104585 = weight(_text_:index in 2861) [ClassicSimilarity], result of:
              0.118104585 = score(doc=2861,freq=10.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.5397815 = fieldWeight in 2861, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2861)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this study, we investigate whether there is a need for the h index and its variants in addition to standard bibliometric measures (SBMs). Results from our recent study (L. Bornmann, R. Mutz, & H.-D. Daniel, 2008) have indicated that there are two types of indices: One type of indices (e.g., h index) describes the most productive core of a scientist's output and informs about the number of papers in the core. The other type of indices (e.g., a index) depicts the impact of the papers in the core. In evaluative bibliometric studies, the two dimensions quantity and quality of output are usually assessed using the SBMs number of publications (for the quantity dimension) and total citation counts (for the impact dimension). We additionally included the SBMs into the factor analysis. The results of the newly calculated analysis indicate that there is a high intercorrelation between number of publications and the indices that load substantially on the factor Quantity of the Productive Core as well as between total citation counts and the indices that load substantially on the factor Impact of the Productive Core. The high-loading indices and SBMs within one performance dimension could be called redundant in empirical application, as high intercorrelations between different indicators are a sign of measuring something similar (or the same). Based on our findings, we propose the use of any pair of indicators (one relating to the number of papers in a researcher's productive core and one relating to the impact of these core papers) as a meaningful approach for comparing scientists.
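For reference, the h index discussed above can be computed like this. This is the standard definition (largest h such that h papers each have at least h citations), not code from the paper, and the citation counts are invented:

```python
def h_index(citation_counts):
    """Largest h such that h papers have >= h citations each:
    the size of the 'productive core' of a scientist's output."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

papers = [25, 8, 5, 3, 3, 1, 0]  # assumed citation counts per paper
print(h_index(papers))           # 3: papers in the productive core
print(sum(papers))               # 45: total citation counts, an SBM
```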
    Object
    h-Index
  9. Bornmann, L.: Is collaboration among scientists related to the citation impact of papers because their quality increases with collaboration? : an analysis based on data from F1000Prime and normalized citation scores (2017) 0.06
    0.06411479 = product of:
      0.19234435 = sum of:
        0.19234435 = weight(_text_:citation in 3539) [ClassicSimilarity], result of:
          0.19234435 = score(doc=3539,freq=20.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.8191847 = fieldWeight in 3539, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3539)
      0.33333334 = coord(1/3)
    
    Abstract
    In recent years, the relationship between collaboration among scientists and the citation impact of papers has been frequently investigated. Most of the studies show that the two variables are closely related: An increasing collaboration activity (measured in terms of number of authors, number of affiliations, and number of countries) is associated with an increased citation impact. However, it is not clear whether the increased citation impact is based on the higher quality of papers that profit from more than one scientist giving expert input or on other (citation-specific) factors. Thus, the current study addresses this question by using two comprehensive data sets with publications (in the biomedical area) including quality assessments by experts (F1000Prime member scores) and citation data for the publications. The study is based on more than 15,000 papers. Robust regression models are used to investigate the relationship between number of authors, number of affiliations, and number of countries, respectively, and citation impact, controlling for the papers' quality (measured by F1000Prime expert ratings). The results point out that the effect of collaboration activities on impact is largely independent of the papers' quality. The citation advantage is apparently not quality related; citation-specific factors (e.g., self-citations) seem to be important here.
  10. Leydesdorff, L.; Bornmann, L.; Mutz, R.; Opthof, T.: Turning the tables on citation analysis one more time : principles for comparing sets of documents (2011) 0.05
    0.054403197 = product of:
      0.16320959 = sum of:
        0.16320959 = weight(_text_:citation in 4485) [ClassicSimilarity], result of:
          0.16320959 = score(doc=4485,freq=10.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.69510126 = fieldWeight in 4485, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=4485)
      0.33333334 = coord(1/3)
    
    Abstract
    We submit newly developed citation impact indicators based not on arithmetic averages of citations but on percentile ranks. Citation distributions are, as a rule, highly skewed and should not be arithmetically averaged. With percentile ranks, the citation score of each paper is rated in terms of its percentile in the citation distribution. The percentile ranks approach allows for the formulation of a more abstract indicator scheme that can be used to organize and/or schematize different impact indicators according to three degrees of freedom: the selection of the reference sets, the evaluation criteria, and the choice of whether or not to define the publication sets as independent. Bibliometric data of seven principal investigators (PIs) of the Academic Medical Center of the University of Amsterdam are used as an exemplary dataset. We demonstrate that the proposed family of indicators [R(6), R(100), R(6, k), R(100, k)] are an improvement on averages-based indicators because one can account for the shape of the distributions of citations over papers.
  11. Bornmann, L.; Daniel, H.-D.: Universality of citation distributions : a validation of Radicchi et al.'s relative indicator cf = c/c0 at the micro level using data from chemistry (2009) 0.05
    0.05364227 = product of:
      0.1609268 = sum of:
        0.1609268 = weight(_text_:citation in 2954) [ClassicSimilarity], result of:
          0.1609268 = score(doc=2954,freq=14.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.685379 = fieldWeight in 2954, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2954)
      0.33333334 = coord(1/3)
    
    Abstract
    In a recently published PNAS paper, Radicchi, Fortunato, and Castellano (2008) propose the relative indicator cf as an unbiased indicator for citation performance across disciplines (fields, subject areas). To calculate cf, the citation rate for a single paper is divided by the average number of citations for all papers in the discipline in which the single paper has been categorized. cf values are said to lead to a universality of discipline-specific citation distributions. Using a comprehensive dataset of an evaluation study on Angewandte Chemie International Edition (AC-IE), we tested the advantage of using this indicator in practical application at the micro level, as compared with (1) simple citation rates, and (2) z-scores, which have been used in psychological testing for many years for normalization of test scores. To calculate z-scores, the mean number of citations of the papers within a discipline is subtracted from the citation rate of a single paper, and the difference is then divided by the citations' standard deviation for a discipline. Our results indicate that z-scores are better suited than cf values to produce universality of discipline-specific citation distributions.
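Both indicators compared in this abstract are one-line formulas; a sketch with invented data, where only the formulas follow the abstract (cf = c/c0, with c0 the discipline's average citation rate; z = (c - mean)/sd):

```python
from statistics import mean, stdev

# Assumed citation counts for all papers of one discipline.
discipline = [0, 1, 2, 2, 3, 4, 6, 10, 12, 20]
c0 = mean(discipline)   # average citation rate of the discipline
sd = stdev(discipline)  # standard deviation of the discipline's citations

paper_citations = 12

# Relative indicator of Radicchi et al.: cf = c / c0
cf = paper_citations / c0

# z-score, as used for normalization of test scores: (c - mean) / sd
z = (paper_citations - c0) / sd

print(cf)           # 12 / 6 = 2.0 (paper cited twice the field average)
print(round(z, 2))  # about one standard deviation above the mean
```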
  12. Leydesdorff, L.; Bornmann, L.: Integrated impact indicators compared with impact factors : an alternative research design with policy implications (2011) 0.05
    0.049663093 = product of:
      0.14898928 = sum of:
        0.14898928 = weight(_text_:citation in 4919) [ClassicSimilarity], result of:
          0.14898928 = score(doc=4919,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.6345377 = fieldWeight in 4919, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4919)
      0.33333334 = coord(1/3)
    
    Abstract
    In bibliometrics, the association of "impact" with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories ("Information Science & Library Science" and "Multidisciplinary Sciences"). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
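The integration idea behind I3 can be sketched roughly as summing (rather than averaging) the percentile scores of a set's papers. This is a loose illustration under assumptions: the percentile convention and all data below are invented, and the paper's actual indicator works with percentile rank classes on a normalized scale:

```python
def percentile(citations, reference_set):
    """Percentile of a paper's citations within the reference set
    (one simple convention, chosen here for illustration)."""
    below = sum(1 for c in reference_set if c < citations)
    return 100.0 * below / len(reference_set)

# Assumed reference set: citation counts of all papers in the field.
reference = [0, 0, 1, 2, 3, 5, 8, 13, 21, 34]

# I3 for a document set: integrate (sum) the percentile scores of its
# papers instead of averaging their raw citation counts.
journal_a = [1, 3, 21]  # assumed citation counts of one journal's papers
i3_a = sum(percentile(c, reference) for c in journal_a)
print(i3_a)  # 20.0 + 40.0 + 80.0 = 140.0
```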
  13. Bornmann, L.: How much does the expected number of citations for a publication change if it contains the address of a specific scientific institute? : a new approach for the analysis of citation data on the institutional level based on regression models (2016) 0.05
    0.049663093 = product of:
      0.14898928 = sum of:
        0.14898928 = weight(_text_:citation in 3095) [ClassicSimilarity], result of:
          0.14898928 = score(doc=3095,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.6345377 = fieldWeight in 3095, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3095)
      0.33333334 = coord(1/3)
    
    Abstract
    Citation data for institutes are generally provided as numbers of citations or as relative citation rates (as, for example, in the Leiden Ranking). These numbers can then be compared between the institutes. This study aims to present a new approach for the evaluation of citation data at the institutional level, based on regression models. As example data, the study includes all articles and reviews from the Web of Science for the publication year 2003 (n = 886,416 papers). The study is based on an in-house database of the Max Planck Society. The study investigates how much the expected number of citations for a publication changes if it contains the address of an institute. The calculation of the expected values allows, on the one hand, investigating how the citation impact of the papers of an institute appears in comparison with the total of all papers. On the other hand, the expected values for several institutes can be compared with one another or with a set of randomly selected publications. Besides the institutes, the regression models include factors which can be assumed to have a general influence on citation counts (e.g., the number of authors).
  14. Bornmann, L.; Daniel, H.-D.: Multiple publication on a single research study: does it pay? : The influence of number of research articles on total citation counts in biomedicine (2007) 0.05
    0.045336 = product of:
      0.136008 = sum of:
        0.136008 = weight(_text_:citation in 444) [ClassicSimilarity], result of:
          0.136008 = score(doc=444,freq=10.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.57925105 = fieldWeight in 444, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=444)
      0.33333334 = coord(1/3)
    
    Abstract
    Scientists may seek to report a single definable body of research in more than one publication, that is, in repeated reports of the same work or in fractional reports, in order to disseminate their research as widely as possible in the scientific community. Up to now, however, it has not been examined whether this strategy of "multiple publication" in fact leads to greater reception of the research. In the present study, we investigate the influence of number of articles reporting the results of a single study on reception in the scientific community (total citation counts of an article on a single study). Our data set consists of 96 applicants for a research fellowship from the Boehringer Ingelheim Fonds (BIF), an international foundation for the promotion of basic research in biomedicine. The applicants reported to us all articles that they had published within the framework of their doctoral research projects. On this single project, the applicants had published from 1 to 16 articles (M = 4; Mdn = 3). The results of a regression model with an interaction term show that the practice of multiple publication of research study results does in fact lead to greater reception of the research (higher total citation counts) in the scientific community. However, reception is dependent upon length of article: the longer the article, the more total citation counts increase with the number of articles. Thus, it pays for scientists to practice multiple publication of study results in the form of sizable reports.
    Theme
    Citation indexing
  15. Bornmann, L.; Daniel, H.-D.: What do citation counts measure? : a review of studies on citing behavior (2008) 0.04
    0.040549748 = product of:
      0.12164924 = sum of:
        0.12164924 = weight(_text_:citation in 1729) [ClassicSimilarity], result of:
          0.12164924 = score(doc=1729,freq=8.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.5180979 = fieldWeight in 1729, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1729)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to present a narrative review of studies on the citing behavior of scientists, covering mainly research published in the last 15 years. Based on the results of these studies, the paper seeks to answer the question of the extent to which scientists are motivated to cite a publication not only to acknowledge intellectual and cognitive influences of scientific peers, but also for other, possibly non-scientific, reasons. Design/methodology/approach - The review covers research published from the early 1960s up to mid-2005 (approximately 30 studies on citing behavior-reporting results in about 40 publications). Findings - The general tendency of the results of the empirical studies makes it clear that citing behavior is not motivated solely by the wish to acknowledge intellectual and cognitive influences of colleague scientists, since the individual studies reveal also other, in part non-scientific, factors that play a part in the decision to cite. However, the results of the studies must also be deemed scarcely reliable: the studies vary widely in design, and their results can hardly be replicated. Many of the studies have methodological weaknesses. Furthermore, there is evidence that the different motivations of citers are "not so different or 'randomly given' to such an extent that the phenomenon of citation would lose its role as a reliable measure of impact". Originality/value - Given the increasing importance of evaluative bibliometrics in the world of scholarship, the question "What do citation counts measure?" is a particularly relevant and topical issue.
    Theme
    Citation indexing
  16. Ye, F.Y.; Bornmann, L.: "Smart girls" versus "sleeping beauties" in the sciences : the identification of instant and delayed recognition by using the citation angle (2018) 0.04
    0.040549748 = product of:
      0.12164924 = sum of:
        0.12164924 = weight(_text_:citation in 2160) [ClassicSimilarity], result of:
          0.12164924 = score(doc=2160,freq=8.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.5180979 = fieldWeight in 2160, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2160)
      0.33333334 = coord(1/3)
    
    Abstract
    In recent years, a number of studies have introduced methods for identifying papers with delayed recognition (so-called "sleeping beauties," SBs) or have presented single publications as cases of SBs. Most recently, Ke, Ferrara, Radicchi, and Flammini (2015, Proceedings of the National Academy of Sciences of the USA, 112(24), 7426-7431) proposed the so-called "beauty coefficient" (denoted as B) to quantify how much a given paper can be considered as a paper with delayed recognition. In this study, the new term smart girl (SG) is suggested to differentiate instant credit or "flashes in the pan" from SBs. Although SG and SB are qualitatively defined, the dynamic citation angle β is introduced in this study as a simple way for identifying SGs and SBs quantitatively - complementing the beauty coefficient B. The citation angles for all articles from 1980 (n = 166,870) in the natural sciences are calculated to identify SGs and SBs and to determine their prevalence. We reveal that about 3% of the articles are typical SGs and about 0.1% typical SBs. The potential advantages of the citation angle approach are explained.
  17. Bornmann, L.; Ye, A.; Ye, F.: Identifying landmark publications in the long run using field-normalized citation data (2018) 0.04
    0.040549748 = product of:
      0.12164924 = sum of:
        0.12164924 = weight(_text_:citation in 4196) [ClassicSimilarity], result of:
          0.12164924 = score(doc=4196,freq=8.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.5180979 = fieldWeight in 4196, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4196)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to propose an approach for identifying landmark papers in the long run. These publications reach a very high level of citation impact and are able to remain on this level across many citing years. In recent years, several studies have been published which deal with the citation history of publications and try to identify landmark publications. Design/methodology/approach - In contrast to other studies published hitherto, this study is based on a broad data set with papers published between 1980 and 1990 for identifying the landmark papers. The authors analyzed the citation histories of about five million papers across 25 years. Findings - The results of this study reveal that 1,013 papers (less than 0.02 percent) are "outstandingly cited" in the long run. The cluster analyses of the papers show that they received the high impact level very soon after publication and remained on this level over decades. Only a slight impact decline is visible over the years. Originality/value - For practical reasons, approaches for identifying landmark papers should be as simple as possible. The approach proposed in this study is based on standard methods in bibliometrics.
  18. Bornmann, L.: How well does a university perform in comparison with its peers? : The use of odds, and odds ratios, for the comparison of institutional citation impact using the Leiden Rankings (2015) 0.04
    0.040142205 = product of:
      0.12042661 = sum of:
        0.12042661 = weight(_text_:citation in 2340) [ClassicSimilarity], result of:
          0.12042661 = score(doc=2340,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.51289076 = fieldWeight in 2340, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2340)
      0.33333334 = coord(1/3)
    
    Abstract
    This study presents the calculation of odds, and odds ratios, for the comparison of the citation impact of universities in the Leiden Ranking. Odds and odds ratios can be used to measure the performance difference between a selected university and competing institutions, or the average of selected competitors, in a relatively simple but clear way.
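As a toy illustration of the odds-ratio approach described above (the counts are invented, not taken from the Leiden Ranking or from this study):

```python
# Hypothetical counts: papers in the top 10% most frequently cited
# (a Leiden Ranking PP(top 10%)-style indicator) vs. the rest
univ_a_top, univ_a_rest = 150, 850   # invented figures for "University A"
univ_b_top, univ_b_rest = 100, 900   # invented figures for "University B"

odds_a = univ_a_top / univ_a_rest    # odds that a paper of A is highly cited
odds_b = univ_b_top / univ_b_rest    # odds that a paper of B is highly cited
odds_ratio = odds_a / odds_b         # ~1.59: A's odds are ~59% higher than B's
```

An odds ratio above 1 means the selected university outperforms the comparator on the chosen impact indicator; below 1 means the reverse.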
  19. Bornmann, L.; Haunschild, R.: Relative Citation Ratio (RCR) : an empirical attempt to study a new field-normalized bibliometric indicator (2017) 0.04
    0.040142205 = product of:
      0.12042661 = sum of:
        0.12042661 = weight(_text_:citation in 3541) [ClassicSimilarity], result of:
          0.12042661 = score(doc=3541,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.51289076 = fieldWeight in 3541, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3541)
      0.33333334 = coord(1/3)
    
    Abstract
    Hutchins, Yuan, Anderson, and Santangelo (2015) proposed the Relative Citation Ratio (RCR) as a new field-normalized impact indicator. This study investigates the RCR by correlating it on the level of single publications with established field-normalized indicators and assessments of the publications by peers. We find that the RCR correlates highly with established field-normalized indicators, but the correlation between RCR and peer assessments is only low to medium.
  20. Bornmann, L.; Daniel, H.-D.: Selecting manuscripts for a high-impact journal through peer review : a citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere (2008) 0.04
    0.039730474 = product of:
      0.11919142 = sum of:
        0.11919142 = weight(_text_:citation in 2381) [ClassicSimilarity], result of:
          0.11919142 = score(doc=2381,freq=12.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.50763017 = fieldWeight in 2381, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.03125 = fieldNorm(doc=2381)
      0.33333334 = coord(1/3)
    
    Abstract
    All journals that use peer review have to deal with the following question: Does the peer review system fulfill its declared objective to select the best scientific work? We investigated the journal peer-review process at Angewandte Chemie International Edition (AC-IE), one of the prime chemistry journals worldwide, and conducted a citation analysis for Communications that were accepted by the journal (n = 878) or rejected but published elsewhere (n = 959). The results of negative binomial regression models show that holding all other model variables constant, being accepted by AC-IE increases the expected number of citations by up to 50%. A comparison of average citation counts (with 95% confidence intervals) of accepted and rejected (but published elsewhere) Communications with international scientific reference standards was undertaken. As reference standards, (a) mean citation counts for the journal set provided by Thomson Reuters corresponding to the field chemistry and (b) specific reference standards that refer to the subject areas of Chemical Abstracts were used. When compared to reference standards, the mean impact on chemical research is for the most part far above average not only for accepted Communications but also for rejected (but published elsewhere) Communications. However, average and below-average scientific impact is to be expected significantly less frequently for accepted Communications than for rejected Communications. All in all, the results of this study confirm that peer review at AC-IE is able to select the best scientific work with the highest impact on chemical research.
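The "up to 50%" figure in the abstract above follows from the log link used in negative binomial regression: a coefficient b for a 0/1 dummy variable multiplies the expected count by exp(b). A minimal sketch with an illustrative coefficient (not the study's actual estimate):

```python
import math

# Illustrative coefficient for the dummy "accepted by AC-IE" (0 or 1); the
# value is chosen so that exp(b) = 1.5, i.e. a 50% increase in the expected
# number of citations, holding all other covariates constant
b_accepted = math.log(1.5)
multiplier = math.exp(b_accepted)   # expected-citations factor: 1.5
```

This multiplicative reading is what "increases the expected number of citations by up to 50%" expresses in the model.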
    Content
    See also: Erratum Re: Selecting manuscripts for a high-impact journal through peer review: A citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere. In: Journal of the American Society for Information Science and Technology 59(2008) no.12, S.2037-2038.
    Theme
    Citation indexing