Search (12 results, page 1 of 1)

  • author_ss:"Bornmann, L."
  1. Bornmann, L.; Daniel, H.-D.: What do we know about the h index? (2007) 0.05
    0.04811207 = product of:
      0.24056034 = sum of:
        0.24056034 = weight(_text_:index in 477) [ClassicSimilarity], result of:
          0.24056034 = score(doc=477,freq=20.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            1.068713 = fieldWeight in 477, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=477)
      0.2 = coord(1/5)
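    The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output for the query term "index" in document 477: queryWeight = idf * queryNorm, fieldWeight = sqrt(termFreq) * idf * fieldNorm, and the final score multiplies the summed term weights by the coordination factor. A minimal Python sketch, using only the values shown above, reproduces the 0.048 contribution; the helper name is ours, not part of Lucene's API.

      import math

      def classic_similarity_term_score(freq, idf, query_norm, field_norm, coord):
          """Recompute one term's contribution as shown in a Lucene ClassicSimilarity explain tree."""
          tf = math.sqrt(freq)                  # 4.472136 for freq=20.0
          query_weight = idf * query_norm       # 0.2250935
          field_weight = tf * idf * field_norm  # 1.068713
          return coord * query_weight * field_weight

      # Values copied from the explain output for result 1 (doc 477):
      print(classic_similarity_term_score(
          freq=20.0, idf=4.369764, query_norm=0.051511593,
          field_norm=0.0546875, coord=0.2))     # ~0.0481, i.e. the 0.05 shown next to the title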
    
    Abstract
    Jorge Hirsch recently proposed the h index to quantify the research output of individual scientists. The new index has attracted a lot of attention in the scientific community. The claim that the h index provides, in a single number, a good representation of the scientific lifetime achievement of a scientist, as well as the (supposed) simple calculation of the h index using common literature databases, leads to the danger of improper use of the index. We describe the advantages and disadvantages of the h index and summarize the studies on the convergent validity of this index. We also introduce corrections and complements as well as single-number alternatives to the h index.
    Object
    H-Index
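    Result 1 is about the h index itself. As a reminder of the definition the abstract relies on, here is a minimal sketch (the largest h such that h papers have at least h citations each); the function name and the citation counts are illustrative only, not data from the paper.

      def h_index(citation_counts):
          """Largest h such that h papers have at least h citations each (Hirsch's definition)."""
          counts = sorted(citation_counts, reverse=True)
          h = 0
          for rank, cites in enumerate(counts, start=1):
              if cites >= rank:
                  h = rank
              else:
                  break
          return h

      print(h_index([25, 17, 12, 9, 8, 6, 3, 1]))  # -> 6: six papers with at least 6 citations each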
  2. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.04
    0.04283164 = product of:
      0.1070791 = sum of:
        0.06520444 = weight(_text_:index in 4681) [ClassicSimilarity], result of:
          0.06520444 = score(doc=4681,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.28967714 = fieldWeight in 4681, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=4681)
        0.04187466 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
          0.04187466 = score(doc=4681,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.23214069 = fieldWeight in 4681, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=4681)
      0.4 = coord(2/5)
    
    Abstract
    A recent publication in Nature reports that public R&D funding is only weakly correlated with the citation impact of a nation's articles as measured by the field-weighted citation index (FWCI; defined by Scopus). On the basis of the supplementary data, we up-scaled the design using Web of Science data for the decade 2003-2013 and OECD funding data for the corresponding decade assuming a 2-year delay (2001-2011). Using negative binomial regression analysis, we found very small coefficients, but the effects of international collaboration are positive and statistically significant, whereas the effects of government funding are negative, an order of magnitude smaller, and statistically nonsignificant (in two of three analyses). In other words, international collaboration improves the impact of research articles, whereas more government funding tends to have a small adverse effect when comparing OECD countries.
    Date
    8. 1.2019 18:22:45
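    Result 2 fits negative binomial regressions of citation impact on government funding and international collaboration. A hedged sketch of that model family, on simulated toy data rather than the paper's OECD/Web of Science data, using statsmodels:

      import numpy as np
      import statsmodels.api as sm

      # Toy data, one row per country-year; NOT the data used in the paper.
      rng = np.random.default_rng(0)
      n = 200
      funding = rng.normal(size=n)        # stand-in for standardized government R&D funding
      collaboration = rng.normal(size=n)  # stand-in for share of internationally co-authored papers
      # Counts simulated with a tiny negative funding effect and a positive collaboration effect.
      citations = rng.poisson(lam=np.exp(0.5 - 0.02 * funding + 0.3 * collaboration))

      X = sm.add_constant(np.column_stack([funding, collaboration]))
      model = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()
      print(model.summary())  # collaboration should come out positive; the tiny funding effect may not be significant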
  3. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Are there better indices for evaluation purposes than the h index? : a comparison of nine different variants of the h index using data from biomedicine (2008) 0.03
    0.026619604 = product of:
      0.13309802 = sum of:
        0.13309802 = weight(_text_:index in 1608) [ClassicSimilarity], result of:
          0.13309802 = score(doc=1608,freq=12.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.591301 = fieldWeight in 1608, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1608)
      0.2 = coord(1/5)
    
    Abstract
    In this study, we examined empirical results on the h index and its most important variants in order to determine whether the variants developed are associated with an incremental contribution for evaluation purposes. The results of a factor analysis using bibliographic data on postdoctoral researchers in biomedicine indicate that regarding the h index and its variants, we are dealing with two types of indices that load on one factor each. One type describes the most productive core of a scientist's output and gives the number of papers in that core. The other type of indices describes the impact of the papers in the core. Because an index for evaluative purposes is a useful yardstick for comparison among scientists if the index corresponds strongly with peer assessments, we calculated a logistic regression analysis with the two factors resulting from the factor analysis as independent variables and peer assessment of the postdoctoral researchers as the dependent variable. The results of the regression analysis show that peer assessments can be predicted better using the factor impact of the productive core than using the factor quantity of the productive core.
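    Result 3 runs a factor analysis over nine variants of the h index and then a logistic regression that predicts peer assessments from the two resulting factors. A sketch of that two-step design on invented data; scikit-learn stands in for whatever software the authors actually used.

      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.linear_model import LogisticRegression

      # Invented matrix: rows = researchers, columns = nine h-index variants (toy numbers only).
      rng = np.random.default_rng(1)
      indices = rng.normal(size=(100, 9))
      peer_ok = (indices[:, 0] + rng.normal(size=100) > 0).astype(int)  # toy binary peer judgement

      factors = FactorAnalysis(n_components=2).fit_transform(indices)   # e.g. "quantity" and "impact" of the core
      clf = LogisticRegression().fit(factors, peer_ok)
      print(clf.coef_)  # which factor predicts the (toy) peer assessment better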
  4. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Do we need the h index and its variants in addition to standard bibliometric measures? (2009) 0.02
    0.024300262 = product of:
      0.12150131 = sum of:
        0.12150131 = weight(_text_:index in 2861) [ClassicSimilarity], result of:
          0.12150131 = score(doc=2861,freq=10.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.5397815 = fieldWeight in 2861, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
      0.2 = coord(1/5)
    
    Abstract
    In this study, we investigate whether there is a need for the h index and its variants in addition to standard bibliometric measures (SBMs). Results from our recent study (L. Bornmann, R. Mutz, & H.-D. Daniel, 2008) have indicated that there are two types of indices: One type of indices (e.g., h index) describes the most productive core of a scientist's output and informs about the number of papers in the core. The other type of indices (e.g., a index) depicts the impact of the papers in the core. In evaluative bibliometric studies, the two dimensions quantity and quality of output are usually assessed using the SBMs number of publications (for the quantity dimension) and total citation counts (for the impact dimension). We additionally included the SBMs into the factor analysis. The results of the newly calculated analysis indicate that there is a high intercorrelation between number of publications and the indices that load substantially on the factor Quantity of the Productive Core as well as between total citation counts and the indices that load substantially on the factor Impact of the Productive Core. The high-loading indices and SBMs within one performance dimension could be called redundant in empirical application, as high intercorrelations between different indicators are a sign for measuring something similar (or the same). Based on our findings, we propose the use of any pair of indicators (one relating to the number of papers in a researcher's productive core and one relating to the impact of these core papers) as a meaningful approach for comparing scientists.
    Object
    h-Index
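    Result 4 treats indicators that intercorrelate highly within one performance dimension as redundant. A small sketch of such a redundancy check; the indicator values and column names are invented for illustration.

      import pandas as pd

      # Invented per-researcher indicator values, not data from the study.
      df = pd.DataFrame({
          "n_publications":  [12, 30, 7, 21, 44],
          "h_index":         [5, 11, 3, 8, 15],
          "total_citations": [90, 410, 25, 260, 900],
          "a_index":         [18.0, 37.3, 8.3, 32.5, 60.0],
      })
      print(df.corr(method="spearman"))  # high coefficients within a dimension suggest redundant indicators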
  5. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.02
    0.016749864 = product of:
      0.08374932 = sum of:
        0.08374932 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
          0.08374932 = score(doc=1239,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.46428138 = fieldWeight in 1239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=1239)
      0.2 = coord(1/5)
    
    Date
    18. 3.2014 19:13:22
  6. Leydesdorff, L.; Zhou, P.; Bornmann, L.: How can journal impact factors be normalized across fields of science? : An assessment in terms of percentile ranks and fractional counts (2013) 0.02
    0.015368836 = product of:
      0.07684418 = sum of:
        0.07684418 = weight(_text_:index in 532) [ClassicSimilarity], result of:
          0.07684418 = score(doc=532,freq=4.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.3413878 = fieldWeight in 532, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=532)
      0.2 = coord(1/5)
    
    Abstract
    Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (a) fractional counting on the impact factor (IF) and (b) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the 2-year IF (IF2), we consider the 5-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/NSF (National Science Foundation) provides statistically significant between-field effects. Using fractional counting the between-field variance is reduced by 91.7% in the case of IF5, and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to significant improvement in the normalization across fields.
    Aid
    Science Citation Index
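    Result 6 contrasts integer counting with fractional counting, in which each citation is weighted in proportion to the length of the citing paper's reference list (1/N of references, as spelled out in result 8 below). A minimal sketch with made-up reference-list lengths:

      # Each citing paper contributes 1/N_references instead of a full count (fractional counting).
      citing_reference_list_lengths = [10, 25, 40, 8, 50]  # invented for illustration

      integer_count = len(citing_reference_list_lengths)               # 5 citations, counted as integers
      fractional_count = sum(1.0 / n for n in citing_reference_list_lengths)
      print(integer_count, round(fractional_count, 3))                 # 5 vs. 0.31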
  7. Bornmann, L.; Haunschild, R.: An empirical look at the nature index (2017) 0.02
    0.015368836 = product of:
      0.07684418 = sum of:
        0.07684418 = weight(_text_:index in 3432) [ClassicSimilarity], result of:
          0.07684418 = score(doc=3432,freq=4.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.3413878 = fieldWeight in 3432, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3432)
      0.2 = coord(1/5)
    
    Abstract
    In November 2014, the Nature Index (NI) was introduced (see http://www.natureindex.com) by the Nature Publishing Group (NPG). The NI comprises the primary research articles published in the past 12 months in a selection of reputable journals. Starting from two short comments on the NI (Haunschild & Bornmann, 2015a, 2015b), we undertake an empirical analysis of the NI using comprehensive country data. We investigate whether the huge efforts of computing the NI are justified and whether the size-dependent NI indicators should be complemented by size-independent variants. The analysis uses data from the Max Planck Digital Library in-house database (which is based on Web of Science data) and from the NPG. In the first step of the analysis, we correlate the NI with other metrics that are simpler to generate than the NI. The resulting large correlation coefficients point out that the NI produces similar results as simpler solutions. In the second step of the analysis, relative and size-independent variants of the NI are generated that should be additionally presented by the NPG. The size-dependent NI indicators favor large countries (or institutions) and the top-performing small countries (or institutions) do not come into the picture.
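    Result 7 argues that the size-dependent Nature Index counts should be complemented by size-independent variants. A minimal sketch of such a relative variant (NI articles divided by a country's total article output); all counts are invented.

      # Invented counts: NI articles vs. all articles per country (illustration only).
      countries = {
          "Country A": {"ni_articles": 5000, "all_articles": 400000},
          "Country B": {"ni_articles": 120,  "all_articles": 6000},
      }
      for name, c in countries.items():
          relative_ni = c["ni_articles"] / c["all_articles"]  # size-independent variant
          print(name, c["ni_articles"], round(relative_ni, 4))
      # Country A wins on the absolute count, Country B on the relative variant (0.0125 vs. 0.02).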
  8. Leydesdorff, L.; Radicchi, F.; Bornmann, L.; Castellano, C.; Nooy, W. de: Field-normalized impact factors (IFs) : a comparison of rescaling and fractionally counted IFs (2013) 0.01
    0.013040888 = product of:
      0.06520444 = sum of:
        0.06520444 = weight(_text_:index in 1108) [ClassicSimilarity], result of:
          0.06520444 = score(doc=1108,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.28967714 = fieldWeight in 1108, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=1108)
      0.2 = coord(1/5)
    
    Abstract
    Two methods for comparing impact factors and citation rates across fields of science are tested against each other using citations to the 3,705 journals in the Science Citation Index 2010 (CD-ROM version of SCI) and the 13 field categories used for the Science and Engineering Indicators of the U.S. National Science Board. We compare (a) normalization by counting citations in proportion to the length of the reference list (1/N of references) with (b) rescaling by dividing citation scores by the arithmetic mean of the citation rate of the cluster. Rescaling is analytical and therefore independent of the quality of the attribution to the sets, whereas fractional counting provides an empirical strategy for normalization among sets (by evaluating the between-group variance). By the fairness test of Radicchi and Castellano, rescaling outperforms fractional counting of citations for reasons that we discuss.
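    Result 8 rescales citation scores by dividing them by the arithmetic mean citation rate of the field cluster. A minimal sketch of that rescaling step with invented citation rates:

      import statistics

      # Invented citation rates for journals assigned to one field cluster.
      cluster_citation_rates = [1.2, 3.4, 0.8, 2.6, 5.0]

      cluster_mean = statistics.mean(cluster_citation_rates)               # 2.6
      rescaled = [rate / cluster_mean for rate in cluster_citation_rates]  # field-normalized, mean = 1.0
      print([round(r, 2) for r in rescaled])                               # [0.46, 1.31, 0.31, 1.0, 1.92]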
  9. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    0.011166576 = product of:
      0.05583288 = sum of:
        0.05583288 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
          0.05583288 = score(doc=1431,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.30952093 = fieldWeight in 1431, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=1431)
      0.2 = coord(1/5)
    
    Date
    22. 8.2014 17:05:18
  10. Marx, W.; Bornmann, L.; Cardona, M.: Reference standards and reference multipliers for the comparison of the citation impact of papers published in different time periods (2010) 0.01
    0.010867408 = product of:
      0.054337036 = sum of:
        0.054337036 = weight(_text_:index in 3998) [ClassicSimilarity], result of:
          0.054337036 = score(doc=3998,freq=2.0), product of:
            0.2250935 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.051511593 = queryNorm
            0.24139762 = fieldWeight in 3998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3998)
      0.2 = coord(1/5)
    
    Abstract
    In this study, reference standards and reference multipliers are suggested as a means to compare the citation impact of earlier research publications in physics (from the period of "Little Science" in the early 20th century) with that of contemporary papers (from the period of "Big Science," beginning around 1960). For the development of time-specific reference standards, the authors determined (a) the mean citation rates of papers in selected physics journals as well as (b) the mean citation rates of all papers in physics published in 1900 (Little Science) and in 2000 (Big Science); this was accomplished by relying on the processes of field-specific standardization in bibliometry. For the sake of developing reference multipliers with which the citation impact of earlier papers can be adjusted to the citation impact of contemporary papers, they combined the reference standards calculated for 1900 and 2000 into their ratio. The use of reference multipliers is demonstrated by means of two examples involving the time adjusted h index values for Max Planck and Albert Einstein.
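    Result 10 combines the reference standards calculated for 1900 and 2000 into a ratio, a reference multiplier, with which the citation counts of early papers can be adjusted to the contemporary level. A minimal sketch with invented mean citation rates:

      # Invented reference standards: mean citation rates of physics papers in 1900 and 2000.
      mean_citation_rate_1900 = 1.5
      mean_citation_rate_2000 = 18.0

      reference_multiplier = mean_citation_rate_2000 / mean_citation_rate_1900  # 12.0

      citations_of_1900_papers = [40, 22, 9]                                    # invented counts
      adjusted = [c * reference_multiplier for c in citations_of_1900_papers]
      print(reference_multiplier, adjusted)                                     # 12.0 [480.0, 264.0, 108.0]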
  11. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.01
    0.008374932 = product of:
      0.04187466 = sum of:
        0.04187466 = weight(_text_:22 in 656) [ClassicSimilarity], result of:
          0.04187466 = score(doc=656,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.23214069 = fieldWeight in 656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=656)
      0.2 = coord(1/5)
    
    Date
    22. 3.2013 19:44:17
  12. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.01
    0.00697911 = product of:
      0.03489555 = sum of:
        0.03489555 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
          0.03489555 = score(doc=4186,freq=2.0), product of:
            0.18038483 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051511593 = queryNorm
            0.19345059 = fieldWeight in 4186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4186)
      0.2 = coord(1/5)
    
    Date
    22. 1.2011 12:51:07