Search (3 results, page 1 of 1)

  • Filter: year_i:[2000 TO 2010}
  • Filter: author_ss:"Bornmann, L."
  1. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Are there better indices for evaluation purposes than the h index? : a comparison of nine different variants of the h index using data from biomedicine (2008) 0.00
    0.0017956087 = product of:
      0.02513852 = sum of:
        0.02513852 = weight(_text_:bibliographic in 1608) [ClassicSimilarity], result of:
          0.02513852 = score(doc=1608,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.21506234 = fieldWeight in 1608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1608)
      0.071428575 = coord(1/14)
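The explain tree above can be recomputed step by step. A minimal Python sketch using the constants from the tree (the idf value itself follows Lucene ClassicSimilarity's idf = 1 + ln(maxDocs / (docFreq + 1)); the function name here is our own):

```python
import math

# Constants taken from the explain tree for result 1 (doc 1608, term "bibliographic").
FREQ = 2.0            # termFreq in the field
IDF = 3.893044        # idf(docFreq=2449, maxDocs=44218) = 1 + ln(44218 / 2450)
QUERY_NORM = 0.03002521
FIELD_NORM = 0.0390625
COORD = 1 / 14        # coord(1/14): 1 of 14 query terms matched this document

def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
    """Recombine Lucene ClassicSimilarity's TF-IDF factors into the final score."""
    tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
    query_weight = idf * query_norm       # 0.11688946 in the tree
    field_weight = tf * idf * field_norm  # 0.21506234 in the tree
    return query_weight * field_weight * coord

score = classic_similarity_score(FREQ, IDF, QUERY_NORM, FIELD_NORM, COORD)
print(f"{score:.10f}")  # ≈ 0.0017956087, matching the top line of the tree
```

The same decomposition applies to the "subject" trees of results 2 and 3; only freq, idf, and fieldNorm change.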
    
    Abstract
    In this study, we examined empirical results on the h index and its most important variants in order to determine whether the variants developed are associated with an incremental contribution for evaluation purposes. The results of a factor analysis using bibliographic data on postdoctoral researchers in biomedicine indicate that regarding the h index and its variants, we are dealing with two types of indices that load on one factor each. One type describes the most productive core of a scientist's output and gives the number of papers in that core. The other type of indices describes the impact of the papers in the core. Because an index for evaluative purposes is a useful yardstick for comparison among scientists if the index corresponds strongly with peer assessments, we calculated a logistic regression analysis with the two factors resulting from the factor analysis as independent variables and peer assessment of the postdoctoral researchers as the dependent variable. The results of the regression analysis show that peer assessments can be predicted better using the factor impact of the productive core than using the factor quantity of the productive core.
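The regression setup the abstract describes can be sketched with a toy logistic model: the two factor scores (quantity vs. impact of the productive core) predict the probability of a positive peer assessment. The coefficients below are invented purely for illustration; the paper's finding is only that the impact factor predicts peer assessment better than the quantity factor.

```python
import math

def peer_assessment_probability(quantity, impact,
                                b0=-0.5, b_quantity=0.3, b_impact=1.2):
    """Toy logistic model: P(positive assessment) = sigmoid(b0 + b_q*q + b_i*i).
    Coefficients are hypothetical; the larger b_impact mirrors the paper's
    finding that the impact factor carries more predictive weight."""
    z = b0 + b_quantity * quantity + b_impact * impact
    return 1.0 / (1.0 + math.exp(-z))

p_base          = peer_assessment_probability(0.0, 0.0)
p_more_quantity = peer_assessment_probability(1.0, 0.0)
p_more_impact   = peer_assessment_probability(0.0, 1.0)
# With these coefficients, a one-unit increase in the impact factor moves the
# predicted probability more than the same increase in the quantity factor.
```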
  2. Bornmann, L.; Daniel, H.-D.: Universality of citation distributions : a validation of Radicchi et al.'s relative indicator cf = c/c0 at the micro level using data from chemistry (2009) 0.00
    0.0015155592 = product of:
      0.021217827 = sum of:
        0.021217827 = weight(_text_:subject in 2954) [ClassicSimilarity], result of:
          0.021217827 = score(doc=2954,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.19758089 = fieldWeight in 2954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2954)
      0.071428575 = coord(1/14)
    
    Abstract
    In a recently published PNAS paper, Radicchi, Fortunato, and Castellano (2008) propose the relative indicator cf as an unbiased indicator for citation performance across disciplines (fields, subject areas). To calculate cf, the citation rate for a single paper is divided by the average number of citations for all papers in the discipline in which the single paper has been categorized. cf values are said to lead to a universality of discipline-specific citation distributions. Using a comprehensive dataset of an evaluation study on Angewandte Chemie International Edition (AC-IE), we tested the advantage of using this indicator in practical application at the micro level, as compared with (1) simple citation rates, and (2) z-scores, which have been used in psychological testing for many years for normalization of test scores. To calculate z-scores, the mean number of citations of the papers within a discipline is subtracted from the citation rate of a single paper, and the difference is then divided by the citations' standard deviation for a discipline. Our results indicate that z-scores are better suited than cf values to produce universality of discipline-specific citation distributions.
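The two normalizations the abstract defines can be written out directly: cf divides a paper's citation count by the discipline mean, while the z-score subtracts the mean and divides by the standard deviation. A sketch with invented citation counts (the abstract reports no raw data):

```python
import statistics

# Toy citation counts for papers in one discipline (hypothetical numbers).
citations = [3, 7, 10, 15, 40]
paper = 40  # citation count of the single paper being normalized

c0 = statistics.mean(citations)  # discipline average, c0 = 15.0
cf = paper / c0                  # Radicchi et al.'s relative indicator cf = c/c0
z = (paper - c0) / statistics.stdev(citations)  # z-score normalization
# cf rescales only by the mean; z additionally accounts for the spread of the
# discipline's citation distribution, which is the property the study tests.
```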
  3. Bornmann, L.; Daniel, H.-D.: Selecting manuscripts for a high-impact journal through peer review : a citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere (2008) 0.00
    0.0012124473 = product of:
      0.016974261 = sum of:
        0.016974261 = weight(_text_:subject in 2381) [ClassicSimilarity], result of:
          0.016974261 = score(doc=2381,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.15806471 = fieldWeight in 2381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=2381)
      0.071428575 = coord(1/14)
    
    Abstract
All journals that use peer review have to deal with the following question: Does the peer review system fulfill its declared objective to select the best scientific work? We investigated the journal peer-review process at Angewandte Chemie International Edition (AC-IE), one of the prime chemistry journals worldwide, and conducted a citation analysis for Communications that were accepted by the journal (n = 878) or rejected but published elsewhere (n = 959). The results of negative binomial regression models show that holding all other model variables constant, being accepted by AC-IE increases the expected number of citations by up to 50%. A comparison of average citation counts (with 95% confidence intervals) of accepted and rejected (but published elsewhere) Communications with international scientific reference standards was undertaken. As reference standards, (a) mean citation counts for the journal set provided by Thomson Reuters corresponding to the field chemistry and (b) specific reference standards that refer to the subject areas of Chemical Abstracts were used. When compared to reference standards, the mean impact on chemical research is for the most part far above average not only for accepted Communications but also for rejected (but published elsewhere) Communications. However, average and below-average scientific impact is to be expected significantly less frequently for accepted Communications than for rejected Communications. All in all, the results of this study confirm that peer review at AC-IE is able to select the best scientific work with the highest impact on chemical research.
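In a count model with a log link, such as the negative binomial regression described above, a coefficient for the "accepted by AC-IE" indicator translates into a multiplicative rate ratio exp(beta) on expected citations. The coefficient below is back-derived from the reported "up to 50%" effect, purely as an illustration of that relationship:

```python
import math

# Hypothetical coefficient chosen so that exp(beta) = 1.5, i.e. the reported
# upper bound of a 50% increase in expected citations for accepted papers.
beta_accepted = math.log(1.5)
rate_ratio = math.exp(beta_accepted)  # multiplicative effect on the expected count

expected_rejected = 10.0  # invented baseline: expected citations if rejected
expected_accepted = expected_rejected * rate_ratio
# With these numbers, acceptance raises the expectation from 10 to 15 citations,
# holding all other model variables constant.
```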