Search (11 results, page 1 of 1)

  • Filter: author_ss:"Bornmann, L."
  1. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.04
    0.037791513 = product of:
      0.075583026 = sum of:
        0.075583026 = sum of:
          0.038589776 = weight(_text_:subject in 656) [ClassicSimilarity], result of:
            0.038589776 = score(doc=656,freq=2.0), product of:
              0.16275941 = queryWeight, product of:
                3.576596 = idf(docFreq=3361, maxDocs=44218)
                0.04550679 = queryNorm
              0.23709705 = fieldWeight in 656, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.576596 = idf(docFreq=3361, maxDocs=44218)
                0.046875 = fieldNorm(doc=656)
          0.03699325 = weight(_text_:22 in 656) [ClassicSimilarity], result of:
            0.03699325 = score(doc=656,freq=2.0), product of:
              0.15935703 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04550679 = queryNorm
              0.23214069 = fieldWeight in 656, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=656)
      0.5 = coord(1/2)
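
    A note on the scoring tree above: it is standard Lucene "explain" output for ClassicSimilarity (TF-IDF). A minimal sketch of the arithmetic, reproducing the figures for doc 656; function and variable names are mine, not part of Lucene's API:

    import math

    def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # One term's contribution under Lucene ClassicSimilarity
        tf = math.sqrt(freq)                             # tf(freq=2.0) = 1.4142135
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(3361, 44218) = 3.576596
        query_weight = idf * query_norm                  # 0.16275941 = queryWeight
        field_weight = tf * idf * field_norm             # 0.23709705 = fieldWeight
        return query_weight * field_weight               # 0.038589776

    subject_term = classic_term_score(2.0, 3361, 44218, 0.04550679, 0.046875)
    term_22 = classic_term_score(2.0, 3622, 44218, 0.04550679, 0.046875)
    # coord(1/2) scales the sum by matched clauses / total clauses
    print((subject_term + term_22) * 0.5)                # 0.037791513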
    
    Abstract
    According to current research in bibliometrics, percentiles (or percentile rank classes) are the most suitable method for normalizing the citation counts of individual publications in terms of the subject area, the document type, and the publication year. Up to now, bibliometric research has concerned itself primarily with the calculation of percentiles. This study suggests how percentiles (and percentile rank classes) can be analyzed meaningfully for an evaluation study. Publication sets from four universities are compared with each other to provide sample data. These suggestions take into account on the one hand the distribution of percentiles over the publications in the sets (universities here) and on the other hand concentrate on the range of publications with the highest citation impact; that is, the range that is usually of most interest in the evaluation of scientific performance.
    Date
    22. 3.2013 19:44:17
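
    The percentile normalization described in the abstract above can be sketched in a few lines. This is an illustrative toy, not the study's code; the 90th-percentile cutoff for "top-cited" and the strict-inequality percentile definition are common choices, not prescriptions:

    def citation_percentile(c, reference_set):
        # Percentile rank of citation count c within its reference set
        # (papers of the same subject area, document type, publication year)
        below = sum(1 for x in reference_set if x < c)
        return 100.0 * below / len(reference_set)

    def share_top_cited(publication_set, reference_set, cutoff=90.0):
        # Share of a unit's (e.g., a university's) papers at or above the cutoff
        ranks = [citation_percentile(c, reference_set) for c in publication_set]
        return sum(1 for r in ranks if r >= cutoff) / len(ranks)

    reference = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]   # hypothetical citation counts
    print(share_top_cited([2, 21, 34], reference))  # 0.333...: one paper reaches the top decile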
  2. Bornmann, L.; Moya Anegón, F. de; Mutz, R.: Do universities or research institutions with a specific subject profile have an advantage or a disadvantage in institutional rankings? (2013) 0.02
    0.019294888 = product of:
      0.038589776 = sum of:
        0.038589776 = product of:
          0.07717955 = sum of:
            0.07717955 = weight(_text_:subject in 1109) [ClassicSimilarity], result of:
              0.07717955 = score(doc=1109,freq=8.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.4741941 = fieldWeight in 1109, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1109)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Using data compiled for the SCImago Institutions Ranking, we look at whether the subject-area type to which an institution (university or research-focused institution) belongs, in terms of the fields researched, influences its ranking position. We used latent class analysis to categorize institutions based on their publications in certain subject areas. Even though this categorization does not relate directly to scientific performance, our results show that it exerts an important influence on the outcome of a performance measurement: Certain subject area types of institutions have an advantage in the ranking positions when compared with others. This advantage manifests itself not only when performance is measured with an indicator that is not field-normalized but also for indicators that are field-normalized.
  3. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.02
    0.018496625 = product of:
      0.03699325 = sum of:
        0.03699325 = product of:
          0.0739865 = sum of:
            0.0739865 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.0739865 = score(doc=1239,freq=2.0), product of:
                0.15935703 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04550679 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 3.2014 19:13:22
  4. Leydesdorff, L.; Bornmann, L.: The operationalization of "fields" as WoS subject categories (WCs) in evaluative bibliometrics : the cases of "library and information science" and "science & technology studies" (2016) 0.01
    0.013643546 = product of:
      0.027287092 = sum of:
        0.027287092 = product of:
          0.054574184 = sum of:
            0.054574184 = weight(_text_:subject in 2779) [ClassicSimilarity], result of:
              0.054574184 = score(doc=2779,freq=4.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.33530587 = fieldWeight in 2779, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2779)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Normalization of citation scores using reference sets based on Web of Science subject categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to carry bibliometric normalization in evaluation practices because of "indexer effects." Can the compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
  5. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    0.012331083 = product of:
      0.024662167 = sum of:
        0.024662167 = product of:
          0.049324334 = sum of:
            0.049324334 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.049324334 = score(doc=1431,freq=2.0), product of:
                0.15935703 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04550679 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2014 17:05:18
  6. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.01
    0.009248313 = product of:
      0.018496625 = sum of:
        0.018496625 = product of:
          0.03699325 = sum of:
            0.03699325 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
              0.03699325 = score(doc=4681,freq=2.0), product of:
                0.15935703 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04550679 = queryNorm
                0.23214069 = fieldWeight in 4681, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4681)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8. 1.2019 18:22:45
  7. Bornmann, L.; Daniel, H.-D.: Universality of citation distributions : a validation of Radicchi et al.'s relative indicator cf = c/c0 at the micro level using data from chemistry (2009) 0.01
    0.008039537 = product of:
      0.016079074 = sum of:
        0.016079074 = product of:
          0.032158148 = sum of:
            0.032158148 = weight(_text_:subject in 2954) [ClassicSimilarity], result of:
              0.032158148 = score(doc=2954,freq=2.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.19758089 = fieldWeight in 2954, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2954)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In a recently published PNAS paper, Radicchi, Fortunato, and Castellano (2008) propose the relative indicator cf as an unbiased indicator for citation performance across disciplines (fields, subject areas). To calculate cf, the citation rate for a single paper is divided by the average number of citations for all papers in the discipline in which the single paper has been categorized. cf values are said to lead to a universality of discipline-specific citation distributions. Using a comprehensive dataset of an evaluation study on Angewandte Chemie International Edition (AC-IE), we tested the advantage of using this indicator in practical application at the micro level, as compared with (1) simple citation rates, and (2) z-scores, which have been used in psychological testing for many years for normalization of test scores. To calculate z-scores, the mean number of citations of the papers within a discipline is subtracted from the citation rate of a single paper, and the difference is then divided by the citations' standard deviation for a discipline. Our results indicate that z-scores are better suited than cf values to produce universality of discipline-specific citation distributions.
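
    Both indicators compared in this abstract are one-liners, which makes the comparison easy to replicate. A sketch with hypothetical citation counts (names mine):

    from statistics import mean, stdev

    def cf_indicator(c, field_citations):
        # Radicchi et al.: cf = c / c0, with c0 the field's mean citation rate
        return c / mean(field_citations)

    def z_score(c, field_citations):
        # z = (c - field mean) / field standard deviation
        return (c - mean(field_citations)) / stdev(field_citations)

    field = [0, 2, 4, 10, 24]       # hypothetical citation counts in one discipline
    print(cf_indicator(10, field))  # 10 / 8 = 1.25
    print(z_score(10, field))       # (10 - 8) / 9.70 ≈ 0.21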
  8. Leydesdorff, L.; Bornmann, L.: Integrated impact indicators compared with impact factors : an alternative research design with policy implications (2011) 0.01
    0.008039537 = product of:
      0.016079074 = sum of:
        0.016079074 = product of:
          0.032158148 = sum of:
            0.032158148 = weight(_text_:subject in 4919) [ClassicSimilarity], result of:
              0.032158148 = score(doc=4919,freq=2.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.19758089 = fieldWeight in 4919, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4919)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In bibliometrics, the association of "impact" with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories ("Information Science & Library Science" and "Multidisciplinary Sciences"). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
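
    The construction of I3 can be made concrete with a toy. Note this is my simplification: the paper works with percentile rank classes (top-1%, top-10%, etc.) and weighting schemes, whereas the sketch below simply sums unweighted paper-level percentile ranks and expresses a document set's total as a percentage of the reference set's total:

    def percentile_rank(c, reference):
        # Paper-level percentile rank of citation count c in the reference set
        return 100.0 * sum(1 for x in reference if x < c) / len(reference)

    def i3(document_set, reference):
        # Toy I3: integrate (sum) percentile ranks over a document set,
        # reported as a percentage of the total impact of the reference set
        total = sum(percentile_rank(x, reference) for x in reference)
        return 100.0 * sum(percentile_rank(c, reference) for c in document_set) / total

    reference = [0, 1, 2, 3, 5, 8, 13, 21, 34, 55]  # pooled citation counts
    print(i3([21, 34, 55], reference))              # ≈ 53.3: the three most-cited papers' share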
  9. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016) 0.01
    0.008039537 = product of:
      0.016079074 = sum of:
        0.016079074 = product of:
          0.032158148 = sum of:
            0.032158148 = weight(_text_:subject in 3231) [ClassicSimilarity], result of:
              0.032158148 = score(doc=3231,freq=2.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.19758089 = fieldWeight in 3231, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3231)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
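
    The whole-number versus fractional counting distinction that flips the Harvard/Leiden ordering works as follows (a minimal sketch; the paper lists and institution names are placeholders):

    from collections import defaultdict

    def address_credit(papers, fractional=True):
        # papers: one list of contributing institutions per paper. Whole counting
        # credits each institution with 1 per paper; fractional counting splits
        # each paper's single unit of credit across its institutions.
        credit = defaultdict(float)
        for institutions in papers:
            unique = set(institutions)
            share = 1.0 / len(unique) if fractional else 1.0
            for inst in unique:
                credit[inst] += share
        return dict(credit)

    papers = [["Leiden"], ["Harvard", "MIT", "Leiden"], ["Harvard", "Stanford"]]
    print(address_credit(papers, fractional=False))  # Harvard 2.0, Leiden 2.0, ...
    print(address_credit(papers, fractional=True))   # Leiden 1.33 now leads Harvard 0.83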
  10. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.01
    0.0077069276 = product of:
      0.015413855 = sum of:
        0.015413855 = product of:
          0.03082771 = sum of:
            0.03082771 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
              0.03082771 = score(doc=4186,freq=2.0), product of:
                0.15935703 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04550679 = queryNorm
                0.19345059 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4186)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2011 12:51:07
  11. Bornmann, L.; Daniel, H.-D.: Selecting manuscripts for a high-impact journal through peer review : a citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere (2008) 0.01
    0.0064316294 = product of:
      0.012863259 = sum of:
        0.012863259 = product of:
          0.025726518 = sum of:
            0.025726518 = weight(_text_:subject in 2381) [ClassicSimilarity], result of:
              0.025726518 = score(doc=2381,freq=2.0), product of:
                0.16275941 = queryWeight, product of:
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.04550679 = queryNorm
                0.15806471 = fieldWeight in 2381, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.576596 = idf(docFreq=3361, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2381)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    All journals that use peer review have to deal with the following question: Does the peer review system fulfill its declared objective to select the best scientific work? We investigated the journal peer-review process at Angewandte Chemie International Edition (AC-IE), one of the prime chemistry journals worldwide, and conducted a citation analysis for Communications that were accepted by the journal (n = 878) or rejected but published elsewhere (n = 959). The results of negative binomial regression models show that holding all other model variables constant, being accepted by AC-IE increases the expected number of citations by up to 50%. A comparison of average citation counts (with 95% confidence intervals) of accepted and rejected (but published elsewhere) Communications with international scientific reference standards was undertaken. As reference standards, (a) mean citation counts for the journal set provided by Thomson Reuters corresponding to the field chemistry and (b) specific reference standards that refer to the subject areas of Chemical Abstracts were used. When compared to reference standards, the mean impact on chemical research is for the most part far above average not only for accepted Communications but also for rejected (but published elsewhere) Communications. However, average and below-average scientific impact is to be expected significantly less frequently for accepted Communications than for rejected Communications. All in all, the results of this study confirm that peer review at AC-IE is able to select the best scientific work with the highest impact on chemical research.
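
    The negative binomial regression behind the "up to 50%" figure can be approximated with standard tooling. A hedged sketch using statsmodels; the synthetic data and variable names are mine, and the paper's actual covariates (the "other model variables" held constant) are not reproduced:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Synthetic stand-in for accepted vs. rejected-but-published Communications
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "accepted": np.repeat([1, 0], 200),
        "citations": np.concatenate([
            rng.negative_binomial(5, 0.25, 200),  # accepted: mean ~15 citations
            rng.negative_binomial(5, 0.33, 200),  # rejected: mean ~10 citations
        ]),
    })

    model = smf.glm("citations ~ accepted", data=df,
                    family=sm.families.NegativeBinomial()).fit()
    # exp(coef) is the multiplicative effect on expected citations;
    # a value near 1.5 would correspond to the reported increase of up to 50%
    print(np.exp(model.params["accepted"]))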