Search (13 results, page 1 of 1)

  • Filter: author_ss:"Bornmann, L."
  1. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.00
    Date
    18. 3.2014 19:13:22
  2. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.00
    Date
    22. 8.2014 17:05:18
  3. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.00
    Date
    22. 3.2013 19:44:17
  4. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.00
    Date
    8. 1.2019 18:22:45
  5. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.00
    Date
    22. 1.2011 12:51:07
  6. Bornmann, L.: Interrater reliability and convergent validity of F1000Prime peer review (2015) 0.00
    Abstract
    Peer review is the backbone of modern science. F1000Prime is a postpublication peer review system of the biomedical literature (papers from medical and biological journals). This study is concerned with the interrater reliability and convergent validity of the peer recommendations formulated in the F1000Prime peer review system. The study is based on about 100,000 papers with recommendations from faculty members. Even if intersubjectivity plays a fundamental role in science, the analyses of the reliability of the F1000Prime peer review system show a rather low level of agreement between faculty members. This result is in agreement with most other studies that have been published on the journal peer review system. Logistic regression models are used to investigate the convergent validity of the F1000Prime peer review system. As the results show, the proportion of highly cited papers among those selected by the faculty members is significantly higher than expected. In addition, better recommendation scores are also associated with higher performing papers.
  7. Egghe, L.; Bornmann, L.: Fallout and miss in journal peer review (2013) 0.00
    Abstract
    Purpose - The authors exploit the analogy between journal peer review and information retrieval in order to quantify some imperfections of journal peer review. Design/methodology/approach - The authors define fallout rate and missing rate in order to describe quantitatively the weak papers that were accepted and the strong papers that were missed, respectively. To assess the quality of manuscripts the authors use bibliometric measures. Findings - Fallout rate and missing rate are put in relation with the hitting rate and success rate. Conclusions are drawn on what fraction of weak papers will be accepted in order to have a certain fraction of strong accepted papers. Originality/value - The paper illustrates that these curves are new in peer review research when interpreted in the information retrieval terminology.
  8. Leydesdorff, L.; Bornmann, L.: Mapping (USPTO) patent data using overlays to Google Maps (2012) 0.00
    Abstract
    A technique is developed using patent information available online (at the U.S. Patent and Trademark Office) for the generation of Google Maps. The overlays indicate both the quantity and the quality of patents at the city level. This information is relevant for research questions in technology analysis, innovation studies, and evolutionary economics, as well as economic geography. The resulting maps can also be relevant for technological innovation policies and research and development management, because the U.S. market can be considered the leading market for patenting and patent competition. In addition to the maps, the routines provide quantitative data about the patents for statistical analysis. The cities on the map are colored according to the results of significance tests. The overlays are explored for the Netherlands as a "national system of innovations" and further elaborated in two cases of emerging technologies: ribonucleic acid interference (RNAi) and nanotechnology.
  9. Dobrota, M.; Bulajic, M.; Bornmann, L.; Jeremic, V.: A new approach to the QS university ranking using the composite I-distance indicator : uncertainty and sensitivity analyses (2016) 0.00
    Abstract
    Some major concerns of universities are to provide quality in higher education and enhance global competitiveness, thus ensuring a high global rank and an excellent performance evaluation. This article examines the Quacquarelli Symonds (QS) World University Ranking methodology, pointing to a drawback of using subjective, possibly biased, weightings to build a composite indicator (QS scores). We propose an alternative approach to creating QS scores, which is referred to as the composite I-distance indicator (CIDI) methodology. The main contribution is the proposal of a composite indicator weights correction based on the CIDI methodology. It leads to the improved stability and reduced uncertainty of the QS ranking system. The CIDI methodology is also applicable to other university rankings by proposing a specific statistical approach to creating a composite indicator.
  10. Leydesdorff, L.; Bornmann, L.: The operationalization of "fields" as WoS subject categories (WCs) in evaluative bibliometrics : the cases of "library and information science" and "science & technology studies" (2016) 0.00
    Abstract
    Normalization of citation scores using reference sets based on Web of Science subject categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to carry bibliometric normalization in evaluation practices because of "indexer effects." Can the compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
  11. Bornmann, L.; Schier, H.; Marx, W.; Daniel, H.-D.: Is interactive open access publishing able to identify high-impact submissions? : a study on the predictive validity of Atmospheric Chemistry and Physics by using percentile rank classes (2011) 0.00
    Abstract
    In a comprehensive research project, we investigated the predictive validity of selection decisions and reviewers' ratings at the open access journal Atmospheric Chemistry and Physics (ACP). ACP is a high-impact journal publishing papers on the Earth's atmosphere and the underlying chemical and physical processes. Scientific journals have to deal with the following question concerning the predictive validity: Are in fact the "best" scientific works selected from the manuscripts submitted? In this study we examined whether selecting the "best" manuscripts means selecting papers that after publication show top citation performance as compared to other papers in this research area. First, we appraised the citation impact of later published manuscripts based on the percentile citedness rank classes of the population distribution (scaling in a specific subfield). Second, we analyzed the association between the decisions (n = 677 accepted or rejected, but published elsewhere manuscripts) or ratings (reviewers' ratings for n = 315 manuscripts), respectively, and the citation impact classes of the manuscripts. The results confirm the predictive validity of the ACP peer review system.
  12. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019) 0.00
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10%-the proportion of the top-10% most-highly-cited papers assigned to a university-we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each eco-system of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.
  13. Bornmann, L.; Daniel, H.-D.: Selecting manuscripts for a high-impact journal through peer review : a citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere (2008) 0.00
    Abstract
    All journals that use peer review have to deal with the following question: Does the peer review system fulfill its declared objective to select the best scientific work? We investigated the journal peer-review process at Angewandte Chemie International Edition (AC-IE), one of the prime chemistry journals worldwide, and conducted a citation analysis for Communications that were accepted by the journal (n = 878) or rejected but published elsewhere (n = 959). The results of negative binomial-regression models show that holding all other model variables constant, being accepted by AC-IE increases the expected number of citations by up to 50%. A comparison of average citation counts (with 95% confidence intervals) of accepted and rejected (but published elsewhere) Communications with international scientific reference standards was undertaken. As reference standards, (a) mean citation counts for the journal set provided by Thomson Reuters corresponding to the field chemistry and (b) specific reference standards that refer to the subject areas of Chemical Abstracts were used. When compared to reference standards, the mean impact on chemical research is for the most part far above average not only for accepted Communications but also for rejected (but published elsewhere) Communications. However, average and below-average scientific impact is to be expected significantly less frequently for accepted Communications than for rejected Communications. All in all, the results of this study confirm that peer review at AC-IE is able to select the best scientific work with the highest impact on chemical research.