Search (54 results, page 2 of 3)

  • author_ss:"Bornmann, L."
  • year_i:[2010 TO 2020}
  1. Bornmann, L.: Interrater reliability and convergent validity of F1000Prime peer review (2015) 0.00
    Abstract
     Peer review is the backbone of modern science. F1000Prime is a post-publication peer review system for the biomedical literature (papers from medical and biological journals). This study is concerned with the interrater reliability and convergent validity of the peer recommendations formulated in the F1000Prime peer review system. The study is based on about 100,000 papers with recommendations from faculty members. Although intersubjectivity plays a fundamental role in science, the analyses of the reliability of the F1000Prime peer review system show a rather low level of agreement between faculty members. This result is in agreement with most other studies that have been published on the journal peer review system. Logistic regression models are used to investigate the convergent validity of the F1000Prime peer review system. As the results show, the proportion of highly cited papers among those selected by the faculty members is significantly higher than expected. In addition, better recommendation scores are also associated with better-performing papers.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2415-2426
  2. Bornmann, L.: Complex tasks and simple solutions : the use of heuristics in the evaluation of research (2015) 0.00
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.8, S.1738-1739
  3. Bornmann, L.; Thor, A.; Marx, W.; Schier, H.: The application of bibliometrics to research evaluation in the humanities and social sciences : an exploratory study using normalized Google Scholar data for the publications of a research institute (2016) 0.00
    Abstract
     In the humanities and social sciences, bibliometric methods for the assessment of research performance are (so far) less common. This study uses a concrete example in an attempt to evaluate a research institute from the area of social sciences and humanities with the help of data from Google Scholar (GS). In order to use GS for a bibliometric study, we developed procedures for the normalization of citation impact, building on the procedures of classical bibliometrics. In order to test the convergent validity of the normalized citation impact scores, we calculated normalized scores for a subset of the publications based on data from the Web of Science (WoS) and Scopus. Although the scores calculated with GS and with WoS/Scopus are not identical for the different publication types considered here, they are so similar that they result in the same assessment of the institute investigated in this study: for example, the institute's papers whose journals are covered in the WoS are cited at about an average rate (compared with the other papers in the journals).
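     The normalization described here can be sketched compactly: a paper's citation count is divided by the mean citation count of a reference set of papers of the same publication type and year. A minimal illustration in Python; the field names and grouping variables are assumptions for the sketch, not the authors' code:

       import pandas as pd

       # Illustrative publication records: citation counts plus the two
       # attributes that define the reference set in this sketch.
       pubs = pd.DataFrame({
           "citations": [12, 3, 7, 0, 25, 5],
           "pub_type":  ["article", "article", "book", "book", "article", "article"],
           "year":      [2010, 2010, 2011, 2011, 2010, 2011],
       })

       # Normalized citation impact: each paper's citations divided by the
       # mean of its reference set (same publication type and year).
       pubs["norm_impact"] = pubs["citations"] / pubs.groupby(
           ["pub_type", "year"])["citations"].transform("mean")
       print(pubs)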
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.11, S.2778-2789
  4. Bornmann, L.; Leydesdorff, L.: Which cities produce more excellent papers than can be expected? : a new mapping approach, using Google Maps, based on statistical significance testing (2011) 0.00
    Abstract
     The methods presented in this paper allow for a statistical analysis revealing centers of excellence around the world using programs that are freely available. Based on Web of Science data (a fee-based database), field-specific excellence can be identified in cities where highly cited papers were published more frequently than can be expected. Compared with the mapping approaches published hitherto, our approach is more analytically oriented in that it allows the assessment of an observed number of excellent papers for a city against the expected number. Top performers in output are cities in which authors are located who publish a statistically significantly higher number of highly cited papers than expected for these cities. As sample data for physics, chemistry, and psychology show, these cities do not necessarily have a high output of highly cited papers.
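     The observed-versus-expected comparison lends itself to a simple significance test: under the null hypothesis, 10% of a city's papers fall into the global top-10% class. A hedged sketch using a binomial test; the numbers are invented, and the paper's exact test may differ:

       from scipy.stats import binomtest

       papers_in_city = 500   # papers with an author address in the city
       highly_cited = 75      # of these, papers among the global top 10%
       expected_rate = 0.10   # expected share under the null hypothesis

       # One-sided test: does the city produce significantly more
       # highly cited papers than can be expected?
       result = binomtest(highly_cited, papers_in_city, expected_rate,
                          alternative="greater")
       print(result.pvalue)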
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.10, S.1954-1962
  5. Bornmann, L.: What is societal impact of research and how can it be assessed? : a literature survey (2013) 0.00
    Abstract
     Since the 1990s, the scope of research evaluations has become broader as the societal products (outputs), societal use (societal references), and societal benefits (changes in society) of research come into scope. Society can reap the benefits of successful research studies only if the results are converted into marketable and consumable products (e.g., medicaments, diagnostic tools, machines, and devices) or services. A series of different names has been introduced to refer to the societal impact of research: third-stream activities, societal benefits, societal quality, usefulness, public values, knowledge transfer, and societal relevance. Most of these names are concerned with the assessment of social, cultural, environmental, and economic returns (impact and effects) from results (research output) or products (research outcome) of publicly funded research. This review presents existing research on, and practices employed in, the assessment of societal impact in the form of a literature survey. The objective is for this review to serve as a basis for the development of robust and reliable methods of societal impact measurement.
    Series
    Advances in information science
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.2, S.217-233
  6. Marx, W.; Bornmann, L.; Barth, A.; Leydesdorff, L.: Detecting the historical roots of research fields by reference publication year spectroscopy (RPYS) (2014) 0.00
    Abstract
     We introduce the quantitative method named "Reference Publication Year Spectroscopy" (RPYS). With this method one can determine the historical roots of research fields and quantify their impact on current research. RPYS analyzes how frequently the references cited in the publications of a specific research field fall into each publication year. The origins of a field show up as more or less pronounced peaks, mostly caused by individual publications that are cited particularly frequently. In this study, we use research on graphene and on solar cells to illustrate how RPYS functions and what results it can deliver.
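     The counting step behind RPYS can be sketched in a few lines: tally the publication years of all cited references and flag years that stand out against their neighborhood. The peak criterion used here (deviation from a five-year median) is a common choice in the RPYS literature, not necessarily the authors' exact rule:

       from collections import Counter
       import statistics

       # Publication years of all references cited in the field's papers
       cited_years = [1905, 1905, 1905, 1931, 1958, 1958, 1958, 1958, 1960, 1961]

       counts = Counter(cited_years)
       for year in range(min(counts), max(counts) + 1):
           # Deviation of a year's count from the median of its 5-year window;
           # pronounced positive deviations mark candidate historical roots.
           window = [counts.get(year + d, 0) for d in range(-2, 3)]
           deviation = counts.get(year, 0) - statistics.median(window)
           if deviation > 0:
               print(year, counts[year], deviation)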
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.4, S.751-764
  7. Bornmann, L.; Wagner, C.; Leydesdorff, L.: BRICS countries and scientific excellence : a bibliometric analysis of most frequently cited papers (2015) 0.00
    Abstract
     The BRICS countries (Brazil, Russia, India, China, and South Africa) are notable for their increasing participation in science and technology. The governments of these countries have been boosting their investments in research and development to become part of the group of nations doing research at a world-class level. This study investigates the development of the BRICS countries in the domain of top-cited papers (top 10% and 1% most frequently cited papers) between 1990 and 2010. To assess the extent to which these countries have become important players at the top level, we compare the BRICS countries with the top-performing countries worldwide. As the analyses of the (annual) growth rates show, with the exception of Russia, the BRICS countries have increased their output in terms of most frequently cited papers at a higher rate than the top-cited countries worldwide. As an additional analysis, we generate coauthorship networks among authors of highly cited papers at four time points (1995, 2000, 2005, and 2010) to view changes in BRICS participation. Here, the results show that all BRICS countries succeeded in becoming part of this network, whereby the Chinese collaboration activities focus on the US.
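     The coauthorship networks mentioned in the abstract can be built by linking every pair of countries that appear together on a highly cited paper. A minimal sketch with networkx; the country lists are invented for illustration:

       from itertools import combinations
       import networkx as nx

       # Country affiliations of highly cited papers at one time point
       papers = [["CN", "US"], ["BR", "US", "DE"], ["IN", "CN"], ["CN", "US"]]

       G = nx.Graph()
       for countries in papers:
           # every pair of countries on a paper adds or strengthens an edge
           for a, b in combinations(sorted(set(countries)), 2):
               if G.has_edge(a, b):
                   G[a][b]["weight"] += 1
               else:
                   G.add_edge(a, b, weight=1)

       print(G.edges(data=True))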
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.7, S.1507-1513
  8. Bornmann, L.; Haunschild, R.: ¬An empirical look at the nature index (2017) 0.00
    Abstract
     In November 2014, the Nature Index (NI) was introduced (see http://www.natureindex.com) by the Nature Publishing Group (NPG). The NI comprises the primary research articles published in the past 12 months in a selection of reputable journals. Starting from two short comments on the NI (Haunschild & Bornmann, 2015a, 2015b), we undertake an empirical analysis of the NI using comprehensive country data. We investigate whether the huge effort of computing the NI is justified and whether the size-dependent NI indicators should be complemented by size-independent variants. The analysis uses data from the Max Planck Digital Library in-house database (which is based on Web of Science data) and from the NPG. In the first step of the analysis, we correlate the NI with other metrics that are simpler to generate than the NI. The resulting large correlation coefficients indicate that the NI produces results similar to those of simpler metrics. In the second step of the analysis, relative and size-independent variants of the NI are generated, which the NPG should present in addition. The size-dependent NI indicators favor large countries (or institutions), and the top-performing small countries (or institutions) do not come into the picture.
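     One size-independent variant of the kind called for here is straightforward: relate a country's NI article count to a measure of its total output. A sketch with invented numbers; the NI's own weighting of articles by author shares is ignored in this simplification:

       # Illustrative counts per country
       ni_articles = {"X": 1200, "Y": 95}       # articles in NI journals
       all_articles = {"X": 60000, "Y": 3800}   # all indexed articles

       # Size-independent variant: the share of a country's output that
       # appears in the NI journal set; small country Y now compares well.
       for country in ni_articles:
           print(country, ni_articles[country] / all_articles[country])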
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.653-659
  9. Bornmann, L.: How well does a university perform in comparison with its peers? : The use of odds, and odds ratios, for the comparison of institutional citation impact using the Leiden Rankings (2015) 0.00
    Abstract
    This study presents the calculation of odds, and odds ratios, for the comparison of the citation impact of universities in the Leiden Ranking. Odds and odds ratios can be used to measure the performance difference between a selected university and competing institutions, or the average of selected competitors, in a relatively simple but clear way.
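     The underlying arithmetic is elementary: convert each institution's PPtop10% share into odds and take the ratio. A sketch with made-up shares, not values from the Leiden Ranking:

       def odds(p: float) -> float:
           # odds that one of the institution's papers is in the top 10%
           return p / (1 - p)

       pp_university = 0.15   # PPtop10% of the selected university
       pp_competitor = 0.11   # PPtop10% of a competing institution

       # An odds ratio of about 1.43 here means the university's papers
       # have roughly 43% higher odds of reaching the top 10%.
       print(round(odds(pp_university) / odds(pp_competitor), 2))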
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2711-2713
  10. Mutz, R.; Bornmann, L.; Daniel, H.-D.: Testing for the fairness and predictive validity of research funding decisions : a multilevel multiple imputation for missing data approach using ex-ante and ex-post peer evaluation data from the Austrian science fund (2015) 0.00
    Abstract
     It is essential for research funding organizations to ensure both the validity and fairness of the grant approval procedure. The ex-ante peer evaluation (EXANTE) of N = 8,496 grant applications submitted to the Austrian Science Fund from 1999 to 2009 was statistically analyzed. For 1,689 funded research projects an ex-post peer evaluation (EXPOST) was also available; for the rest of the grant applications a multilevel missing data imputation approach was used to consider verification bias for the first time in peer-review research. Without imputation, the predictive validity of EXANTE was low (r = .26) but underestimated due to verification bias, and with imputation it was r = .49. That is, the decision-making procedure is capable of selecting the best research proposals for funding. In the EXANTE there were several potential biases (e.g., gender). With respect to the EXPOST there was only one real bias (discipline-specific and year-specific differential prediction). The novelty of this contribution is, first, the combining of theoretical concepts of validity and fairness with a missing data imputation approach to correct for verification bias and, second, multilevel modeling to test peer review-based funding decisions for both validity and fairness in terms of potential and real biases.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.11, S.2321-2339
  11. Bornmann, L.; Moya Anegón, F.de: What proportion of excellent papers makes an institution one of the best worldwide? : Specifying thresholds for the interpretation of the results of the SCImago Institutions Ranking and the Leiden Ranking (2014) 0.00
    Abstract
    University rankings generally present users with the problem of placing the results given for an institution in context. Only a comparison with the performance of all other institutions makes it possible to say exactly where an institution stands. In order to interpret the results of the SCImago Institutions Ranking (based on Scopus data) and the Leiden Ranking (based on Web of Science data), in this study we offer thresholds with which it is possible to assess whether an institution belongs to the top 1%, top 5%, top 10%, top 25%, or top 50% of institutions in the world. The thresholds are based on the excellence rate or PPtop 10%. Both indicators measure the proportion of an institution's publications which belong to the 10% most frequently cited publications and are the most important indicators for measuring institutional impact. For example, while an institution must achieve a value of 24.63% in the Leiden Ranking 2013 to be considered one of the top 1% of institutions worldwide, the SCImago Institutions Ranking requires 30.2%.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.4, S.732-736
  12. Bornmann, L.; Moya Anegón, F. de; Mutz, R.: Do universities or research institutions with a specific subject profile have an advantage or a disadvantage in institutional rankings? (2013) 0.00
    Abstract
     Using data compiled for the SCImago Institutions Ranking, we look at whether the type of subject-area profile an institution (university or research-focused institution) has, in terms of the fields researched, influences its ranking position. We used latent class analysis to categorize institutions based on their publications in certain subject areas. Even though this categorization does not relate directly to scientific performance, our results show that it exercises an important influence on the outcome of a performance measurement: certain subject-area types of institutions have an advantage in the ranking positions when compared with others. This advantage manifests itself not only when performance is measured with an indicator that is not field-normalized but also for indicators that are field-normalized.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2310-2316
  13. Bornmann, L.; Bauer, J.: Which of the world's institutions employ the most highly cited researchers : an analysis of the data from highlycited.com (2015) 0.00
    Abstract
     In 2014, Thomson Reuters published a list of the most highly cited researchers worldwide (highlycited.com). Because the data are freely available for downloading and include the names of the researchers' institutions, we produced a ranking of the institutions on the basis of the number of highly cited researchers per institution. This ranking is intended to be a helpful supplement to other available institutional rankings.
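     The ranking itself is a simple aggregation of the downloadable list: count highly cited researchers per institution and sort. A sketch with invented rows; the real file's column names may differ:

       import pandas as pd

       # Illustrative rows from the list: researcher name and institution
       hcr = pd.DataFrame({
           "researcher":  ["A", "B", "C", "D", "E"],
           "institution": ["Univ X", "Univ X", "Univ Y", "Univ X", "Univ Y"],
       })

       # Institutions ranked by their number of highly cited researchers
       print(hcr["institution"].value_counts())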
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.10, S.2146-2148
  14. Bornmann, L.: How much does the expected number of citations for a publication change if it contains the address of a specific scientific institute? : a new approach for the analysis of citation data on the institutional level based on regression models (2016) 0.00
    Abstract
     Citation data for institutes are generally provided as numbers of citations or as relative citation rates (as, for example, in the Leiden Ranking). These numbers can then be compared between the institutes. This study aims to present a new approach for the evaluation of citation data at the institutional level, based on regression models. As example data, the study includes all articles and reviews from the Web of Science for the publication year 2003 (n = 886,416 papers). The study is based on an in-house database of the Max Planck Society. The study investigates how much the expected number of citations for a publication changes if it contains the address of an institute. The calculation of the expected values allows one, on the one hand, to investigate how the citation impact of an institute's papers compares with the total of all papers. On the other hand, the expected values for several institutes can be compared with one another or with a set of randomly selected publications. Besides the institutes, the regression models include factors which can be assumed to have a general influence on citation counts (e.g., the number of authors).
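     A hedged sketch of the core idea: regress citation counts on an indicator for the institute's address plus controls, then read the institute effect off the fitted model. A Poisson model is used for illustration; the abstract does not specify the paper's exact model:

       import numpy as np
       import pandas as pd
       import statsmodels.api as sm
       import statsmodels.formula.api as smf

       # Illustrative paper-level data: citations, institute address (0/1),
       # and one control known to influence citation counts.
       papers = pd.DataFrame({
           "citations": [3, 10, 1, 25, 7, 0, 14, 5],
           "institute": [0, 1, 0, 1, 1, 0, 1, 0],
           "n_authors": [1, 4, 2, 6, 3, 1, 5, 2],
       })

       model = smf.glm("citations ~ institute + n_authors", data=papers,
                       family=sm.families.Poisson()).fit()

       # exp(coefficient): multiplicative change in the expected number of
       # citations when a paper carries the institute's address.
       print(np.exp(model.params["institute"]))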
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.9, S.2274-2282
  15. Leydesdorff, L.; Bornmann, L.: Mapping (USPTO) patent data using overlays to Google Maps (2012) 0.00
    Abstract
     A technique is developed that uses patent information available online (at the U.S. Patent and Trademark Office) to generate overlays for Google Maps. The overlays indicate both the quantity and the quality of patents at the city level. This information is relevant for research questions in technology analysis, innovation studies, and evolutionary economics, as well as economic geography. The resulting maps can also be relevant for technological innovation policies and research and development management, because the U.S. market can be considered the leading market for patenting and patent competition. In addition to the maps, the routines provide quantitative data about the patents for statistical analysis. The cities on the map are colored according to the results of significance tests. The overlays are explored for the Netherlands as a "national system of innovations" and further elaborated in two cases of emerging technologies: ribonucleic acid interference (RNAi) and nanotechnology.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1442-1458
  16. Ye, F.Y.; Bornmann, L.: "Smart girls" versus "sleeping beauties" in the sciences : the identification of instant and delayed recognition by using the citation angle (2018) 0.00
    Abstract
     In recent years, a number of studies have introduced methods for identifying papers with delayed recognition (so-called "sleeping beauties," SBs) or have presented single publications as cases of SBs. Most recently, Ke, Ferrara, Radicchi, and Flammini (2015, Proceedings of the National Academy of Sciences of the USA, 112(24), 7426-7431) proposed the so-called "beauty coefficient" (denoted as B) to quantify how much a given paper can be considered a paper with delayed recognition. In this study, the new term smart girl (SG) is suggested to differentiate instant credit or "flashes in the pan" from SBs. Although SG and SB are qualitatively defined, the dynamic citation angle β is introduced in this study as a simple way of identifying SGs and SBs quantitatively, complementing the beauty coefficient B. The citation angles for all articles from 1980 (n = 166,870) in the natural sciences are calculated to identify SGs and SBs and their extent. We reveal that about 3% of the articles are typical SGs and about 0.1% typical SBs. The potential advantages of the citation angle approach are explained.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.3, S.359-367
  17. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Multilevel-statistical reformulation of citation-based university rankings : the Leiden ranking 2011/2012 (2013) 0.00
    Abstract
     Since the 1990s, with the heightened competition and the strong growth of the international higher education market, an increasing number of rankings have been created that measure the scientific performance of an institution based on data. The Leiden Ranking 2011/2012 (LR) was published early in 2012. Starting from Goldstein and Spiegelhalter's (1996) recommendations for conducting quantitative comparisons among institutions, in this study we undertook a reformulation of the LR by means of multilevel regression models. First, with our models we replicated the ranking results; second, the reanalysis of the LR data showed that only 5% of the total variation in PPtop10% is attributable to differences between universities. Beyond that, about 80% of the variation between universities can be explained by differences among countries. If covariates are included in the model, the differences among most of the universities become meaningless. Our findings have implications for conducting university rankings in general and for the LR in particular. For example, with Goldstein-adjusted confidence intervals it is possible to interpret the significance of differences among universities meaningfully: rank differences among universities should be interpreted as meaningful only if their confidence intervals do not overlap.
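     The 5% figure is a variance decomposition. The idea can be illustrated with a simple one-way decomposition of university scores by country; the study itself uses multilevel regression models, which this sketch does not reproduce:

       import pandas as pd

       # Illustrative PPtop10% values for universities nested in countries
       df = pd.DataFrame({
           "country": ["A", "A", "A", "B", "B", "B"],
           "pptop10": [11.2, 12.1, 10.8, 15.3, 14.7, 16.0],
       })

       grand_mean = df["pptop10"].mean()
       country_means = df.groupby("country")["pptop10"].transform("mean")

       between = ((country_means - grand_mean) ** 2).mean()
       within = ((df["pptop10"] - country_means) ** 2).mean()

       # Share of the total variation attributable to country differences
       print(between / (between + within))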
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.8, S.1649-1658
  18. Bornmann, L.; Marx, W.: The wisdom of citing scientists (2014) 0.00
    Abstract
     This Brief Communication discusses the benefits of citation analysis in research evaluation based on Galton's "Wisdom of Crowds" (1907). Citations are based on the assessments of many, which is why they can be considered to have some credibility. However, we show that citations are incomplete assessments and that one cannot assume that a high number of citations correlates with a high level of usefulness. Only when one knows that a rarely cited paper has been widely read is it possible to say, strictly speaking, that it was obviously of little use for further research. Using a comparison with "like" data, we try to demonstrate that cited-reference analysis allows for a more meaningful analysis of bibliometric data than times-cited analysis.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.6, S.1288-1292
  19. Leydesdorff, L.; Bornmann, L.; Mutz, R.; Opthof, T.: Turning the tables on citation analysis one more time : principles for comparing sets of documents (2011) 0.00
    Abstract
     We submit newly developed citation impact indicators based not on arithmetic averages of citations but on percentile ranks. Citation distributions are, as a rule, highly skewed and should not be arithmetically averaged. With percentile ranks, the citation score of each paper is rated in terms of its percentile in the citation distribution. The percentile-rank approach allows for the formulation of a more abstract indicator scheme that can be used to organize and/or schematize different impact indicators according to three degrees of freedom: the selection of the reference sets, the evaluation criteria, and the choice of whether or not to define the publication sets as independent. Bibliometric data of seven principal investigators (PIs) of the Academic Medical Center of the University of Amsterdam are used as an exemplary dataset. We demonstrate that the proposed family of indicators [R(6), R(100), R(6, k), R(100, k)] is an improvement on averages-based indicators because one can account for the shape of the distributions of citations over papers.
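     The percentile-rank step is easy to state concretely: each paper's citation count is replaced by its percentile within a reference distribution, and indicators are then built on these ranks. A sketch; aggregating by a plain mean of percentiles is an illustrative choice, not the exact definition of R(100):

       from scipy.stats import percentileofscore

       # Citation counts of the reference set and of one PI's papers
       reference = [0, 0, 1, 2, 2, 3, 5, 8, 13, 40]
       pi_papers = [2, 8, 40]

       # Percentile rank of each paper in the reference distribution
       ranks = [percentileofscore(reference, c, kind="weak") for c in pi_papers]

       # Mean percentile rank as a simple percentile-based indicator;
       # unlike the arithmetic mean of citations, it is robust to skew.
       print(ranks, sum(ranks) / len(ranks))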
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.7, S.1370-1381