Search (60 results, page 3 of 3)

  • Filter: author_ss:"Bornmann, L."
  • Filter: language_ss:"e"
  1. Bornmann, L.; Daniel, H.-D.: Universality of citation distributions : a validation of Radicchi et al.'s relative indicator cf = c/c0 at the micro level using data from chemistry (2009) 0.01
    Abstract
    In a recently published PNAS paper, Radicchi, Fortunato, and Castellano (2008) propose the relative indicator cf as an unbiased indicator for citation performance across disciplines (fields, subject areas). To calculate cf, the citation rate for a single paper is divided by the average number of citations for all papers in the discipline in which the single paper has been categorized. cf values are said to lead to a universality of discipline-specific citation distributions. Using a comprehensive dataset of an evaluation study on Angewandte Chemie International Edition (AC-IE), we tested the advantage of using this indicator in practical application at the micro level, as compared with (1) simple citation rates, and (2) z-scores, which have been used in psychological testing for many years for normalization of test scores. To calculate z-scores, the mean number of citations of the papers within a discipline is subtracted from the citation rate of a single paper, and the difference is then divided by the citations' standard deviation for a discipline. Our results indicate that z-scores are better suited than cf values to produce universality of discipline-specific citation distributions.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.8, S.1664-1670
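    The abstract above gives both formulas explicitly: cf divides a paper's citation count by the mean citation count of its discipline, and the z-score subtracts the discipline mean from the paper's citation count and divides by the discipline's standard deviation. A minimal sketch of both calculations; the citation counts below are invented for illustration.

```python
import statistics

def cf_indicator(paper_citations, field_citations):
    """Relative indicator cf (Radicchi et al., 2008): a paper's citation
    count divided by the mean citation count of its discipline."""
    return paper_citations / statistics.mean(field_citations)

def z_score(paper_citations, field_citations):
    """z-score: (paper's citations - discipline mean) / discipline
    standard deviation, as used for normalizing test scores."""
    mu = statistics.mean(field_citations)
    sigma = statistics.stdev(field_citations)
    return (paper_citations - mu) / sigma

# Hypothetical discipline: citation counts of all papers in the field.
field = [2, 5, 8, 12, 15, 20, 25, 30, 40, 60]
print(cf_indicator(30, field))  # ~1.38
print(z_score(30, field))       # ~0.46
```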
  2. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Multilevel-statistical reformulation of citation-based university rankings : the Leiden ranking 2011/2012 (2013) 0.01
    Abstract
    Since the 1990s, with the heightened competition and the strong growth of the international higher education market, an increasing number of rankings have been created that measure the scientific performance of an institution based on data. The Leiden Ranking 2011/2012 (LR) was published early in 2012. Starting from Goldstein and Spiegelhalter's (1996) recommendations for conducting quantitative comparisons among institutions, in this study we undertook a reformulation of the LR by means of multilevel regression models. First, with our models we replicated the ranking results; second, the reanalysis of the LR data showed that only 5% of the PPtop10% total variation is attributable to differences between universities. Beyond that, about 80% of the variation between universities can be explained by differences among countries. If covariates are included in the model the differences among most of the universities become meaningless. Our findings have implications for conducting university rankings in general and for the LR in particular. For example, with Goldstein-adjusted confidence intervals, it is possible to interpret the significance of differences among universities meaningfully: Rank differences among universities should be interpreted as meaningful only if their confidence intervals do not overlap.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.8, S.1649-1658
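    The reanalysis above decomposes the variation in PPtop10% across the levels of a multilevel model (universities nested in countries). The sketch below is not the authors' specification; it only shows how a random-intercept model and the intraclass correlation quantify the share of variation attributable to a grouping level, here country. The data frame and all its values are invented.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented university-level data: PPtop10% values and the country of each
# university (far too small to be meaningful, purely illustrative).
df = pd.DataFrame({
    "pp_top10": [10.2, 12.5, 8.9, 15.1, 9.7, 11.3, 14.0, 7.8, 10.9, 13.2],
    "country":  ["NL", "NL", "DE", "CH", "DE", "NL", "CH", "DE", "NL", "CH"],
})

# Intercept-only random-intercept model with country as the grouping level.
result = smf.mixedlm("pp_top10 ~ 1", df, groups=df["country"]).fit()

# Intraclass correlation: between-country variance / total variance, i.e. the
# share of the variation among universities explained by country differences.
var_between = float(result.cov_re.iloc[0, 0])
var_within = float(result.scale)
icc = var_between / (var_between + var_within)
print(f"share of variation at the country level: {icc:.2f}")
```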
  3. Bornmann, L.; Schier, H.; Marx, W.; Daniel, H.-D.: Is interactive open access publishing able to identify high-impact submissions? : a study on the predictive validity of Atmospheric Chemistry and Physics by using percentile rank classes (2011) 0.01
    Abstract
    In a comprehensive research project, we investigated the predictive validity of selection decisions and reviewers' ratings at the open access journal Atmospheric Chemistry and Physics (ACP). ACP is a high-impact journal publishing papers on the Earth's atmosphere and the underlying chemical and physical processes. Scientific journals have to deal with the following question concerning the predictive validity: Are in fact the "best" scientific works selected from the manuscripts submitted? In this study we examined whether selecting the "best" manuscripts means selecting papers that after publication show top citation performance as compared to other papers in this research area. First, we appraised the citation impact of later published manuscripts based on the percentile citedness rank classes of the population distribution (scaling in a specific subfield). Second, we analyzed the association between the decisions (n = 677 accepted or rejected, but published elsewhere manuscripts) or ratings (reviewers' ratings for n = 315 manuscripts), respectively, and the citation impact classes of the manuscripts. The results confirm the predictive validity of the ACP peer review system.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.1, S.61-71
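    The study above bins each published manuscript into a percentile citedness rank class and then relates the class to the editorial decision or the reviewers' ratings. A minimal sketch of the binning step only; the class boundaries, labels, and example data are hypothetical and not taken from the paper.

```python
import bisect

def percentile_rank_class(percentile, boundaries=(50, 75, 90, 95, 99)):
    """Map a citation percentile (0-100) to a rank class label.
    Boundaries and labels are illustrative, not the study's own classes."""
    labels = ["bottom 50%", "50-75%", "75-90%", "90-95%", "95-99%", "top 1%"]
    return labels[bisect.bisect_right(boundaries, percentile)]

# Hypothetical manuscripts: (editorial decision, citation percentile).
manuscripts = [
    ("accepted", 96.0),
    ("accepted", 82.5),
    ("rejected, published elsewhere", 40.0),
    ("rejected, published elsewhere", 91.0),
]
for decision, pct in manuscripts:
    print(f"{decision}: {percentile_rank_class(pct)}")
```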
  4. Bornmann, L.; Haunschild, R.: An empirical look at the nature index (2017) 0.01
    Abstract
    In November 2014, the Nature Index (NI) was introduced (see http://www.natureindex.com) by the Nature Publishing Group (NPG). The NI comprises the primary research articles published in the past 12 months in a selection of reputable journals. Starting from two short comments on the NI (Haunschild & Bornmann, 2015a, 2015b), we undertake an empirical analysis of the NI using comprehensive country data. We investigate whether the huge efforts of computing the NI are justified and whether the size-dependent NI indicators should be complemented by size-independent variants. The analysis uses data from the Max Planck Digital Library in-house database (which is based on Web of Science data) and from the NPG. In the first step of the analysis, we correlate the NI with other metrics that are simpler to generate than the NI. The resulting large correlation coefficients point out that the NI produces similar results as simpler solutions. In the second step of the analysis, relative and size-independent variants of the NI are generated that should be additionally presented by the NPG. The size-dependent NI indicators favor large countries (or institutions) and the top-performing small countries (or institutions) do not come into the picture.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.653-659
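    The analysis above argues that the size-dependent NI favors large countries and should be complemented by size-independent variants. A minimal sketch of one such relative variant (NI articles per 1,000 papers); the column names and all numbers are invented, and the NPG's own indicators may be defined differently.

```python
import pandas as pd

# Invented country-level data: NI article counts and total publication
# output; none of these numbers come from the NPG or the paper.
data = pd.DataFrame({
    "country": ["US", "CN", "DE", "CH"],
    "ni_count": [18000, 9000, 4500, 1500],
    "total_papers": [600000, 450000, 150000, 40000],
})

# Size-dependent view: the raw NI count, which favors large countries.
print(data.sort_values("ni_count", ascending=False)[["country", "ni_count"]])

# One possible size-independent variant: NI articles per 1,000 papers.
data["ni_per_1000_papers"] = 1000 * data["ni_count"] / data["total_papers"]
print(data.sort_values("ni_per_1000_papers", ascending=False)
          [["country", "ni_per_1000_papers"]])
```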
  5. Leydesdorff, L.; Bornmann, L.: Mapping (USPTO) patent data using overlays to Google Maps (2012) 0.01
    Abstract
    A technique is developed using patent information available online (at the U.S. Patent and Trademark Office) for the generation of Google Maps. The overlays indicate both the quantity and the quality of patents at the city level. This information is relevant for research questions in technology analysis, innovation studies, and evolutionary economics, as well as economic geography. The resulting maps can also be relevant for technological innovation policies and research and development management, because the U.S. market can be considered the leading market for patenting and patent competition. In addition to the maps, the routines provide quantitative data about the patents for statistical analysis. The cities on the map are colored according to the results of significance tests. The overlays are explored for the Netherlands as a "national system of innovations" and further elaborated in two cases of emerging technologies: ribonucleic acid interference (RNAi) and nanotechnology.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1442-1458
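    The paper above overlays city-level patent quantity and quality on web maps, with colors reflecting significance tests. The sketch below only illustrates the general overlay idea using folium (a Leaflet wrapper) rather than the Google Maps routines described; the cities, counts, and colors are invented.

```python
import folium

# Invented city-level patent data: (city, lat, lon, patent count, color from
# a significance test: green = above expectation, red = below, orange = n.s.).
cities = [
    ("Eindhoven", 51.44, 5.47, 1200, "green"),
    ("Amsterdam", 52.37, 4.90, 300, "red"),
    ("Delft", 52.01, 4.36, 450, "orange"),
]

m = folium.Map(location=[52.1, 5.2], zoom_start=7)
for name, lat, lon, count, color in cities:
    folium.CircleMarker(
        location=[lat, lon],
        radius=max(5, count ** 0.5 / 3),  # marker size scales with patent count
        color=color, fill=True, fill_opacity=0.6,
        popup=f"{name}: {count} patents",
    ).add_to(m)
m.save("patent_overlay.html")  # open in a browser to inspect the overlay
```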
  6. Dobrota, M.; Bulajic, M.; Bornmann, L.; Jeremic, V.: A new approach to the QS university ranking using the composite I-distance indicator : uncertainty and sensitivity analyses (2016) 0.01
    Abstract
    Some major concerns of universities are to provide quality in higher education and enhance global competitiveness, thus ensuring a high global rank and an excellent performance evaluation. This article examines the Quacquarelli Symonds (QS) World University Ranking methodology, pointing to a drawback of using subjective, possibly biased, weightings to build a composite indicator (QS scores). We propose an alternative approach to creating QS scores, which is referred to as the composite I-distance indicator (CIDI) methodology. The main contribution is the proposal of a composite indicator weights correction based on the CIDI methodology. It leads to the improved stability and reduced uncertainty of the QS ranking system. The CIDI methodology is also applicable to other university rankings by proposing a specific statistical approach to creating a composite indicator.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.200-211
  7. Bornmann, L.; Haunschild, R.: Relative Citation Ratio (RCR) : an empirical attempt to study a new field-normalized bibliometric indicator (2017) 0.01
    Abstract
    Hutchins, Yuan, Anderson, and Santangelo (2015) proposed the Relative Citation Ratio (RCR) as a new field-normalized impact indicator. This study investigates the RCR by correlating it on the level of single publications with established field-normalized indicators and assessments of the publications by peers. We find that the RCR correlates highly with established field-normalized indicators, but the correlation between RCR and peer assessments is only low to medium.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.1064-1067
  8. Leydesdorff, L.; Radicchi, F.; Bornmann, L.; Castellano, C.; Nooy, W. de: Field-normalized impact factors (IFs) : a comparison of rescaling and fractionally counted IFs (2013) 0.00
    Abstract
    Two methods for comparing impact factors and citation rates across fields of science are tested against each other using citations to the 3,705 journals in the Science Citation Index 2010 (CD-Rom version of SCI) and the 13 field categories used for the Science and Engineering Indicators of the U.S. National Science Board. We compare (a) normalization by counting citations in proportion to the length of the reference list (1/N of references) with (b) rescaling by dividing citation scores by the arithmetic mean of the citation rate of the cluster. Rescaling is analytical and therefore independent of the quality of the attribution to the sets, whereas fractional counting provides an empirical strategy for normalization among sets (by evaluating the between-group variance). By the fairness test of Radicchi and Castellano, rescaling outperforms fractional counting of citations for reasons that we consider.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2299-2309
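    The abstract above defines the two normalizations explicitly: (a) fractional counting weights each citation by 1/N, where N is the length of the citing paper's reference list, and (b) rescaling divides a paper's citation score by the arithmetic mean citation rate of its cluster. A minimal sketch of both with invented numbers.

```python
from statistics import mean

# (a) Fractional counting: each citation received is weighted by 1/N, where
# N is the length of the citing paper's reference list (invented lengths).
citing_reference_list_lengths = [20, 35, 50, 10]
fractional_count = sum(1.0 / n for n in citing_reference_list_lengths)
print(f"fractionally counted citations: {fractional_count:.3f}")  # ~0.199

# (b) Rescaling: divide a paper's citation count by the arithmetic mean
# citation rate of the field/cluster it is assigned to (invented numbers).
paper_citations = 4
cluster_citation_rates = [1, 3, 2, 8, 5, 4, 7]
rescaled = paper_citations / mean(cluster_citation_rates)
print(f"rescaled citation score: {rescaled:.3f}")  # ~0.933
```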
  9. Bornmann, L.: The reception of publications by scientists in the early days of modern science (2014) 0.00
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.10, S.2160-2161
  10. Bornmann, L.; Marx, W.: Distributions instead of single numbers : percentiles and beam plots for the assessment of single researchers (2014) 0.00
    Abstract
    Citations measure an aspect of scientific quality: the impact of publications (A.F.J. van Raan, 1996). Percentiles normalize the impact of papers with respect to their publication year and field without using the arithmetic average. They are suitable for visualizing the performance of a single scientist. Beam plots make it possible to present the distributions of percentiles in the different publication years combined with the medians from these percentiles within each year and across all years.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.1, S.206-208
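    Percentiles, as used above, place a paper's citation count within a reference set of papers from the same field and publication year. A minimal sketch under one common definition (the share of papers cited no more often than the paper in question); the counts are invented and the authors' exact percentile definition may differ.

```python
def citation_percentile(paper_citations, reference_set):
    """Percentile of a paper within its reference set (papers from the same
    field and publication year): share of papers cited no more often."""
    below_or_equal = sum(1 for c in reference_set if c <= paper_citations)
    return 100.0 * below_or_equal / len(reference_set)

# Invented reference set: citation counts of papers from one field and year.
reference_set = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(citation_percentile(8, reference_set))   # 70.0
print(citation_percentile(34, reference_set))  # 100.0
```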
  11. Bornmann, L.; Bauer, J.; Haunschild, R.: Distribution of women and men among highly cited scientists (2015) 0.00
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2715-2716
  12. Mutz, R.; Bornmann, L.; Daniel, H.-D.: Testing for the fairness and predictive validity of research funding decisions : a multilevel multiple imputation for missing data approach using ex-ante and ex-post peer evaluation data from the Austrian science fund (2015) 0.00
    Abstract
    It is essential for research funding organizations to ensure both the validity and fairness of the grant approval procedure. The ex-ante peer evaluation (EXANTE) of N = 8,496 grant applications submitted to the Austrian Science Fund from 1999 to 2009 was statistically analyzed. For 1,689 funded research projects an ex-post peer evaluation (EXPOST) was also available; for the rest of the grant applications a multilevel missing data imputation approach was used to consider verification bias for the first time in peer-review research. Without imputation, the predictive validity of EXANTE was low (r = .26) but underestimated due to verification bias, and with imputation it was r = .49. That is, the decision-making procedure is capable of selecting the best research proposals for funding. In the EXANTE there were several potential biases (e.g., gender). With respect to the EXPOST there was only one real bias (discipline-specific and year-specific differential prediction). The novelty of this contribution is, first, the combining of theoretical concepts of validity and fairness with a missing data imputation approach to correct for verification bias and, second, multilevel modeling to test peer review-based funding decisions for both validity and fairness in terms of potential and real biases.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.11, S.2321-2339
  13. Bornmann, L.; Marx, W.: The Anna Karenina principle : a way of thinking about success in science (2012) 0.00
    Abstract
    The first sentence of Leo Tolstoy's (1875-1877/2001) novel Anna Karenina is: "Happy families are all alike; every unhappy family is unhappy in its own way." Here, Tolstoy means that for a family to be happy, several key aspects must be given (e.g., good health of all family members, acceptable financial security, and mutual affection). If there is a deficiency in any one or more of these key aspects, the family will be unhappy. In this article, we introduce the Anna Karenina principle as a way of thinking about success in science in three central areas in (modern) science: (a) peer review of research grant proposals and manuscripts (money and journal space as scarce resources), (b) citation of publications (reception as a scarce resource), and (c) new scientific discoveries (recognition as a scarce resource). If resources are scarce at the highly competitive research front (journal space, funds, reception, and recognition), there can be success only when several key prerequisites for the allocation of the resources are fulfilled. If any one of these prerequisites is not fulfilled, the grant proposal, manuscript submission, the published paper, or the discovery will not be successful.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.10, S.2037-2051
  14. Ye, F.Y.; Bornmann, L.: "Smart girls" versus "sleeping beauties" in the sciences : the identification of instant and delayed recognition by using the citation angle (2018) 0.00
    Abstract
    In recent years, a number of studies have introduced methods for identifying papers with delayed recognition (so called "sleeping beauties," SBs) or have presented single publications as cases of SBs. Most recently, Ke, Ferrara, Radicchi, and Flammini (2015, Proceedings of the National Academy of Sciences of the USA, 112(24), 7426-7431) proposed the so called "beauty coefficient" (denoted as B) to quantify how much a given paper can be considered as a paper with delayed recognition. In this study, the new term smart girl (SG) is suggested to differentiate instant credit or "flashes in the pan" from SBs. Although SG and SB are qualitatively defined, the dynamic citation angle β is introduced in this study as a simple way for identifying SGs and SBs quantitatively - complementing the beauty coefficient B. The citation angles for all articles from 1980 (n = 166,870) in natural sciences are calculated for identifying SGs and SBs and their extent. We reveal that about 3% of the articles are typical SGs and about 0.1% typical SBs. The potential advantages of the citation angle approach are explained.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.3, S.359-367
  15. Bornmann, L.: Nature's top 100 revisited (2015) 0.00
    Content
    Relates to: Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2714. Cf.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23554/abstract.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.10, S.2166
  16. Bornmann, L.: Scientific peer review (2011) 0.00
    Source
    Annual review of information science and technology. 45(2011) no.1, S.197-245
  17. Bornmann, L.: Is there currently a scientific revolution in Scientometrics? (2014) 0.00
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.3, S.647-648
  18. Bornmann, L.: What do altmetrics counts mean? : a plea for content analyses (2016) 0.00
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.1016-1017
  19. Besselaar, P. van den; Wagner, C.; Bornmann, L.: Correct assumptions? (2016) 0.00
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1779
  20. Leydesdorff, L.; Wagner, C.; Bornmann, L.: Replicability and the public/private divide (2016) 0.00
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1777-1778