Search (53 results, page 2 of 3)

  • author_ss:"Bornmann, L."
  • year_i:[2010 TO 2020}
  1. Bornmann, L.: Interrater reliability and convergent validity of F1000Prime peer review (2015)
    Abstract
    Peer review is the backbone of modern science. F1000Prime is a post-publication peer review system for the biomedical literature (papers from medical and biological journals). This study is concerned with the interrater reliability and convergent validity of the peer recommendations formulated in the F1000Prime peer review system. The study is based on about 100,000 papers with recommendations from faculty members. Although intersubjectivity plays a fundamental role in science, the analyses of the reliability of the F1000Prime peer review system show a rather low level of agreement between faculty members. This result is in agreement with most other studies that have been published on the journal peer review system. Logistic regression models are used to investigate the convergent validity of the F1000Prime peer review system. As the results show, the proportion of highly cited papers among those selected by the faculty members is significantly higher than expected. In addition, better recommendation scores are also associated with higher-performing papers.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2415-2426
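    The low agreement reported in entry 1 can be illustrated with a chance-corrected agreement coefficient. The sketch below computes Cohen's kappa for two hypothetical raters; the recommendation labels are invented, and kappa stands in for the reliability statistics actually used in the paper.

      from collections import Counter

      def cohens_kappa(ratings_a, ratings_b):
          """Chance-corrected agreement between two raters."""
          n = len(ratings_a)
          observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
          counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
          expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / n ** 2
          return (observed - expected) / (1 - expected)

      # Hypothetical F1000Prime-style recommendations from two faculty members
      rater_a = ["good", "good", "very good", "exceptional", "good", "very good"]
      rater_b = ["good", "very good", "good", "exceptional", "good", "good"]
      print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.14: rather low agreement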
  2. Bornmann, L.: How well does a university perform in comparison with its peers? : The use of odds, and odds ratios, for the comparison of institutional citation impact using the Leiden Rankings (2015)
    Abstract
    This study presents the calculation of odds, and odds ratios, for the comparison of the citation impact of universities in the Leiden Ranking. Odds and odds ratios can be used to measure the performance difference between a selected university and competing institutions, or the average of selected competitors, in a relatively simple but clear way.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2711-2713
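    The odds comparison in entry 2 reduces to two standard definitions. A minimal sketch, with invented PPtop10% values (the share of a university's papers among the 10% most cited):

      def odds(p):
          """Odds of an event with probability p."""
          return p / (1 - p)

      # Hypothetical proportions of top-10% papers for a university and a competitor
      pp_university, pp_competitor = 0.15, 0.10

      odds_ratio = odds(pp_university) / odds(pp_competitor)
      print(f"odds ratio: {odds_ratio:.2f}")  # 1.59
      # A value above 1 means the university's papers reach the top 10%
      # more often, relative to the competitor's.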
  3. Leydesdorff, L.; Zhou, P.; Bornmann, L.: How can journal impact factors be normalized across fields of science? : An assessment in terms of percentile ranks and fractional counts (2013)
    Abstract
    Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (a) fractional counting on the impact factor (IF) and (b) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the 2-year IF (IF2), we consider the 5-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/NSF (National Science Foundation) provides statistically significant between-field effects. Using fractional counting, the between-field variance is reduced by 91.7% in the case of IF5, and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to a significant improvement in the normalization across fields.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.1, S.96-107
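    A toy sketch of the fractional counting studied in entry 3, under the common citing-side convention that each citation is weighted by one over the number of references in the citing paper; the citation records are invented.

      # Each citing paper contributes 1 (integer counting) or 1/n_refs
      # (fractional counting) to every journal it cites.
      citing_papers = [
          {"cites": ["J1", "J2"], "n_refs": 20},
          {"cites": ["J1"], "n_refs": 5},
          {"cites": ["J2"], "n_refs": 50},
      ]

      integer_counts, fractional_counts = {}, {}
      for paper in citing_papers:
          for journal in paper["cites"]:
              integer_counts[journal] = integer_counts.get(journal, 0) + 1
              fractional_counts[journal] = (
                  fractional_counts.get(journal, 0.0) + 1 / paper["n_refs"]
              )

      print(integer_counts)     # {'J1': 2, 'J2': 2}
      print(fractional_counts)  # {'J1': 0.25, 'J2': 0.07}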
  4. Bornmann, L.; Wagner, C.; Leydesdorff, L.: BRICS countries and scientific excellence : a bibliometric analysis of most frequently cited papers (2015)
    Abstract
    The BRICS countries (Brazil, Russia, India, China, and South Africa) are notable for their increasing participation in science and technology. The governments of these countries have been boosting their investments in research and development to become part of the group of nations doing research at a world-class level. This study investigates the development of the BRICS countries in the domain of top-cited papers (top 10% and 1% most frequently cited papers) between 1990 and 2010. To assess the extent to which these countries have become important players at the top level, we compare the BRICS countries with the top-performing countries worldwide. As the analyses of the (annual) growth rates show, with the exception of Russia, the BRICS countries have increased their output in terms of most frequently cited papers at a higher rate than the top-cited countries worldwide. By way of additional analysis, we generate coauthorship networks among authors of highly cited papers for 4 time points to view changes in BRICS participation (1995, 2000, 2005, and 2010). Here, the results show that all BRICS countries succeeded in becoming part of this network, whereby the Chinese collaboration activities focus on the US.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.7, S.1507-1513
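    The (annual) growth rates compared in entry 4 can be read as compound annual growth rates. A small sketch with invented counts of top-cited papers:

      def annual_growth_rate(start, end, years):
          """Compound annual growth rate between two observation points."""
          return (end / start) ** (1 / years) - 1

      # Hypothetical counts of top-10% papers in 1990 and 2010
      counts = {"China": (50, 4000), "Russia": (300, 320), "World top": (5000, 15000)}
      for country, (c1990, c2010) in counts.items():
          print(f"{country}: {annual_growth_rate(c1990, c2010, 20):.1%} per year")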
  5. Bornmann, L.; Thor, A.; Marx, W.; Schier, H.: The application of bibliometrics to research evaluation in the humanities and social sciences : an exploratory study using normalized Google Scholar data for the publications of a research institute (2016)
    Abstract
    In the humanities and social sciences, bibliometric methods for the assessment of research performance are (so far) less common. This study uses a concrete example in an attempt to evaluate a research institute from the area of social sciences and humanities with the help of data from Google Scholar (GS). In order to use GS for a bibliometric study, we developed procedures for the normalization of citation impact, building on the procedures of classical bibliometrics. In order to test the convergent validity of the normalized citation impact scores, we calculated normalized scores for a subset of the publications based on data from the Web of Science (WoS) and Scopus. Although the scores calculated with the help of GS and the WoS/Scopus are not identical for the different publication types (considered here), they are so similar that they result in the same assessment of the institute investigated in this study: for example, the institute's papers whose journals are covered in the WoS are cited at about an average rate (compared with the other papers in the journals).
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.11, S.2778-2789
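    The normalization in entry 5 compares a publication's citations with those of similar publications. A minimal sketch, assuming the reference set consists of publications of the same type and year; all counts are invented.

      def normalized_impact(citations, reference_set):
          """Citation count relative to the mean of comparable publications."""
          return citations / (sum(reference_set) / len(reference_set))

      # Hypothetical: a book chapter with 12 citations, compared against
      # Google-Scholar-style counts for similar chapters from the same year
      reference_set = [2, 5, 8, 10, 3, 14, 6]
      print(f"normalized score: {normalized_impact(12, reference_set):.2f}")
      # 1.0 would mean the chapter is cited at an average rate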
  6. Leydesdorff, L.; Bornmann, L.: The operationalization of "fields" as WoS subject categories (WCs) in evaluative bibliometrics : the cases of "library and information science" and "science & technology studies" (2016)
    Abstract
    Normalization of citation scores using reference sets based on Web of Science subject categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and only partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to support bibliometric normalization in evaluation practices, because of "indexer effects." Can compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
    Aid
    Web of Science
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.3, S.707-714
  7. Bornmann, L.: Complex tasks and simple solutions : the use of heuristics in the evaluation of research (2015)
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.8, S.1738-1739
  8. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Multilevel-statistical reformulation of citation-based university rankings : the Leiden ranking 2011/2012 (2013)
    Abstract
    Since the 1990s, with the heightened competition and the strong growth of the international higher education market, an increasing number of rankings have been created that measure the scientific performance of an institution on the basis of bibliometric data. The Leiden Ranking 2011/2012 (LR) was published early in 2012. Starting from Goldstein and Spiegelhalter's (1996) recommendations for conducting quantitative comparisons among institutions, in this study we undertook a reformulation of the LR by means of multilevel regression models. First, with our models we replicated the ranking results; second, the reanalysis of the LR data showed that only 5% of the PPtop10% total variation is attributable to differences between universities. Beyond that, about 80% of the variation between universities can be explained by differences among countries. If covariates are included in the model, the differences among most of the universities become meaningless. Our findings have implications for conducting university rankings in general and for the LR in particular. For example, with Goldstein-adjusted confidence intervals, it is possible to interpret the significance of differences among universities meaningfully: rank differences among universities should be interpreted as meaningful only if their confidence intervals do not overlap.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.8, S.1649-1658
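    The variance shares reported in entry 8 can be approximated without the full multilevel machinery. A simplified sketch that computes the between-country share of total variance directly (an intraclass-correlation-style quantity rather than the paper's multilevel regression models); the PPtop10% values are invented.

      from statistics import mean, pvariance

      # Hypothetical PPtop10% values for universities, grouped by country
      data = {
          "NL": [11.2, 12.5, 13.1],
          "DE": [9.8, 10.4, 11.0, 10.1],
          "US": [14.0, 15.2, 13.5],
      }

      values = [v for group in data.values() for v in group]
      grand_mean = mean(values)
      # Size-weighted squared deviations of the country means from the grand mean
      between = sum(
          len(g) * (mean(g) - grand_mean) ** 2 for g in data.values()
      ) / len(values)
      print(f"between-country share of variance: {between / pvariance(values):.0%}")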
  9. Bornmann, L.: On the function of university rankings (2014)
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.2, S.428-429
  10. Bornmann, L.; Bauer, J.; Haunschild, R.: Distribution of women and men among highly cited scientists (2015)
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2715-2716
  11. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016)
    Abstract
    Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader-count data, built from publications of the year 2012 in the Web of Science. Example networks are shown and explained. Readers can use our base maps to visualize other results with VOSviewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of using our base maps is that the user does not need to produce a network based on all data (e.g., from one year), but can instead collect the Mendeley data for a single institution (or journals, or topics) and match them with our already produced information. The generation of such large-scale networks is still a demanding task despite the available computing power and digital data. Therefore, it is very useful to have base maps and to create the network with the overlay technique.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.12, S.3064-3072
  12. Mutz, R.; Bornmann, L.; Daniel, H.-D.: Testing for the fairness and predictive validity of research funding decisions : a multilevel multiple imputation for missing data approach using ex-ante and ex-post peer evaluation data from the Austrian science fund (2015)
    Abstract
    It is essential for research funding organizations to ensure both the validity and fairness of the grant approval procedure. The ex-ante peer evaluation (EXANTE) of N = 8,496 grant applications submitted to the Austrian Science Fund from 1999 to 2009 was statistically analyzed. For 1,689 funded research projects an ex-post peer evaluation (EXPOST) was also available; for the rest of the grant applications a multilevel missing data imputation approach was used to consider verification bias for the first time in peer-review research. Without imputation, the predictive validity of EXANTE was low (r = .26) but underestimated due to verification bias, and with imputation it was r = .49. That is, the decision-making procedure is capable of selecting the best research proposals for funding. In the EXANTE there were several potential biases (e.g., gender). With respect to the EXPOST there was only one real bias (discipline-specific and year-specific differential prediction). The novelty of this contribution is, first, the combining of theoretical concepts of validity and fairness with a missing data imputation approach to correct for verification bias and, second, multilevel modeling to test peer review-based funding decisions for both validity and fairness in terms of potential and real biases.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.11, S.2321-2339
  13. Bornmann, L.; Leydesdorff, L.: Which cities produce more excellent papers than can be expected? : a new mapping approach, using Google Maps, based on statistical significance testing (2011)
    Abstract
    The methods presented in this paper allow for a statistical analysis revealing centers of excellence around the world using programs that are freely available. Based on Web of Science data (a fee-based database), field-specific excellence can be identified in cities where highly cited papers were published more frequently than can be expected. Compared to the mapping approaches published hitherto, our approach is more analytically oriented by allowing the assessment of an observed number of excellent papers for a city against the expected number. Top performers in output are cities in which authors are located who publish a statistically significant higher number of highly cited papers than can be expected for these cities. As sample data for physics, chemistry, and psychology show, these cities do not necessarily have a high output of highly cited papers.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.10, S.1954-1962
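    The test logic of entry 13 compares a city's observed number of highly cited papers with the expected number. A sketch assuming each paper independently has a 10% baseline chance of being a top-10% paper, so the observed count can be checked with an exact binomial tail probability; the counts are invented.

      from math import comb

      def binomial_tail(k, n, p):
          """P(X >= k) for X ~ Binomial(n, p)."""
          return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

      # Hypothetical city: 200 papers, 32 of them top-10% (expected: 20)
      p_value = binomial_tail(32, 200, 0.10)
      print(f"P(X >= 32) = {p_value:.4f}")  # small: more excellence than expected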
  14. Bornmann, L.; Marx, W.: The Anna Karenina principle : a way of thinking about success in science (2012)
    Abstract
    The first sentence of Leo Tolstoy's (1875-1877/2001) novel Anna Karenina is: "Happy families are all alike; every unhappy family is unhappy in its own way." Here, Tolstoy means that for a family to be happy, several key aspects must be present (e.g., good health of all family members, acceptable financial security, and mutual affection). If there is a deficiency in any one or more of these key aspects, the family will be unhappy. In this article, we introduce the Anna Karenina principle as a way of thinking about success in three central areas of (modern) science: (a) peer review of research grant proposals and manuscripts (money and journal space as scarce resources), (b) citation of publications (reception as a scarce resource), and (c) new scientific discoveries (recognition as a scarce resource). If resources are scarce at the highly competitive research front (journal space, funds, reception, and recognition), there can be success only when several key prerequisites for the allocation of the resources are fulfilled. If any one of these prerequisites is not fulfilled, the grant proposal, manuscript submission, published paper, or discovery will not be successful.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.10, S.2037-2051
  15. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019)
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities that are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10%, the proportion of the top-10% most highly cited papers assigned to a university, we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each ecosystem of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.5, S.509-525
  16. Egghe, L.; Bornmann, L.: Fallout and miss in journal peer review (2013)
    Abstract
    Purpose - The authors exploit the analogy between journal peer review and information retrieval in order to quantify some imperfections of journal peer review.
    Design/methodology/approach - The authors define fallout rate and missing rate in order to describe quantitatively the weak papers that were accepted and the strong papers that were missed, respectively. To assess the quality of manuscripts the authors use bibliometric measures.
    Findings - Fallout rate and missing rate are put in relation with the hitting rate and success rate. Conclusions are drawn on what fraction of weak papers will be accepted in order to have a certain fraction of strong accepted papers.
    Originality/value - The paper illustrates that these curves are new in peer review research when interpreted in the information retrieval terminology.
    Source
    Journal of documentation. 69(2013) no.3, S.411-416
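    A small sketch of the retrieval-style rates in entry 16, assuming "fallout" is the fraction of weak submissions that were accepted and "missing rate" the fraction of strong submissions that were rejected (paraphrasing the abstract); the counts are invented.

      # Hypothetical peer-review outcomes, with quality judged ex post
      # (e.g., bibliometrically): strong vs. weak, accepted vs. rejected
      strong_accepted, strong_rejected = 120, 30
      weak_accepted, weak_rejected = 40, 160

      fallout = weak_accepted / (weak_accepted + weak_rejected)        # weak papers let in
      missing = strong_rejected / (strong_accepted + strong_rejected)  # strong papers lost
      print(f"fallout: {fallout:.0%}, missing rate: {missing:.0%}")    # 20% and 20%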
  17. Ye, F.Y.; Bornmann, L.: "Smart girls" versus "sleeping beauties" in the sciences : the identification of instant and delayed recognition by using the citation angle (2018)
    Abstract
    In recent years, a number of studies have introduced methods for identifying papers with delayed recognition (so-called "sleeping beauties," SBs) or have presented single publications as cases of SBs. Most recently, Ke, Ferrara, Radicchi, and Flammini (2015, Proceedings of the National Academy of Sciences of the USA, 112(24), 7426-7431) proposed the so-called "beauty coefficient" (denoted as B) to quantify how much a given paper can be considered as a paper with delayed recognition. In this study, the new term smart girl (SG) is suggested to differentiate instant credit or "flashes in the pan" from SBs. Although SG and SB are qualitatively defined, the dynamic citation angle β is introduced in this study as a simple way of identifying SGs and SBs quantitatively, complementing the beauty coefficient B. The citation angles for all articles from 1980 (n = 166,870) in the natural sciences are calculated for identifying SGs and SBs and their extent. We reveal that about 3% of the articles are typical SGs and about 0.1% typical SBs. The potential advantages of the citation angle approach are explained.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.3, S.359-367
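    The citation angle β in entry 17 is defined geometrically on a paper's citation curve. As a deliberately simplified stand-in (not the paper's formula), the sketch below separates instant from delayed recognition by where the citation peak falls in the observation window; the citation series are invented.

      def peak_position(citations_per_year):
          """Relative position (0-1) of the citation peak in the window."""
          peak_year = citations_per_year.index(max(citations_per_year))
          return peak_year / (len(citations_per_year) - 1)

      # Hypothetical yearly citation counts since publication
      smart_girl = [15, 30, 22, 10, 5, 3, 2, 1, 1, 0]      # instant recognition
      sleeping_beauty = [0, 1, 0, 1, 2, 1, 3, 12, 25, 40]  # delayed recognition

      for name, series in [("smart girl", smart_girl), ("sleeping beauty", sleeping_beauty)]:
          print(f"{name}: peak at {peak_position(series):.0%} of the window")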
  18. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016)
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.12, S.3095-3100
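    Entry 18 contrasts whole-number and fractional counting of institutional addresses. A minimal sketch: each paper credits every contributing institution either fully, or with a 1/n share over its n institutions; the address lists are invented.

      from collections import defaultdict

      # Hypothetical institution address lists of highly cited papers
      papers = [
          ["Harvard", "Leiden"],
          ["Harvard", "Leiden", "Wolverhampton"],
          ["Leiden"],
      ]

      whole, fractional = defaultdict(int), defaultdict(float)
      for addresses in papers:
          for inst in addresses:
              whole[inst] += 1                        # whole-number counting
              fractional[inst] += 1 / len(addresses)  # fractional counting

      print(dict(whole))  # {'Harvard': 2, 'Leiden': 3, 'Wolverhampton': 1}
      print({k: round(v, 2) for k, v in fractional.items()})
      # {'Harvard': 0.83, 'Leiden': 1.83, 'Wolverhampton': 0.33}: rankings can differ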
  19. Leydesdorff, L.; Bornmann, L.: Mapping (USPTO) patent data using overlays to Google Maps (2012)
    Abstract
    A technique is developed using patent information available online (at the U.S. Patent and Trademark Office) for the generation of Google Maps. The overlays indicate both the quantity and the quality of patents at the city level. This information is relevant for research questions in technology analysis, innovation studies, and evolutionary economics, as well as economic geography. The resulting maps can also be relevant for technological innovation policies and research and development management, because the U.S. market can be considered the leading market for patenting and patent competition. In addition to the maps, the routines provide quantitative data about the patents for statistical analysis. The cities on the map are colored according to the results of significance tests. The overlays are explored for the Netherlands as a "national system of innovations" and further elaborated in two cases of emerging technologies: ribonucleic acid interference (RNAi) and nanotechnology.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1442-1458
  20. Bornmann, L.: Nature's top 100 revisited (2015)
    Content
    Reference: Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2714. Cf.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23554/abstract.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.10, S.2166