Search (28 results, page 1 of 2)

  • Active filter: author_ss:"Bornmann, L."
  1. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.04
    0.04119421 = product of:
      0.08238842 = sum of:
        0.08238842 = sum of:
          0.03989153 = weight(_text_:research in 656) [ClassicSimilarity], result of:
            0.03989153 = score(doc=656,freq=4.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.2674672 = fieldWeight in 656, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046875 = fieldNorm(doc=656)
          0.042496894 = weight(_text_:22 in 656) [ClassicSimilarity], result of:
            0.042496894 = score(doc=656,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.23214069 = fieldWeight in 656, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=656)
      0.5 = coord(1/2)
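    The indented block above is a Lucene "explain" trace for ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight x fieldWeight, the term contributions are summed, and the sum is multiplied by the coordination factor. As a minimal sketch, assuming Lucene's classic conventions tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1)) (variable names are illustrative), the printed score can be reproduced in Python:

      from math import log, sqrt

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + log(max_docs / (doc_freq + 1))

      def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = sqrt(freq)                        # tf = sqrt(termFreq)
          i = idf(doc_freq, max_docs)
          query_weight = i * query_norm          # queryWeight = idf * queryNorm
          field_weight = tf * i * field_norm     # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight

      # values printed in the trace for doc 656
      w_research = term_weight(4.0, 6931, 44218, 0.05227703, 0.046875)
      w_22 = term_weight(2.0, 3622, 44218, 0.05227703, 0.046875)
      print((w_research + w_22) * 0.5)           # coord(1/2) -> ~0.04119421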
    
    Abstract
    According to current research in bibliometrics, percentiles (or percentile rank classes) are the most suitable method for normalizing the citation counts of individual publications in terms of the subject area, the document type, and the publication year. Up to now, bibliometric research has concerned itself primarily with the calculation of percentiles. This study suggests how percentiles (and percentile rank classes) can be analyzed meaningfully for an evaluation study. Publication sets from four universities are compared with each other to provide sample data. These suggestions take into account, on the one hand, the distribution of percentiles over the publications in the sets (here: universities) and, on the other hand, concentrate on the range of publications with the highest citation impact, that is, the range that is usually of most interest in the evaluation of scientific performance.
    Date
    22. 3.2013 19:44:17
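    Since the abstract argues for percentile rank classes, a minimal sketch of how such ranks might be derived follows (hypothetical citation counts; one of several tie-handling conventions, not necessarily the paper's exact definition):

      import numpy as np

      def percentile_ranks(citations):
          # percentile rank of each paper within its reference set
          # (same subject area, document type, and publication year)
          c = np.asarray(citations, dtype=float)
          # share of papers cited at most as often as the paper at hand
          return np.array([(c <= x).mean() * 100.0 for x in c])

      ref_set = [0, 1, 1, 3, 5, 8, 13, 40]   # hypothetical reference set
      ranks = percentile_ranks(ref_set)
      top10 = ranks >= 90.0                  # "top-cited" percentile rank class
      print(ranks, top10)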
  2. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.04
    0.035352234 = product of:
      0.07070447 = sum of:
        0.07070447 = sum of:
          0.028207572 = weight(_text_:research in 4681) [ClassicSimilarity], result of:
            0.028207572 = score(doc=4681,freq=2.0), product of:
              0.1491455 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.05227703 = queryNorm
              0.18912788 = fieldWeight in 4681, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046875 = fieldNorm(doc=4681)
          0.042496894 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
            0.042496894 = score(doc=4681,freq=2.0), product of:
              0.18306525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05227703 = queryNorm
              0.23214069 = fieldWeight in 4681, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4681)
      0.5 = coord(1/2)
    
    Abstract
    A recent publication in Nature reports that public R&D funding is only weakly correlated with the citation impact of a nation's articles as measured by the field-weighted citation index (FWCI; defined by Scopus). On the basis of the supplementary data, we up-scaled the design using Web of Science data for the decade 2003-2013 and OECD funding data for the corresponding decade assuming a 2-year delay (2001-2011). Using negative binomial regression analysis, we found very small coefficients, but the effects of international collaboration are positive and statistically significant, whereas the effects of government funding are negative, an order of magnitude smaller, and statistically nonsignificant (in two of three analyses). In other words, international collaboration improves the impact of research articles, whereas more government funding tends to have a small adverse effect when comparing OECD countries.
    Date
    8. 1.2019 18:22:45
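    The abstract mentions negative binomial regression on citation counts. A self-contained sketch with synthetic data follows (names, values, and effect sizes are invented for illustration, not the authors' specification):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 200                                        # hypothetical country-year rows
      collab = rng.uniform(0.0, 1.0, n)              # share of international coauthorships
      funding = rng.uniform(0.0, 3.0, n)             # government R&D funding, % of GDP
      mu = np.exp(0.2 + 0.8 * collab - 0.05 * funding)
      y = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed citation counts

      X = sm.add_constant(np.column_stack([collab, funding]))
      fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
      print(fit.params)   # positive collaboration effect, small negative funding effect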
  3. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.02
    0.021248447 = product of:
      0.042496894 = sum of:
        0.042496894 = product of:
          0.08499379 = sum of:
            0.08499379 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.08499379 = score(doc=1239,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 3.2014 19:13:22
  4. Bornmann, L.: What is societal impact of research and how can it be assessed? : a literature survey (2013) 0.02
    0.021155678 = product of:
      0.042311355 = sum of:
        0.042311355 = product of:
          0.08462271 = sum of:
            0.08462271 = weight(_text_:research in 606) [ClassicSimilarity], result of:
              0.08462271 = score(doc=606,freq=18.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.5673836 = fieldWeight in 606, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=606)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Since the 1990s, the scope of research evaluations has become broader as the societal products (outputs), societal use (societal references), and societal benefits (changes in society) of research come into scope. Society can reap the benefits of successful research studies only if the results are converted into marketable and consumable products (e.g., medicaments, diagnostic tools, machines, and devices) or services. A series of different names has been introduced to refer to the societal impact of research: third-stream activities, societal benefits, societal quality, usefulness, public values, knowledge transfer, and societal relevance. What most of these names are concerned with is the assessment of social, cultural, environmental, and economic returns (impact and effects) from results (research output) or products (research outcome) of publicly funded research. This review presents existing research on, and practices employed in, the assessment of societal impact in the form of a literature survey. The objective is for this review to serve as a basis for the development of robust and reliable methods of societal impact measurement.
  5. Bornmann, L.; Daniel, H.-D.: Multiple publication on a single research study: does it pay? : The influence of number of research articles on total citation counts in biomedicine (2007) 0.02
    0.018583369 = product of:
      0.037166737 = sum of:
        0.037166737 = product of:
          0.074333474 = sum of:
            0.074333474 = weight(_text_:research in 444) [ClassicSimilarity], result of:
              0.074333474 = score(doc=444,freq=20.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.4983957 = fieldWeight in 444, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=444)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Scientists may seek to report a single definable body of research in more than one publication, that is, in repeated reports of the same work or in fractional reports, in order to disseminate their research as widely as possible in the scientific community. Up to now, however, it has not been examined whether this strategy of "multiple publication" in fact leads to greater reception of the research. In the present study, we investigate the influence of number of articles reporting the results of a single study on reception in the scientific community (total citation counts of an article on a single study). Our data set consists of 96 applicants for a research fellowship from the Boehringer Ingelheim Fonds (BIF), an international foundation for the promotion of basic research in biomedicine. The applicants reported to us all articles that they had published within the framework of their doctoral research projects. On this single project, the applicants had published from 1 to 16 articles (M = 4; Mdn = 3). The results of a regression model with an interaction term show that the practice of multiple publication of research study results does in fact lead to greater reception of the research (higher total citation counts) in the scientific community. However, reception is dependent upon length of article: the longer the article, the more total citation counts increase with the number of articles. Thus, it pays for scientists to practice multiple publication of study results in the form of sizable reports.
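    A compact sketch of a regression model with an interaction term of the kind described (synthetic applicant-level data; the coefficients are hypothetical, not the study's):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(42)
      n = 96                                     # applicants, as in the abstract
      n_articles = rng.integers(1, 17, n)        # 1 to 16 articles per project
      pages = rng.integers(2, 31, n)             # hypothetical mean article length
      citations = rng.poisson(2 + 0.3 * n_articles * (pages / 10.0))

      df = pd.DataFrame({"citations": citations,
                         "n_articles": n_articles,
                         "pages": pages})
      fit = smf.ols("citations ~ n_articles * pages", data=df).fit()
      print(fit.params)   # the n_articles:pages term carries the interaction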
  6. Marx, W.; Bornmann, L.; Barth, A.; Leydesdorff, L.: Detecting the historical roots of research fields by reference publication year spectroscopy (RPYS) (2014) 0.02
    0.018396595 = product of:
      0.03679319 = sum of:
        0.03679319 = product of:
          0.07358638 = sum of:
            0.07358638 = weight(_text_:research in 1238) [ClassicSimilarity], result of:
              0.07358638 = score(doc=1238,freq=10.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.49338657 = fieldWeight in 1238, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1238)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We introduce the quantitative method named "Reference Publication Year Spectroscopy" (RPYS). With this method one can determine the historical roots of research fields and quantify their impact on current research. RPYS is based on the analysis of the frequency with which references are cited in the publications of a specific research field in terms of the publication years of these cited references. The origins show up in the form of more or less pronounced peaks mostly caused by individual publications that are cited particularly frequently. In this study, we use research on graphene and on solar cells to illustrate how RPYS functions, and what results it can deliver.
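    A minimal sketch of the RPYS counting step follows (input years are hypothetical; the peak detection shown, deviation from a five-year median, is one common variant):

      from collections import Counter
      from statistics import median

      def rpys(cited_ref_years):
          # cited references per reference publication year, and each
          # year's deviation from the median of its 5-year window
          counts = Counter(cited_ref_years)
          spectrum = {}
          for y in range(min(counts), max(counts) + 1):
              window = [counts.get(y + k, 0) for k in (-2, -1, 0, 1, 2)]
              spectrum[y] = counts.get(y, 0) - median(window)
          return spectrum   # pronounced positive deviations mark candidate roots

      years = [1905, 1905, 1905, 1931, 1958, 1958, 1986, 1986, 1986, 1986]
      print(rpys(years))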
  7. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    0.0141656315 = product of:
      0.028331263 = sum of:
        0.028331263 = product of:
          0.056662526 = sum of:
            0.056662526 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.056662526 = score(doc=1431,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2014 17:05:18
  8. Bornmann, L.; Leydesdorff, L.: Statistical tests and research assessments : a comment on Schneider (2012) (2013) 0.01
    0.014103786 = product of:
      0.028207572 = sum of:
        0.028207572 = product of:
          0.056415144 = sum of:
            0.056415144 = weight(_text_:research in 752) [ClassicSimilarity], result of:
              0.056415144 = score(doc=752,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.37825575 = fieldWeight in 752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.09375 = fieldNorm(doc=752)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Mutz, R.; Bornmann, L.; Daniel, H.-D.: Testing for the fairness and predictive validity of research funding decisions : a multilevel multiple imputation for missing data approach using ex-ante and ex-post peer evaluation data from the Austrian science fund (2015) 0.01
    0.013140426 = product of:
      0.026280852 = sum of:
        0.026280852 = product of:
          0.052561704 = sum of:
            0.052561704 = weight(_text_:research in 2270) [ClassicSimilarity], result of:
              0.052561704 = score(doc=2270,freq=10.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.352419 = fieldWeight in 2270, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2270)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    It is essential for research funding organizations to ensure both the validity and fairness of the grant approval procedure. The ex-ante peer evaluation (EXANTE) of N = 8,496 grant applications submitted to the Austrian Science Fund from 1999 to 2009 was statistically analyzed. For 1,689 funded research projects an ex-post peer evaluation (EXPOST) was also available; for the rest of the grant applications a multilevel missing data imputation approach was used to consider verification bias for the first time in peer-review research. Without imputation, the predictive validity of EXANTE was low (r = .26) but underestimated due to verification bias, and with imputation it was r = .49. That is, the decision-making procedure is capable of selecting the best research proposals for funding. In the EXANTE there were several potential biases (e.g., gender). With respect to the EXPOST there was only one real bias (discipline-specific and year-specific differential prediction). The novelty of this contribution is, first, the combining of theoretical concepts of validity and fairness with a missing data imputation approach to correct for verification bias and, second, multilevel modeling to test peer review-based funding decisions for both validity and fairness in terms of potential and real biases.
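    To make the verification-bias point concrete, a deliberately simplified, single-level sketch follows (sklearn's IterativeImputer stands in for the paper's multilevel multiple imputation; all data are synthetic):

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(1)
      exante = rng.normal(size=500)                  # ex-ante review score
      expost = 0.5 * exante + rng.normal(size=500)   # ex-post evaluation
      funded = exante > 0                            # only funded projects get EXPOST
      expost_obs = np.where(funded, expost, np.nan)

      r_naive = np.corrcoef(exante[funded], expost[funded])[0, 1]
      imputer = IterativeImputer(sample_posterior=True, random_state=0)
      X = imputer.fit_transform(np.column_stack([exante, expost_obs]))
      r_imputed = np.corrcoef(X[:, 0], X[:, 1])[0, 1]
      print(r_naive, r_imputed)   # truncation attenuates r; imputation recovers part of it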
  10. Bornmann, L.: Complex tasks and simple solutions : the use of heuristics in the evaluation of research (2015) 0.01
    0.011753156 = product of:
      0.023506312 = sum of:
        0.023506312 = product of:
          0.047012623 = sum of:
            0.047012623 = weight(_text_:research in 8911) [ClassicSimilarity], result of:
              0.047012623 = score(doc=8911,freq=2.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31521314 = fieldWeight in 8911, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.078125 = fieldNorm(doc=8911)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Bornmann, L.; Thor, A.; Marx, W.; Schier, H.: The application of bibliometrics to research evaluation in the humanities and social sciences : an exploratory study using normalized Google Scholar data for the publications of a research institute (2016) 0.01
    0.011753156 = product of:
      0.023506312 = sum of:
        0.023506312 = product of:
          0.047012623 = sum of:
            0.047012623 = weight(_text_:research in 3160) [ClassicSimilarity], result of:
              0.047012623 = score(doc=3160,freq=8.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31521314 = fieldWeight in 3160, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3160)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In the humanities and social sciences, bibliometric methods for the assessment of research performance are (so far) less common. This study uses a concrete example in an attempt to evaluate a research institute from the area of social sciences and humanities with the help of data from Google Scholar (GS). In order to use GS for a bibliometric study, we developed procedures for the normalization of citation impact, building on the procedures of classical bibliometrics. In order to test the convergent validity of the normalized citation impact scores, we calculated normalized scores for a subset of the publications based on data from the Web of Science (WoS) and Scopus. Even if scores calculated with the help of GS and the WoS/Scopus are not identical for the different publication types (considered here), they are so similar that they result in the same assessment of the institute investigated in this study: For example, the institute's papers whose journals are covered in the WoS are cited at about an average rate (compared with the other papers in the journals).
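    In the spirit of the normalization described, observed citations are divided by the expected value of a reference set (a sketch of classical normalization, not the authors' exact GS procedure):

      def normalized_impact(citations, reference_set):
          # observed citations divided by the mean citation rate of the
          # reference set (same publication type and publication year)
          expected = sum(reference_set) / len(reference_set)
          return citations / expected if expected else float("nan")

      # hypothetical: a paper with 12 citations in a set averaging 6
      print(normalized_impact(12, [2, 4, 6, 8, 10]))   # 2.0 = twice the average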
  12. Bornmann, L.; Marx, W.: The wisdom of citing scientists (2014) 0.01
    0.01163503 = product of:
      0.02327006 = sum of:
        0.02327006 = product of:
          0.04654012 = sum of:
            0.04654012 = weight(_text_:research in 1293) [ClassicSimilarity], result of:
              0.04654012 = score(doc=1293,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.31204507 = fieldWeight in 1293, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1293)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This Brief Communication discusses the benefits of citation analysis in research evaluation based on Galton's "Wisdom of Crowds" (1907). Citations are based on the assessment of many, which is why they can be considered to have some credibility. However, we show that citations are incomplete assessments and that one cannot assume that a high number of citations correlates with a high level of usefulness. Only when one knows that a rarely cited paper has been widely read is it possible to say, strictly speaking, that it was obviously of little use for further research. Using a comparison with "like" data, we try to show that cited-reference analysis allows for a more meaningful analysis of bibliometric data than times-cited analysis.
  13. Leydesdorff, L.; Bornmann, L.: Mapping (USPTO) patent data using overlays to Google Maps (2012) 0.01
    0.009972882 = product of:
      0.019945765 = sum of:
        0.019945765 = product of:
          0.03989153 = sum of:
            0.03989153 = weight(_text_:research in 288) [ClassicSimilarity], result of:
              0.03989153 = score(doc=288,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2674672 = fieldWeight in 288, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=288)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A technique is developed using patent information available online (at the U.S. Patent and Trademark Office) for the generation of Google Maps. The overlays indicate both the quantity and the quality of patents at the city level. This information is relevant for research questions in technology analysis, innovation studies, and evolutionary economics, as well as economic geography. The resulting maps can also be relevant for technological innovation policies and research and development management, because the U.S. market can be considered the leading market for patenting and patent competition. In addition to the maps, the routines provide quantitative data about the patents for statistical analysis. The cities on the map are colored according to the results of significance tests. The overlays are explored for the Netherlands as a "national system of innovations" and further elaborated in two cases of emerging technologies: ribonucleic acid interference (RNAi) and nanotechnology.
  14. Bornmann, L.; Moya Anegón, F. de; Mutz, R.: Do universities or research institutions with a specific subject profile have an advantage or a disadvantage in institutional rankings? (2013) 0.01
    0.009972882 = product of:
      0.019945765 = sum of:
        0.019945765 = product of:
          0.03989153 = sum of:
            0.03989153 = weight(_text_:research in 1109) [ClassicSimilarity], result of:
              0.03989153 = score(doc=1109,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.2674672 = fieldWeight in 1109, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1109)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Using data compiled for the SCImago Institutions Ranking, we examine whether the type of subject area an institution (university or research-focused institution) belongs to (in terms of the fields researched) influences its ranking position. We used latent class analysis to categorize institutions based on their publications in certain subject areas. Even though this categorization does not relate directly to scientific performance, our results show that it exercises an important influence on the outcome of a performance measurement: certain subject area types of institutions have an advantage in the ranking positions when compared with others. This advantage manifests itself not only when performance is measured with an indicator that is not field-normalized but also for indicators that are field-normalized.
  15. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.01
    0.00885352 = product of:
      0.01770704 = sum of:
        0.01770704 = product of:
          0.03541408 = sum of:
            0.03541408 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
              0.03541408 = score(doc=4186,freq=2.0), product of:
                0.18306525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05227703 = queryNorm
                0.19345059 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4186)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2011 12:51:07
  16. Bornmann, L.; Daniel, H.D.: What do citation counts measure? : a review of studies on citing behavior (2008) 0.01
    0.008310735 = product of:
      0.01662147 = sum of:
        0.01662147 = product of:
          0.03324294 = sum of:
            0.03324294 = weight(_text_:research in 1729) [ClassicSimilarity], result of:
              0.03324294 = score(doc=1729,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.22288933 = fieldWeight in 1729, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1729)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to present a narrative review of studies on the citing behavior of scientists, covering mainly research published in the last 15 years. Based on the results of these studies, the paper seeks to answer the question of the extent to which scientists are motivated to cite a publication not only to acknowledge intellectual and cognitive influences of scientific peers, but also for other, possibly non-scientific, reasons.
    Design/methodology/approach - The review covers research published from the early 1960s up to mid-2005 (approximately 30 studies on citing behavior, reporting results in about 40 publications).
    Findings - The general tendency of the results of the empirical studies makes it clear that citing behavior is not motivated solely by the wish to acknowledge intellectual and cognitive influences of colleague scientists, since the individual studies reveal also other, in part non-scientific, factors that play a part in the decision to cite. However, the results of the studies must also be deemed scarcely reliable: the studies vary widely in design, and their results can hardly be replicated. Many of the studies have methodological weaknesses. Furthermore, there is evidence that the different motivations of citers are "not so different or 'randomly given' to such an extent that the phenomenon of citation would lose its role as a reliable measure of impact".
    Originality/value - Given the increasing importance of evaluative bibliometrics in the world of scholarship, the question "What do citation counts measure?" is a particularly relevant and topical issue.
  17. Bornmann, L.; Schier, H.; Marx, W.; Daniel, H.-D.: Is interactive open access publishing able to identify high-impact submissions? : a study on the predictive validity of Atmospheric Chemistry and Physics by using percentile rank classes (2011) 0.01
    0.008310735 = product of:
      0.01662147 = sum of:
        0.01662147 = product of:
          0.03324294 = sum of:
            0.03324294 = weight(_text_:research in 4132) [ClassicSimilarity], result of:
              0.03324294 = score(doc=4132,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.22288933 = fieldWeight in 4132, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4132)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In a comprehensive research project, we investigated the predictive validity of selection decisions and reviewers' ratings at the open access journal Atmospheric Chemistry and Physics (ACP). ACP is a high-impact journal publishing papers on the Earth's atmosphere and the underlying chemical and physical processes. Scientific journals have to deal with the following question concerning predictive validity: Are the "best" scientific works in fact selected from the manuscripts submitted? In this study we examined whether selecting the "best" manuscripts means selecting papers that after publication show top citation performance as compared to other papers in this research area. First, we appraised the citation impact of later published manuscripts based on the percentile citedness rank classes of the population distribution (scaling in a specific subfield). Second, we analyzed the association between the decisions (n = 677 manuscripts that were accepted, or rejected but published elsewhere) or ratings (reviewers' ratings for n = 315 manuscripts), respectively, and the citation impact classes of the manuscripts. The results confirm the predictive validity of the ACP peer review system.
  18. Bornmann, L.; Marx, W.: The Anna Karenina principle : a way of thinking about success in science (2012) 0.01
    0.008310735 = product of:
      0.01662147 = sum of:
        0.01662147 = product of:
          0.03324294 = sum of:
            0.03324294 = weight(_text_:research in 449) [ClassicSimilarity], result of:
              0.03324294 = score(doc=449,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.22288933 = fieldWeight in 449, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=449)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The first sentence of Leo Tolstoy's (1875-1877/2001) novel Anna Karenina is: "Happy families are all alike; every unhappy family is unhappy in its own way." Here, Tolstoy means that for a family to be happy, several key aspects must be given (e.g., good health of all family members, acceptable financial security, and mutual affection). If there is a deficiency in any one or more of these key aspects, the family will be unhappy. In this article, we introduce the Anna Karenina principle as a way of thinking about success in science in three central areas in (modern) science: (a) peer review of research grant proposals and manuscripts (money and journal space as scarce resources), (b) citation of publications (reception as a scarce resource), and (c) new scientific discoveries (recognition as a scarce resource). If resources are scarce at the highly competitive research front (journal space, funds, reception, and recognition), there can be success only when several key prerequisites for the allocation of the resources are fulfilled. If any one of these prerequisites is not fulfilled, the grant proposal, manuscript submission, the published paper, or the discovery will not be successful.
  19. Bornmann, L.; Wagner, C.; Leydesdorff, L.: BRICS countries and scientific excellence : a bibliometric analysis of most frequently cited papers (2015) 0.01
    0.008310735 = product of:
      0.01662147 = sum of:
        0.01662147 = product of:
          0.03324294 = sum of:
            0.03324294 = weight(_text_:research in 2047) [ClassicSimilarity], result of:
              0.03324294 = score(doc=2047,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.22288933 = fieldWeight in 2047, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2047)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The BRICS countries (Brazil, Russia, India, China, and South Africa) are notable for their increasing participation in science and technology. The governments of these countries have been boosting their investments in research and development to become part of the group of nations doing research at a world-class level. This study investigates the development of the BRICS countries in the domain of top-cited papers (top 10% and 1% most frequently cited papers) between 1990 and 2010. To assess the extent to which these countries have become important players at the top level, we compare the BRICS countries with the top-performing countries worldwide. As the analyses of the (annual) growth rates show, with the exception of Russia, the BRICS countries have increased their output in terms of most frequently cited papers at a higher rate than the top-cited countries worldwide. By way of additional analysis, we generate coauthorship networks among authors of highly cited papers for 4 time points to view changes in BRICS participation (1995, 2000, 2005, and 2010). Here, the results show that all BRICS countries succeeded in becoming part of this network, whereby the Chinese collaboration activities focus on the US.
  20. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019) 0.01
    0.008310735 = product of:
      0.01662147 = sum of:
        0.01662147 = product of:
          0.03324294 = sum of:
            0.03324294 = weight(_text_:research in 5225) [ClassicSimilarity], result of:
              0.03324294 = score(doc=5225,freq=4.0), product of:
                0.1491455 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.05227703 = queryNorm
                0.22288933 = fieldWeight in 5225, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5225)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10%, the proportion of the top-10% most highly cited papers assigned to a university, we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each eco-system of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.
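    A minimal sketch of the PPtop-10% indicator with a bootstrap stability interval follows (one plausible reading of the overlapping-intervals approach; citation counts are hypothetical). Universities whose intervals overlap would be treated as one homogeneous group:

      import numpy as np

      def pp_top10(citations, threshold):
          # proportion of papers at or above the global top-10% threshold
          return (np.asarray(citations) >= threshold).mean()

      def stability_interval(citations, threshold, n_boot=10000, seed=0):
          rng = np.random.default_rng(seed)
          c = np.asarray(citations)
          boots = [pp_top10(rng.choice(c, c.size, replace=True), threshold)
                   for _ in range(n_boot)]
          return np.percentile(boots, [2.5, 97.5])

      uni = np.r_[np.zeros(80), np.full(20, 50.0)]   # 20 of 100 papers above threshold
      print(pp_top10(uni, 30), stability_interval(uni, 30))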