Search (29 results, page 1 of 2)

  • author_ss:"Bornmann, L."
  1. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.06
    0.056699496 = product of:
      0.08504924 = sum of:
        0.057559717 = weight(_text_:based in 1431) [ClassicSimilarity], result of:
          0.057559717 = score(doc=1431,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.37662423 = fieldWeight in 1431, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0625 = fieldNorm(doc=1431)
        0.027489524 = product of:
          0.05497905 = sum of:
            0.05497905 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.05497905 = score(doc=1431,freq=2.0), product of:
                0.17762627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050723847 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Properties of a percentile-based rating scale needed in bibliometrics are formulated. Based on these properties, P100 was recently introduced as a new citation-rank approach (Bornmann, Leydesdorff, & Wang, 2013). In this paper, we conceptualize P100 and propose an improvement which we call P100'. Advantages and disadvantages of citation-rank indicators are noted.
    Date
    22. 8.2014 17:05:18
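The score breakdowns attached to each hit are Lucene "explain" trees using the ClassicSimilarity (TF-IDF) model: each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (sqrt(tf) x idf x fieldNorm), and the clause sums are scaled by coord factors when only part of the query matches. A minimal Python sketch (not part of the search system itself) that reproduces the arithmetic of hit 1, assuming the standard ClassicSimilarity definitions:

```python
import math

def classic_similarity_clause(freq, doc_freq, max_docs, query_norm, field_norm):
    """Score of one term clause, mirroring Lucene's ClassicSimilarity explain tree."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # e.g. ~3.0130 for docFreq=5906, maxDocs=44218
    query_weight = idf * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.050723847  # taken from the explain output above

# Clause for _text_:based (freq=4, docFreq=5906) in doc 1431
based = classic_similarity_clause(4.0, 5906, 44218, QUERY_NORM, 0.0625)

# Clause for _text_:22 (freq=2, docFreq=3622), scaled by coord(1/2)
twenty_two = classic_similarity_clause(2.0, 3622, 44218, QUERY_NORM, 0.0625) * 0.5

# Top-level score: clause sum times coord(2/3), since 2 of 3 query terms matched
score = (based + twenty_two) * (2.0 / 3.0)
print(score)  # ~0.0567, matching hit 1's displayed score up to float rounding
```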
  2. Bornmann, L.; Leydesdorff, L.: Which cities produce more excellent papers than can be expected? : a new mapping approach, using Google Maps, based on statistical significance testing (2011) 0.02
    0.017623993 = product of:
      0.052871976 = sum of:
        0.052871976 = weight(_text_:based in 4767) [ClassicSimilarity], result of:
          0.052871976 = score(doc=4767,freq=6.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.34595144 = fieldWeight in 4767, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=4767)
      0.33333334 = coord(1/3)
    
    Abstract
    The methods presented in this paper allow for a statistical analysis revealing centers of excellence around the world using programs that are freely available. Based on Web of Science data (a fee-based database), field-specific excellence can be identified in cities where highly cited papers were published more frequently than can be expected. Compared to the mapping approaches published hitherto, our approach is more analytically oriented by allowing the assessment of an observed number of excellent papers for a city against the expected number. Top performers in output are cities in which authors are located who publish a statistically significantly higher number of highly cited papers than can be expected for these cities. As sample data for physics, chemistry, and psychology show, these cities do not necessarily have a high output of highly cited papers.
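The abstract above tests an observed number of excellent papers for a city against the expected number. The paper's exact procedure is not given here; purely as a hedged illustration (not necessarily the authors' method, and with all figures hypothetical), a one-sided binomial test against a field's top-10% baseline could look like this:

```python
from math import comb

def excess_excellence_p_value(observed_top, total_papers, baseline=0.10):
    """One-sided binomial tail: P(X >= observed_top) if each paper independently
    has a `baseline` chance (here 10%) of becoming a highly cited paper."""
    return sum(
        comb(total_papers, k) * baseline**k * (1 - baseline)**(total_papers - k)
        for k in range(observed_top, total_papers + 1)
    )

# Hypothetical city: 200 papers, 32 of them among the top 10% of their fields
expected = 0.10 * 200                   # 20 excellent papers expected by chance
p = excess_excellence_p_value(32, 200)  # small p-value (well below 0.05)
print(expected, p)
```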
  3. Leydesdorff, L.; Bornmann, L.: The operationalization of "fields" as WoS subject categories (WCs) in evaluative bibliometrics : the cases of "library and information science" and "science & technology studies" (2016) 0.02
    0.017623993 = product of:
      0.052871976 = sum of:
        0.052871976 = weight(_text_:based in 2779) [ClassicSimilarity], result of:
          0.052871976 = score(doc=2779,freq=6.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.34595144 = fieldWeight in 2779, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2779)
      0.33333334 = coord(1/3)
    
    Abstract
    Normalization of citation scores using reference sets based on Web of Science subject categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to carry bibliometric normalization in evaluation practices because of "indexer effects." Can the compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
  4. Bornmann, L.; Marx, W.: ¬The wisdom of citing scientists (2014) 0.02
    0.016788252 = product of:
      0.050364755 = sum of:
        0.050364755 = weight(_text_:based in 1293) [ClassicSimilarity], result of:
          0.050364755 = score(doc=1293,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.3295462 = fieldWeight in 1293, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1293)
      0.33333334 = coord(1/3)
    
    Abstract
    This Brief Communication discusses the benefits of citation analysis in research evaluation based on Galton's "Wisdom of Crowds" (1907). Citations are based on the assessments of many, which is why they can be considered to have some credibility. However, we show that citations are incomplete assessments and that one cannot assume that a high number of citations correlates with a high level of usefulness. Only when one knows that a rarely cited paper has been widely read is it possible to say, strictly speaking, that it was obviously of little use for further research. Using a comparison with "like" data, we try to determine that cited reference analysis allows for a more meaningful analysis of bibliometric data than times-cited analysis.
  5. Bornmann, L.; Moya Anegón, F.de: What proportion of excellent papers makes an institution one of the best worldwide? : Specifying thresholds for the interpretation of the results of the SCImago Institutions Ranking and the Leiden Ranking (2014) 0.01
    0.014686662 = product of:
      0.044059984 = sum of:
        0.044059984 = weight(_text_:based in 1235) [ClassicSimilarity], result of:
          0.044059984 = score(doc=1235,freq=6.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.28829288 = fieldWeight in 1235, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1235)
      0.33333334 = coord(1/3)
    
    Abstract
    University rankings generally present users with the problem of placing the results given for an institution in context. Only a comparison with the performance of all other institutions makes it possible to say exactly where an institution stands. In order to interpret the results of the SCImago Institutions Ranking (based on Scopus data) and the Leiden Ranking (based on Web of Science data), in this study we offer thresholds with which it is possible to assess whether an institution belongs to the top 1%, top 5%, top 10%, top 25%, or top 50% of institutions in the world. The thresholds are based on the excellence rate or PPtop 10%. Both indicators measure the proportion of an institution's publications which belong to the 10% most frequently cited publications and are the most important indicators for measuring institutional impact. For example, while an institution must achieve a value of 24.63% in the Leiden Ranking 2013 to be considered one of the top 1% of institutions worldwide, the SCImago Institutions Ranking requires 30.2%.
  6. Bornmann, L.: How much does the expected number of citations for a publication change if it contains the address of a specific scientific institute? : a new approach for the analysis of citation data on the institutional level based on regression models (2016) 0.01
    0.014686662 = product of:
      0.044059984 = sum of:
        0.044059984 = weight(_text_:based in 3095) [ClassicSimilarity], result of:
          0.044059984 = score(doc=3095,freq=6.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.28829288 = fieldWeight in 3095, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3095)
      0.33333334 = coord(1/3)
    
    Abstract
    Citation data for institutes are generally provided as numbers of citations or as relative citation rates (as, for example, in the Leiden Ranking). These numbers can then be compared between the institutes. This study aims to present a new approach for the evaluation of citation data at the institutional level, based on regression models. As example data, the study includes all articles and reviews from the Web of Science for the publication year 2003 (n = 886,416 papers). The study is based on an in-house database of the Max Planck Society. The study investigates how much the expected number of citations for a publication changes if it contains the address of an institute. The calculation of the expected values allows, on the one hand, investigating how the citation impact of the papers of an institute appears in comparison with the total of all papers. On the other hand, the expected values for several institutes can be compared with one another or with a set of randomly selected publications. Besides the institutes, the regression models include factors which can be assumed to have a general influence on citation counts (e.g., the number of authors).
  7. Bornmann, L.: Is collaboration among scientists related to the citation impact of papers because their quality increases with collaboration? : an analysis based on data from F1000Prime and normalized citation scores (2017) 0.01
    0.014686662 = product of:
      0.044059984 = sum of:
        0.044059984 = weight(_text_:based in 3539) [ClassicSimilarity], result of:
          0.044059984 = score(doc=3539,freq=6.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.28829288 = fieldWeight in 3539, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3539)
      0.33333334 = coord(1/3)
    
    Abstract
    In recent years, the relationship between collaboration among scientists and the citation impact of papers has been frequently investigated. Most of the studies show that the two variables are closely related: An increasing collaboration activity (measured in terms of number of authors, number of affiliations, and number of countries) is associated with an increased citation impact. However, it is not clear whether the increased citation impact is based on the higher quality of papers that profit from more than one scientist giving expert input or on other (citation-specific) factors. Thus, the current study addresses this question by using two comprehensive data sets with publications (in the biomedical area) including quality assessments by experts (F1000Prime member scores) and citation data for the publications. The study is based on more than 15,000 papers. Robust regression models are used to investigate the relationship between number of authors, number of affiliations, and number of countries, respectively, and citation impact, controlling for the papers' quality (measured by F1000Prime expert ratings). The results point out that the effect of collaboration activities on impact is largely independent of the papers' quality. The citation advantage is apparently not quality related; citation-specific factors (e.g., self-citations) seem to be important here.
  8. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019) 0.01
    0.014686662 = product of:
      0.044059984 = sum of:
        0.044059984 = weight(_text_:based in 5225) [ClassicSimilarity], result of:
          0.044059984 = score(doc=5225,freq=6.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.28829288 = fieldWeight in 5225, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5225)
      0.33333334 = coord(1/3)
    
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10%, the proportion of the top-10% most-highly-cited papers assigned to a university, we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each ecosystem of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.
  9. Leydesdorff, L.; Bornmann, L.; Mutz, R.; Opthof, T.: Turning the tables on citation analysis one more time : principles for comparing sets of documents (2011) 0.01
    0.01438993 = product of:
      0.04316979 = sum of:
        0.04316979 = weight(_text_:based in 4485) [ClassicSimilarity], result of:
          0.04316979 = score(doc=4485,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.28246817 = fieldWeight in 4485, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=4485)
      0.33333334 = coord(1/3)
    
    Abstract
    We submit newly developed citation impact indicators based not on arithmetic averages of citations but on percentile ranks. Citation distributions are, as a rule, highly skewed and should not be arithmetically averaged. With percentile ranks, the citation score of each paper is rated in terms of its percentile in the citation distribution. The percentile ranks approach allows for the formulation of a more abstract indicator scheme that can be used to organize and/or schematize different impact indicators according to three degrees of freedom: the selection of the reference sets, the evaluation criteria, and the choice of whether or not to define the publication sets as independent. Bibliometric data of seven principal investigators (PIs) of the Academic Medical Center of the University of Amsterdam are used as an exemplary dataset. We demonstrate that the proposed family of indicators [R(6), R(100), R(6, k), R(100, k)] is an improvement on averages-based indicators because one can account for the shape of the distributions of citations over papers.
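The percentile-rank idea sketched in this abstract, rating each paper by its position in the citation distribution instead of averaging citation counts, can be illustrated in a few lines. This is only an illustration; the R(6)/R(100) indicator family proposed in the paper involves further choices (reference sets, rank classes, tie handling) that are not reproduced here:

```python
from bisect import bisect_right

def percentile_ranks(citations):
    """Percentile rank of each paper's citation count within its reference set."""
    ordered = sorted(citations)
    n = len(ordered)
    # share of papers cited at most as often as this one, in percent
    return [100.0 * bisect_right(ordered, c) / n for c in citations]

# Skewed toy distribution: the arithmetic mean (9.8) is dominated by one paper,
# while percentile ranks place every paper on a comparable 0-100 scale.
print(percentile_ranks([0, 2, 2, 5, 40]))  # [20.0, 60.0, 60.0, 80.0, 100.0]
```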
  10. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016) 0.01
    0.01438993 = product of:
      0.04316979 = sum of:
        0.04316979 = weight(_text_:based in 3230) [ClassicSimilarity], result of:
          0.04316979 = score(doc=3230,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.28246817 = fieldWeight in 3230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=3230)
      0.33333334 = coord(1/3)
    
    Abstract
    Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader count data using the publication year 2012 from the Web of Science data. Example networks are shown and explained. The reader can use our base maps to visualize other results with the VOSViewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of using our base maps is that the user does not need to produce a network based on all data (e.g., from one year), but can collect the Mendeley data for a single institution (or for journals or topics) and match them with our already produced information. Generation of such large-scale networks is still a demanding task despite the available computer power and digital data availability. Therefore, it is very useful to have base maps and create the network with the overlay technique.
  11. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.01
    0.013744762 = product of:
      0.041234285 = sum of:
        0.041234285 = product of:
          0.08246857 = sum of:
            0.08246857 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.08246857 = score(doc=1239,freq=2.0), product of:
                0.17762627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050723847 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    18. 3.2014 19:13:22
  12. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Multilevel-statistical reformulation of citation-based university rankings : the Leiden ranking 2011/2012 (2013) 0.01
    0.011991608 = product of:
      0.035974823 = sum of:
        0.035974823 = weight(_text_:based in 1007) [ClassicSimilarity], result of:
          0.035974823 = score(doc=1007,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23539014 = fieldWeight in 1007, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1007)
      0.33333334 = coord(1/3)
    
    Abstract
    Since the 1990s, with the heightened competition and the strong growth of the international higher education market, an increasing number of rankings have been created that measure the scientific performance of an institution based on data. The Leiden Ranking 2011/2012 (LR) was published early in 2012. Starting from Goldstein and Spiegelhalter's (1996) recommendations for conducting quantitative comparisons among institutions, in this study we undertook a reformulation of the LR by means of multilevel regression models. First, with our models we replicated the ranking results; second, the reanalysis of the LR data showed that only 5% of the PPtop10% total variation is attributable to differences between universities. Beyond that, about 80% of the variation between universities can be explained by differences among countries. If covariates are included in the model the differences among most of the universities become meaningless. Our findings have implications for conducting university rankings in general and for the LR in particular. For example, with Goldstein-adjusted confidence intervals, it is possible to interpret the significance of differences among universities meaningfully: Rank differences among universities should be interpreted as meaningful only if their confidence intervals do not overlap.
  13. Bornmann, L.; Mutz, R.: Growth rates of modern science : a bibliometric analysis based on the number of publications and cited references (2015) 0.01
    0.011991608 = product of:
      0.035974823 = sum of:
        0.035974823 = weight(_text_:based in 2261) [ClassicSimilarity], result of:
          0.035974823 = score(doc=2261,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23539014 = fieldWeight in 2261, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2261)
      0.33333334 = coord(1/3)
    
    Abstract
    Many studies (in information science) have looked at the growth of science. In this study, we reexamine the question of the growth of science. To do this we (a) use current data up to publication year 2012 and (b) analyze the data across all disciplines and also separately for the natural sciences and for the medical and health sciences. Furthermore, the data were analyzed with an advanced statistical technique, segmented regression analysis, which can identify specific segments with similar growth rates in the history of science. The study is based on two different sets of bibliometric data: (a) the number of publications held as source items in the Web of Science (WoS, Thomson Reuters) per publication year and (b) the number of cited references in the publications of the source items per cited reference year. We looked at the rate at which science has grown since the mid-1600s. In our analysis of cited references we identified three essential growth phases in the development of science, which each led to growth rates tripling in comparison with the previous phase: from less than 1% up to the middle of the 18th century, to 2 to 3% up to the period between the two world wars, and 8 to 9% to 2010.
  14. Bornmann, L.; Ye, A.; Ye, F.: Identifying landmark publications in the long run using field-normalized citation data (2018) 0.01
    0.011991608 = product of:
      0.035974823 = sum of:
        0.035974823 = weight(_text_:based in 4196) [ClassicSimilarity], result of:
          0.035974823 = score(doc=4196,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23539014 = fieldWeight in 4196, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4196)
      0.33333334 = coord(1/3)
    
    Abstract
    The purpose of this paper is to propose an approach for identifying landmark papers in the long run. These publications reach a very high level of citation impact and are able to remain on this level across many citing years. In recent years, several studies have been published which deal with the citation history of publications and try to identify landmark publications. Design/methodology/approach - In contrast to other studies published hitherto, this study is based on a broad data set with papers published between 1980 and 1990 for identifying the landmark papers. The authors analyzed the citation histories of about five million papers across 25 years. Findings - The results of this study reveal that 1,013 papers (less than 0.02 percent) are "outstandingly cited" in the long run. The cluster analyses of the papers show that they received the high impact level very soon after publication and remained on this level over decades. Only a slight impact decline is visible over the years. Originality/value - For practical reasons, approaches for identifying landmark papers should be as simple as possible. The approach proposed in this study is based on standard methods in bibliometrics.
  15. Marx, W.; Bornmann, L.; Barth, A.; Leydesdorff, L.: Detecting the historical roots of research fields by reference publication year spectroscopy (RPYS) (2014) 0.01
    0.011871087 = product of:
      0.03561326 = sum of:
        0.03561326 = weight(_text_:based in 1238) [ClassicSimilarity], result of:
          0.03561326 = score(doc=1238,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23302436 = fieldWeight in 1238, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1238)
      0.33333334 = coord(1/3)
    
    Abstract
    We introduce the quantitative method named "Reference Publication Year Spectroscopy" (RPYS). With this method one can determine the historical roots of research fields and quantify their impact on current research. RPYS is based on the analysis of the frequency with which references are cited in the publications of a specific research field in terms of the publication years of these cited references. The origins show up in the form of more or less pronounced peaks mostly caused by individual publications that are cited particularly frequently. In this study, we use research on graphene and on solar cells to illustrate how RPYS functions, and what results it can deliver.
  16. Bornmann, L.; Moya Anegón, F. de; Mutz, R.: Do universities or research institutions with a specific subject profile have an advantage or a disadvantage in institutional rankings? (2013) 0.01
    0.010175217 = product of:
      0.03052565 = sum of:
        0.03052565 = weight(_text_:based in 1109) [ClassicSimilarity], result of:
          0.03052565 = score(doc=1109,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.19973516 = fieldWeight in 1109, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=1109)
      0.33333334 = coord(1/3)
    
    Abstract
    Using data compiled for the SCImago Institutions Ranking, we look at whether the subject area type an institution (university or research-focused institution) belongs to (in terms of the fields researched) has an influence on its ranking position. We used latent class analysis to categorize institutions based on their publications in certain subject areas. Even though this categorization does not relate directly to scientific performance, our results show that it exercises an important influence on the outcome of a performance measurement: Certain subject area types of institutions have an advantage in the ranking positions when compared with others. This advantage manifests itself not only when performance is measured with an indicator that is not field-normalized but also for indicators that are field-normalized.
  17. Bornmann, L.: Interrater reliability and convergent validity of F1000Prime peer review (2015) 0.01
    0.010175217 = product of:
      0.03052565 = sum of:
        0.03052565 = weight(_text_:based in 2328) [ClassicSimilarity], result of:
          0.03052565 = score(doc=2328,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.19973516 = fieldWeight in 2328, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2328)
      0.33333334 = coord(1/3)
    
    Abstract
    Peer review is the backbone of modern science. F1000Prime is a postpublication peer review system of the biomedical literature (papers from medical and biological journals). This study is concerned with the interrater reliability and convergent validity of the peer recommendations formulated in the F1000Prime peer review system. The study is based on about 100,000 papers with recommendations from faculty members. Even if intersubjectivity plays a fundamental role in science, the analyses of the reliability of the F1000Prime peer review system show a rather low level of agreement between faculty members. This result is in agreement with most other studies that have been published on the journal peer review system. Logistic regression models are used to investigate the convergent validity of the F1000Prime peer review system. As the results show, the proportion of highly cited papers among those selected by the faculty members is significantly higher than expected. In addition, better recommendation scores are also associated with higher performing papers.
  18. Dobrota, M.; Bulajic, M.; Bornmann, L.; Jeremic, V.: A new approach to the QS university ranking using the composite I-distance indicator : uncertainty and sensitivity analyses (2016) 0.01
    0.010175217 = product of:
      0.03052565 = sum of:
        0.03052565 = weight(_text_:based in 2500) [ClassicSimilarity], result of:
          0.03052565 = score(doc=2500,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.19973516 = fieldWeight in 2500, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2500)
      0.33333334 = coord(1/3)
    
    Abstract
    Some major concerns of universities are to provide quality in higher education and enhance global competitiveness, thus ensuring a high global rank and an excellent performance evaluation. This article examines the Quacquarelli Symonds (QS) World University Ranking methodology, pointing to a drawback of using subjective, possibly biased, weightings to build a composite indicator (QS scores). We propose an alternative approach to creating QS scores, which is referred to as the composite I-distance indicator (CIDI) methodology. The main contribution is the proposal of a composite indicator weights correction based on the CIDI methodology. It leads to the improved stability and reduced uncertainty of the QS ranking system. The CIDI methodology is also applicable to other university rankings by proposing a specific statistical approach to creating a composite indicator.
  19. Bornmann, L.; Daniel, H.D.: What do citation counts measure? : a review of studies on citing behavior (2008) 0.01
    0.008479347 = product of:
      0.025438042 = sum of:
        0.025438042 = weight(_text_:based in 1729) [ClassicSimilarity], result of:
          0.025438042 = score(doc=1729,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.16644597 = fieldWeight in 1729, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1729)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to present a narrative review of studies on the citing behavior of scientists, covering mainly research published in the last 15 years. Based on the results of these studies, the paper seeks to answer the question of the extent to which scientists are motivated to cite a publication not only to acknowledge intellectual and cognitive influences of scientific peers, but also for other, possibly non-scientific, reasons. Design/methodology/approach - The review covers research published from the early 1960s up to mid-2005 (approximately 30 studies on citing behavior, reporting results in about 40 publications). Findings - The general tendency of the results of the empirical studies makes it clear that citing behavior is not motivated solely by the wish to acknowledge intellectual and cognitive influences of colleague scientists, since the individual studies reveal also other, in part non-scientific, factors that play a part in the decision to cite. However, the results of the studies must also be deemed scarcely reliable: the studies vary widely in design, and their results can hardly be replicated. Many of the studies have methodological weaknesses. Furthermore, there is evidence that the different motivations of citers are "not so different or 'randomly given' to such an extent that the phenomenon of citation would lose its role as a reliable measure of impact". Originality/value - Given the increasing importance of evaluative bibliometrics in the world of scholarship, the question "What do citation counts measure?" is a particularly relevant and topical issue.
  20. Bornmann, L.; Mutz, R.; Daniel, H.D.: Do we need the h index and its variants in addition to standard bibliometric measures? (2009) 0.01
    0.008479347 = product of:
      0.025438042 = sum of:
        0.025438042 = weight(_text_:based in 2861) [ClassicSimilarity], result of:
          0.025438042 = score(doc=2861,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.16644597 = fieldWeight in 2861, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2861)
      0.33333334 = coord(1/3)
    
    Abstract
    In this study, we investigate whether there is a need for the h index and its variants in addition to standard bibliometric measures (SBMs). Results from our recent study (L. Bornmann, R. Mutz, & H.-D. Daniel, 2008) have indicated that there are two types of indices: One type of indices (e.g., h index) describes the most productive core of a scientist's output and informs about the number of papers in the core. The other type of indices (e.g., a index) depicts the impact of the papers in the core. In evaluative bibliometric studies, the two dimensions quantity and quality of output are usually assessed using the SBMs number of publications (for the quantity dimension) and total citation counts (for the impact dimension). We additionally included the SBMs into the factor analysis. The results of the newly calculated analysis indicate that there is a high intercorrelation between number of publications and the indices that load substantially on the factor Quantity of the Productive Core as well as between total citation counts and the indices that load substantially on the factor Impact of the Productive Core. The high-loading indices and SBMs within one performance dimension could be called redundant in empirical application, as high intercorrelations between different indicators are a sign for measuring something similar (or the same). Based on our findings, we propose the use of any pair of indicators (one relating to the number of papers in a researcher's productive core and one relating to the impact of these core papers) as a meaningful approach for comparing scientists.