Search (33 results, page 1 of 2)

  • Filter: author_ss:"Bornmann, L."
  1. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: The relative influences of government funding and international collaboration on citation impact (2019) 0.04
    Abstract
    A recent publication in Nature reports that public R&D funding is only weakly correlated with the citation impact of a nation's articles as measured by the field-weighted citation index (FWCI; defined by Scopus). On the basis of the supplementary data, we up-scaled the design using Web of Science data for the decade 2003-2013 and OECD funding data for the corresponding decade assuming a 2-year delay (2001-2011). Using negative binomial regression analysis, we found very small coefficients, but the effects of international collaboration are positive and statistically significant, whereas the effects of government funding are negative, an order of magnitude smaller, and statistically nonsignificant (in two of three analyses). In other words, international collaboration improves the impact of research articles, whereas more government funding tends to have a small adverse effect when comparing OECD countries.
    Date
    8. 1.2019 18:22:45
  2. Leydesdorff, L.; Bornmann, L.: The operationalization of "fields" as WoS subject categories (WCs) in evaluative bibliometrics : the cases of "library and information science" and "science & technology studies" (2016) 0.02
    Abstract
    Normalization of citation scores using reference sets based on Web of Science subject categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to support bibliometric normalization in evaluation practices because of "indexer effects." Can compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
    Aid
    Web of Science
  3. Bornmann, L.; Thor, A.; Marx, W.; Schier, H.: ¬The application of bibliometrics to research evaluation in the humanities and social sciences : an exploratory study using normalized Google Scholar data for the publications of a research institute (2016) 0.02
    Abstract
    In the humanities and social sciences, bibliometric methods for the assessment of research performance are (so far) less common. This study uses a concrete example in an attempt to evaluate a research institute from the area of social sciences and humanities with the help of data from Google Scholar (GS). In order to use GS for a bibliometric study, we developed procedures for the normalization of citation impact, building on the procedures of classical bibliometrics. In order to test the convergent validity of the normalized citation impact scores, we calculated normalized scores for a subset of the publications based on data from the Web of Science (WoS) and Scopus. Even if scores calculated with the help of GS and the WoS/Scopus are not identical for the different publication types (considered here), they are so similar that they result in the same assessment of the institute investigated in this study: For example, the institute's papers whose journals are covered in the WoS are cited at about an average rate (compared with the other papers in the journals).
  4. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016) 0.02
    Abstract
    Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader count data, using publications of the year 2012 from the Web of Science. Example networks are shown and explained. The reader can use our base maps to visualize other results with VOSviewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of using our base maps is that the user does not need to produce a network based on all data (e.g., from one year), but can collect the Mendeley data for a single institution (or for journals or topics) and match them with our already produced information. Generation of such large-scale networks is still a demanding task despite the available computer power and digital data availability. Therefore, it is very useful to have base maps and create the network with the overlay technique.
  5. Bornmann, L.; Haunschild, R.: An empirical look at the nature index (2017) 0.02
    Abstract
    In November 2014, the Nature Index (NI) was introduced (see http://www.natureindex.com) by the Nature Publishing Group (NPG). The NI comprises the primary research articles published in the past 12 months in a selection of reputable journals. Starting from two short comments on the NI (Haunschild & Bornmann, 2015a, 2015b), we undertake an empirical analysis of the NI using comprehensive country data. We investigate whether the huge efforts of computing the NI are justified and whether the size-dependent NI indicators should be complemented by size-independent variants. The analysis uses data from the Max Planck Digital Library in-house database (which is based on Web of Science data) and from the NPG. In the first step of the analysis, we correlate the NI with other metrics that are simpler to generate than the NI. The resulting large correlation coefficients indicate that the NI produces results similar to those of simpler solutions. In the second step of the analysis, relative and size-independent variants of the NI are generated that should additionally be presented by the NPG. The size-dependent NI indicators favor large countries (or institutions), so that top-performing small countries (or institutions) do not come into the picture.
  6. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.01
    Abstract
    According to current research in bibliometrics, percentiles (or percentile rank classes) are the most suitable method for normalizing the citation counts of individual publications in terms of the subject area, the document type, and the publication year. Up to now, bibliometric research has concerned itself primarily with the calculation of percentiles. This study suggests how percentiles (and percentile rank classes) can be analyzed meaningfully for an evaluation study. Publication sets from four universities are compared with each other to provide sample data. These suggestions take into account, on the one hand, the distribution of percentiles over the publications in the sets (here: universities) and, on the other hand, concentrate on the range of publications with the highest citation impact, that is, the range that is usually of most interest in the evaluation of scientific performance.
    Date
    22. 3.2013 19:44:17
  7. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.01
    Date
    18. 3.2014 19:13:22
  8. Bornmann, L.: What is societal impact of research and how can it be assessed? : a literature survey (2013) 0.01
    Abstract
    Since the 1990s, the scope of research evaluations has become broader, as the societal products (outputs), societal use (societal references), and societal benefits (changes in society) of research come into scope. Society can reap the benefits of successful research studies only if the results are converted into marketable and consumable products (e.g., medicaments, diagnostic tools, machines, and devices) or services. A series of different names has been introduced to refer to the societal impact of research: third-stream activities, societal benefits, societal quality, usefulness, public values, knowledge transfer, and societal relevance. What most of these names are concerned with is the assessment of social, cultural, environmental, and economic returns (impact and effects) from results (research output) or products (research outcome) of publicly funded research. This review presents existing research on, and practices employed in, the assessment of societal impact in the form of a literature survey. The objective is for this review to serve as a basis for the development of robust and reliable methods of societal impact measurement.
  9. Bornmann, L.; Daniel, H.-D.: Multiple publication on a single research study: does it pay? : The influence of number of research articles on total citation counts in biomedicine (2007) 0.01
    Abstract
    Scientists may seek to report a single definable body of research in more than one publication, that is, in repeated reports of the same work or in fractional reports, in order to disseminate their research as widely as possible in the scientific community. Up to now, however, it has not been examined whether this strategy of "multiple publication" in fact leads to greater reception of the research. In the present study, we investigate the influence of number of articles reporting the results of a single study on reception in the scientific community (total citation counts of an article on a single study). Our data set consists of 96 applicants for a research fellowship from the Boehringer Ingelheim Fonds (BIF), an international foundation for the promotion of basic research in biomedicine. The applicants reported to us all articles that they had published within the framework of their doctoral research projects. On this single project, the applicants had published from 1 to 16 articles (M = 4; Mdn = 3). The results of a regression model with an interaction term show that the practice of multiple publication of research study results does in fact lead to greater reception of the research (higher total citation counts) in the scientific community. However, reception is dependent upon length of article: the longer the article, the more total citation counts increase with the number of articles. Thus, it pays for scientists to practice multiple publication of study results in the form of sizable reports.
  10. Bornmann, L.; Leydesdorff, L.: Which cities produce more excellent papers than can be expected? : a new mapping approach, using Google Maps, based on statistical significance testing (2011) 0.01
    Abstract
    The methods presented in this paper allow for a statistical analysis revealing centers of excellence around the world using programs that are freely available. Based on Web of Science data (a fee-based database), field-specific excellence can be identified in cities where highly cited papers were published more frequently than can be expected. Compared to the mapping approaches published hitherto, our approach is more analytically oriented by allowing the assessment of an observed number of excellent papers for a city against the expected number. Top performers in output are cities in which authors are located who publish a statistically significant higher number of highly cited papers than can be expected for these cities. As sample data for physics, chemistry, and psychology show, these cities do not necessarily have a high output of highly cited papers.
  11. Marx, W.; Bornmann, L.; Barth, A.; Leydesdorff, L.: Detecting the historical roots of research fields by reference publication year spectroscopy (RPYS) (2014) 0.01
    Abstract
    We introduce the quantitative method named "Reference Publication Year Spectroscopy" (RPYS). With this method one can determine the historical roots of research fields and quantify their impact on current research. RPYS is based on the analysis of the frequency with which references are cited in the publications of a specific research field in terms of the publication years of these cited references. The origins show up in the form of more or less pronounced peaks mostly caused by individual publications that are cited particularly frequently. In this study, we use research on graphene and on solar cells to illustrate how RPYS functions, and what results it can deliver.
  12. Bornmann, L.; Moya Anegón, F.de: What proportion of excellent papers makes an institution one of the best worldwide? : Specifying thresholds for the interpretation of the results of the SCImago Institutions Ranking and the Leiden Ranking (2014) 0.01
    Abstract
    University rankings generally present users with the problem of placing the results given for an institution in context. Only a comparison with the performance of all other institutions makes it possible to say exactly where an institution stands. In order to interpret the results of the SCImago Institutions Ranking (based on Scopus data) and the Leiden Ranking (based on Web of Science data), in this study we offer thresholds with which it is possible to assess whether an institution belongs to the top 1%, top 5%, top 10%, top 25%, or top 50% of institutions in the world. The thresholds are based on the excellence rate or PPtop 10%. Both indicators measure the proportion of an institution's publications which belong to the 10% most frequently cited publications and are the most important indicators for measuring institutional impact. For example, while an institution must achieve a value of 24.63% in the Leiden Ranking 2013 to be considered one of the top 1% of institutions worldwide, the SCImago Institutions Ranking requires 30.2%.
  13. Bornmann, L.; Mutz, R.: Growth rates of modern science : a bibliometric analysis based on the number of publications and cited references (2015) 0.01
    Abstract
    Many studies (in information science) have looked at the growth of science. In this study, we reexamine the question of the growth of science. To do this we (a) use current data up to publication year 2012 and (b) analyze the data across all disciplines and also separately for the natural sciences and for the medical and health sciences. Furthermore, the data were analyzed with an advanced statistical technique, segmented regression analysis, which can identify specific segments with similar growth rates in the history of science. The study is based on two different sets of bibliometric data: (a) the number of publications held as source items in the Web of Science (WoS, Thomson Reuters) per publication year and (b) the number of cited references in the publications of the source items per cited reference year. We looked at the rate at which science has grown since the mid-1600s. In our analysis of cited references we identified three essential growth phases in the development of science, which each led to growth rates tripling in comparison with the previous phase: from less than 1% up to the middle of the 18th century, to 2 to 3% up to the period between the two world wars, and 8 to 9% to 2010.
  14. Bornmann, L.: How much does the expected number of citations for a publication change if it contains the address of a specific scientific institute? : a new approach for the analysis of citation data on the institutional level based on regression models (2016) 0.01
    Abstract
    Citation data for institutes are generally provided as numbers of citations or as relative citation rates (as, for example, in the Leiden Ranking). These numbers can then be compared between the institutes. This study aims to present a new approach for the evaluation of citation data at the institutional level, based on regression models. As example data, the study includes all articles and reviews from the Web of Science for the publication year 2003 (n = 886,416 papers). The study is based on an in-house database of the Max Planck Society. The study investigates how much the expected number of citations for a publication changes if it contains the address of an institute. The calculation of the expected values allows, on the one hand, investigating how the citation impact of the papers of an institute appears in comparison with the total of all papers. On the other hand, the expected values for several institutes can be compared with one another or with a set of randomly selected publications. Besides the institutes, the regression models include factors which can be assumed to have a general influence on citation counts (e.g., the number of authors).
  15. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016) 0.01
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
  16. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    Date
    22. 8.2014 17:05:18
  17. Bornmann, L.; Leydesdorff, L.: Statistical tests and research assessments : a comment on Schneider (2012) (2013) 0.00
  18. Mutz, R.; Bornmann, L.; Daniel, H.-D.: Testing for the fairness and predictive validity of research funding decisions : a multilevel multiple imputation for missing data approach using ex-ante and ex-post peer evaluation data from the Austrian science fund (2015) 0.00
    Abstract
    It is essential for research funding organizations to ensure both the validity and fairness of the grant approval procedure. The ex-ante peer evaluation (EXANTE) of N = 8,496 grant applications submitted to the Austrian Science Fund from 1999 to 2009 was statistically analyzed. For 1,689 funded research projects an ex-post peer evaluation (EXPOST) was also available; for the rest of the grant applications a multilevel missing data imputation approach was used to consider verification bias for the first time in peer-review research. Without imputation, the predictive validity of EXANTE was low (r = .26) but underestimated due to verification bias, and with imputation it was r = .49. That is, the decision-making procedure is capable of selecting the best research proposals for funding. In the EXANTE there were several potential biases (e.g., gender). With respect to the EXPOST there was only one real bias (discipline-specific and year-specific differential prediction). The novelty of this contribution is, first, the combining of theoretical concepts of validity and fairness with a missing data imputation approach to correct for verification bias and, second, multilevel modeling to test peer review-based funding decisions for both validity and fairness in terms of potential and real biases.
  19. Bornmann, L.: Complex tasks and simple solutions : the use of heuristics in the evaluation of research (2015) 0.00
  20. Bornmann, L.; Marx, W.: The wisdom of citing scientists (2014) 0.00
    Abstract
    This Brief Communication discusses the benefits of citation analysis in research evaluation based on Galton's "Wisdom of Crowds" (1907). Citations are based on the assessment of many, which is why they can be considered to have some credibility. However, we show that citations are incomplete assessments and that one cannot assume that a high number of citations correlates with a high level of usefulness. Only when one knows that a rarely cited paper has been widely read is it possible to say, strictly speaking, that it was obviously of little use for further research. Using a comparison with "like" data, we try to show that cited-reference analysis allows for a more meaningful analysis of bibliometric data than times-cited analysis.