Search (60 results, page 3 of 3)

  • Active filter: author_ss:"Bornmann, L."
  1. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019)
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10% (the proportion of the top-10% most highly cited papers assigned to a university), we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each ecosystem of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.5, S.509-525
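    Code sketch
    A minimal sketch of the grouping procedure described in the abstract above, assuming invented universities and stability intervals for PPtop-10%: universities whose intervals overlap are linked, and the connected components of that network form the homogeneous groups. Overlap is not transitive, which is one reason the resulting clusters are not well-defined divisions.
```python
import itertools

# Invented (lower, upper) stability intervals for the PPtop-10% indicator.
intervals = {
    "Univ A": (14.2, 16.8),
    "Univ B": (15.9, 18.1),
    "Univ C": (9.0, 11.4),
    "Univ D": (10.8, 12.9),
    "Univ E": (5.1, 7.0),
}

def overlaps(a, b):
    # Two universities are treated as homogeneous if neither interval
    # lies entirely above the other.
    return a[0] <= b[1] and b[0] <= a[1]

# Undirected "not significantly different" network.
adj = {u: set() for u in intervals}
for u, v in itertools.combinations(intervals, 2):
    if overlaps(intervals[u], intervals[v]):
        adj[u].add(v)
        adj[v].add(u)

def components(adj):
    # Connected components of the network = groups of universities.
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(adj[node] - comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups

print(components(adj))
# [['Univ A', 'Univ B'], ['Univ C', 'Univ D'], ['Univ E']]
```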
  2. Egghe, L.; Bornmann, L.: Fallout and miss in journal peer review (2013)
    Abstract
    Purpose - The authors exploit the analogy between journal peer review and information retrieval in order to quantify some imperfections of journal peer review.
    Design/methodology/approach - The authors define fallout rate and missing rate in order to describe quantitatively the weak papers that were accepted and the strong papers that were missed, respectively. To assess the quality of manuscripts the authors use bibliometric measures.
    Findings - Fallout rate and missing rate are put in relation with the hitting rate and success rate. Conclusions are drawn on what fraction of weak papers will be accepted in order to have a certain fraction of strong accepted papers.
    Originality/value - The paper illustrates that these curves are new in peer review research when interpreted in the information retrieval terminology.
    Source
    Journal of documentation. 69(2013) no.3, S.411-416
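    Code sketch
    The fallout and missing rates defined above translate directly into counts over accepted and rejected manuscripts. The citation threshold separating "strong" from "weak" papers and all outcomes below are invented for illustration, not taken from the paper.
```python
# Invented review outcomes: (accepted_by_journal, citations_received_later).
papers = [(True, 45), (True, 3), (True, 28), (False, 52),
          (False, 2), (False, 1), (True, 0), (False, 31)]

STRONG = 25  # assumed bibliometric threshold for a "strong" paper

strong = [accepted for accepted, cites in papers if cites >= STRONG]
weak   = [accepted for accepted, cites in papers if cites < STRONG]

fallout = sum(weak) / len(weak)              # weak papers that were accepted
miss    = strong.count(False) / len(strong)  # strong papers that were missed

print(f"fallout rate: {fallout:.2f}")  # 0.50 (2 of 4 weak papers accepted)
print(f"miss rate:    {miss:.2f}")     # 0.50 (2 of 4 strong papers rejected)
```
    Lowering the acceptance bar reduces the miss rate only at the cost of a higher fallout rate, which is the trade-off the paper quantifies.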
  3. Ye, F.Y.; Bornmann, L.: "Smart girls" versus "sleeping beauties" in the sciences : the identification of instant and delayed recognition by using the citation angle (2018)
    Abstract
    In recent years, a number of studies have introduced methods for identifying papers with delayed recognition (so-called "sleeping beauties," SBs) or have presented single publications as cases of SBs. Most recently, Ke, Ferrara, Radicchi, and Flammini (2015, Proceedings of the National Academy of Sciences of the USA, 112(24), 7426-7431) proposed the so-called "beauty coefficient" (denoted as B) to quantify how much a given paper can be considered as a paper with delayed recognition. In this study, the new term smart girl (SG) is suggested to differentiate instant credit or "flashes in the pan" from SBs. Although SG and SB are qualitatively defined, the dynamic citation angle β is introduced in this study as a simple way for identifying SGs and SBs quantitatively, complementing the beauty coefficient B. The citation angles for all articles from 1980 (n = 166,870) in the natural sciences are calculated for identifying SGs and SBs and their extent. We reveal that about 3% of the articles are typical SGs and about 0.1% typical SBs. The potential advantages of the citation angle approach are explained.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.3, S.359-367
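    Code sketch
    The beauty coefficient B that the citation angle complements follows Ke et al. (2015): draw a straight line from the citations in the publication year to the citation peak, and sum the gaps between that line and the actual citation curve, normalized by the yearly counts. The sketch below implements that published definition on two invented citation histories; the exact definition of the citation angle β is given in the paper itself and is not reproduced here.
```python
def beauty_coefficient(c):
    """B of Ke et al. (2015) for yearly citation counts c[0..n],
    where c[t] is the number of citations in year t after publication."""
    t_m = max(range(len(c)), key=c.__getitem__)   # year of the citation peak
    if t_m == 0:
        return 0.0   # peak in the publication year: no delayed recognition
    slope = (c[t_m] - c[0]) / t_m                 # reference line to the peak
    return sum((slope * t + c[0] - c[t]) / max(1, c[t]) for t in range(t_m + 1))

sb = [0, 1, 0, 1, 2, 1, 2, 3, 10, 30, 60]   # dormant, then a late peak
sg = [25, 30, 18, 9, 4, 2, 1, 0, 0, 0, 0]   # instant recognition, early peak

print(beauty_coefficient(sb))  # large B -> sleeping-beauty-like history
print(beauty_coefficient(sg))  # B near 0 -> smart-girl-like history
```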
  4. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016)
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.12, S.3095-3100
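    Code sketch
    Whole-number versus fractional counting of institutional addresses, the distinction that decides whether Harvard or Leiden leads the ranking above, works as follows; the papers and affiliations are invented.
```python
from collections import Counter

# Invented papers, each given as the list of its institutional addresses.
papers = [
    ["Harvard", "MIT", "Stanford", "Toronto"],
    ["Harvard", "MIT", "Oxford"],
    ["Harvard", "Amsterdam"],
    ["Leiden"],
    ["Leiden", "Amsterdam"],
]

whole = Counter()       # one full credit per institution per paper
fractional = Counter()  # the addresses of a paper share one credit

for addresses in papers:
    for inst in set(addresses):
        whole[inst] += 1
    for inst in addresses:
        fractional[inst] += 1 / len(addresses)

print(whole["Harvard"], whole["Leiden"])    # 3 2      -> Harvard leads
print(round(fractional["Harvard"], 2),
      round(fractional["Leiden"], 2))       # 1.08 1.5 -> Leiden leads
```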
  5. Leydesdorff, L.; Bornmann, L.: Mapping (USPTO) patent data using overlays to Google Maps (2012)
    Abstract
    A technique is developed using patent information available online (at the U.S. Patent and Trademark Office) for the generation of Google Maps. The overlays indicate both the quantity and the quality of patents at the city level. This information is relevant for research questions in technology analysis, innovation studies, and evolutionary economics, as well as economic geography. The resulting maps can also be relevant for technological innovation policies and research and development management, because the U.S. market can be considered the leading market for patenting and patent competition. In addition to the maps, the routines provide quantitative data about the patents for statistical analysis. The cities on the map are colored according to the results of significance tests. The overlays are explored for the Netherlands as a "national system of innovations" and further elaborated in two cases of emerging technologies: ribonucleic acid interference (RNAi) and nanotechnology.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1442-1458
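    Code sketch
    A rough sketch of the overlay generation described above, with invented city coordinates and patent counts: each city becomes a KML placemark, and a simple one-sample z-test of the city's share of highly cited patents against the overall share stands in for the significance tests that color the cities in the paper.
```python
import math

# Invented city data: (latitude, longitude, patents, highly_cited_patents).
cities = {
    "Eindhoven": (51.44, 5.47, 400, 60),
    "Amsterdam": (52.37, 4.90, 150, 12),
    "Delft":     (52.01, 4.36, 120, 25),
}

n_all = sum(v[2] for v in cities.values())
p_all = sum(v[3] for v in cities.values()) / n_all  # overall quality share

def z_score(top, n):
    # One-sample z-test of a city's quality share against the overall share.
    return (top / n - p_all) / math.sqrt(p_all * (1 - p_all) / n)

parts = ['<?xml version="1.0" encoding="UTF-8"?>',
         '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>']
for name, (lat, lon, n, top) in cities.items():
    z = z_score(top, n)
    verdict = "above" if z > 1.96 else "below" if z < -1.96 else "near"
    parts.append(
        f"<Placemark><name>{name}: {n} patents</name>"
        f"<description>quality {verdict} expectation (z = {z:.2f})</description>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>")
parts.append("</Document></kml>")

with open("patents.kml", "w") as f:  # load as an overlay in Google Maps/Earth
    f.write("\n".join(parts))
```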
  6. Bornmann, L.: Nature's top 100 revisited (2015)
    Content
    Relates to: Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2714. Cf.: http://onlinelibrary.wiley.com/doi/10.1002/asi.23554/abstract.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.10, S.2166
  7. Bornmann, L.; Haunschild, R.: An empirical look at the nature index (2017)
    Abstract
    In November 2014, the Nature Index (NI) was introduced (see http://www.natureindex.com) by the Nature Publishing Group (NPG). The NI comprises the primary research articles published in the past 12 months in a selection of reputable journals. Starting from two short comments on the NI (Haunschild & Bornmann, 2015a, 2015b), we undertake an empirical analysis of the NI using comprehensive country data. We investigate whether the huge efforts of computing the NI are justified and whether the size-dependent NI indicators should be complemented by size-independent variants. The analysis uses data from the Max Planck Digital Library in-house database (which is based on Web of Science data) and from the NPG. In the first step of the analysis, we correlate the NI with other metrics that are simpler to generate than the NI. The resulting large correlation coefficients indicate that the NI produces results similar to those of simpler solutions. In the second step of the analysis, relative and size-independent variants of the NI are generated that should be additionally presented by the NPG. The size-dependent NI indicators favor large countries (or institutions), and the top-performing small countries (or institutions) do not come into the picture.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.653-659
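    Code sketch
    The size-dependent versus size-independent distinction discussed above amounts to dividing the NI count by a measure of country size; all figures below are invented, and total paper output stands in for whatever denominator one prefers.
```python
# Invented country data: (NI article count, total publication output).
data = {
    "USA":         (18000, 600000),
    "Switzerland": ( 1400,  40000),
    "China":       ( 9000, 500000),
}

for country, (ni, total) in data.items():
    print(f"{country:12s} NI = {ni:6d}   NI/total = {ni / total:.4f}")

# Size-dependent ranking: USA first by a wide margin.
# Size-independent ranking: Switzerland, a top-performing small country
# that the size-dependent NI overlooks, moves to the top.
```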
  8. Collins, H.; Bornmann, L.: On scientific misconduct (2014)
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.5, S.1089-1090
  9. Bornmann, L.: Scientific peer review (2011)
    Source
    Annual review of information science and technology. 45(2011) no.1, S.197-245
  10. Dobrota, M.; Bulajic, M.; Bornmann, L.; Jeremic, V.: A new approach to the QS university ranking using the composite I-distance indicator : uncertainty and sensitivity analyses (2016)
    Abstract
    Some major concerns of universities are to provide quality in higher education and enhance global competitiveness, thus ensuring a high global rank and an excellent performance evaluation. This article examines the Quacquarelli Symonds (QS) World University Ranking methodology, pointing to a drawback of using subjective, possibly biased, weightings to build a composite indicator (QS scores). We propose an alternative approach to creating QS scores, which is referred to as the composite I-distance indicator (CIDI) methodology. The main contribution is the proposal of a composite indicator weights correction based on the CIDI methodology, which leads to improved stability and reduced uncertainty of the QS ranking system. The CIDI methodology is also applicable to other university rankings by proposing a specific statistical approach to creating a composite indicator.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.1, S.200-211
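    Code sketch
    A sketch of the I-distance aggregation behind the CIDI approach, on invented indicator data. The Ivanovic I-distance weights each successive indicator by how much information it adds beyond the ones before it, using partial correlations; published formulations differ in detail (e.g., (1 - r) versus (1 - r²) damping), and the (1 - r²) variant is assumed here.
```python
import numpy as np

def partial_corr(x, y, Z):
    # Correlation of x and y after regressing out the columns of Z.
    if Z.shape[1] == 0:
        return np.corrcoef(x, y)[0, 1]
    A = np.column_stack([np.ones(len(x)), Z])
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

def i_distance(X, ref):
    """I-distance of each row of X from a fictive reference entity `ref`;
    indicator columns are assumed to be ordered by importance."""
    n, k = X.shape
    sigma = X.std(axis=0, ddof=1)
    D = np.zeros(n)
    for i in range(k):
        damp = 1.0
        for j in range(i):
            r = partial_corr(X[:, i], X[:, j], X[:, :j])
            damp *= 1.0 - r ** 2  # discount what earlier indicators explain
        D += np.abs(X[:, i] - ref[i]) / sigma[i] * damp
    return D

rng = np.random.default_rng(0)
X = rng.normal(50, 10, size=(30, 4))   # 30 universities, 4 QS-style indicators
scores = i_distance(X, X.min(axis=0))  # reference: fictive worst performer
print(np.argsort(-scores)[:5])         # top five by CIDI-style score
```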
  11. Bornmann, L.; Daniel, H.-D.: Selecting manuscripts for a high-impact journal through peer review : a citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere (2008)
    Abstract
    All journals that use peer review have to deal with the following question: Does the peer review system fulfill its declared objective to select the best scientific work? We investigated the journal peer-review process at Angewandte Chemie International Edition (AC-IE), one of the prime chemistry journals worldwide, and conducted a citation analysis for Communications that were accepted by the journal (n = 878) or rejected but published elsewhere (n = 959). The results of negative binomial regression models show that, holding all other model variables constant, being accepted by AC-IE increases the expected number of citations by up to 50%. A comparison of average citation counts (with 95% confidence intervals) of accepted and rejected (but published elsewhere) Communications with international scientific reference standards was undertaken. As reference standards, (a) mean citation counts for the journal set provided by Thomson Reuters corresponding to the field chemistry and (b) specific reference standards that refer to the subject areas of Chemical Abstracts were used. When compared to reference standards, the mean impact on chemical research is for the most part far above average not only for accepted Communications but also for rejected (but published elsewhere) Communications. However, average and below-average scientific impact is to be expected significantly less frequently for accepted Communications than for rejected Communications. All in all, the results of this study confirm that peer review at AC-IE is able to select the best scientific work with the highest impact on chemical research.
    Content
    See also: Erratum re: Selecting manuscripts for a high-impact journal through peer review: A citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere. In: Journal of the American Society for Information Science and Technology 59(2008) no.12, S.2037-2038.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.11, S.1841-1852
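    Code sketch
    A sketch of the kind of negative binomial regression used in this study, on simulated data (statsmodels is assumed to be available). The group sizes match the abstract (878 accepted, 959 rejected but published elsewhere) and the simulated acceptance effect is set to the +50% reported above; everything else is invented.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
accepted = np.repeat([1.0, 0.0], [878, 959])  # AC-IE decisions, as in the study

# Simulate citations: accepted Communications get ~50% more expected citations.
mu = np.exp(1.8 + np.log(1.5) * accepted)
citations = rng.negative_binomial(2, 2 / (2 + mu))

X = sm.add_constant(accepted)
fit = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()

# exp(slope) estimates the multiplicative effect of acceptance on the
# expected number of citations; ~1.5 here, up to sampling noise.
print(np.exp(fit.params[1]))
```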
  12. Bornmann, L.; Schier, H.; Marx, W.; Daniel, H.-D.: Is interactive open access publishing able to identify high-impact submissions? : a study on the predictive validity of Atmospheric Chemistry and Physics by using percentile rank classes (2011)
    Abstract
    In a comprehensive research project, we investigated the predictive validity of selection decisions and reviewers' ratings at the open access journal Atmospheric Chemistry and Physics (ACP). ACP is a high-impact journal publishing papers on the Earth's atmosphere and the underlying chemical and physical processes. Scientific journals have to deal with the following question concerning the predictive validity: Are in fact the "best" scientific works selected from the manuscripts submitted? In this study we examined whether selecting the "best" manuscripts means selecting papers that after publication show top citation performance as compared to other papers in this research area. First, we appraised the citation impact of later published manuscripts based on the percentile citedness rank classes of the population distribution (scaling in a specific subfield). Second, we analyzed the association between the decisions (n = 677 accepted or rejected, but published elsewhere manuscripts) or ratings (reviewers' ratings for n = 315 manuscripts), respectively, and the citation impact classes of the manuscripts. The results confirm the predictive validity of the ACP peer review system.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.1, S.61-71
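    Code sketch
    The percentile citedness rank classes used above can be sketched as follows: each manuscript's later citation count is converted into a percentile within the population of papers in its subfield, and the percentile is binned into rank classes. The class thresholds and all counts are illustrative assumptions.
```python
# Invented reference population: citation counts of all papers in the
# same subfield (scaling in a specific subfield).
subfield = [0, 0, 1, 1, 2, 3, 3, 4, 5, 6, 8, 9, 12, 15, 18, 22, 30, 45, 60, 120]

def percentile(citations, population):
    # Share of the reference population cited less often (0-100).
    return 100 * sum(c < citations for c in population) / len(population)

CLASSES = [(99, "top 1%"), (95, "top 5%"), (90, "top 10%"),
           (75, "top 25%"), (50, "top 50%"), (0, "bottom 50%")]

def rank_class(citations, population):
    p = percentile(citations, population)
    return next(label for threshold, label in CLASSES if p >= threshold)

for c in (4, 30, 120):
    print(c, "->", rank_class(c, subfield))
# 4 -> bottom 50%, 30 -> top 25%, 120 -> top 5%
```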
  13. Bornmann, L.; Ye, A.; Ye, F.: Identifying landmark publications in the long run using field-normalized citation data (2018)
    Abstract
    The purpose of this paper is to propose an approach for identifying landmark papers in the long run. These publications reach a very high level of citation impact and are able to remain on this level across many citing years. In recent years, several studies have been published which deal with the citation history of publications and try to identify landmark publications.
    Design/methodology/approach - In contrast to other studies published hitherto, this study is based on a broad data set with papers published between 1980 and 1990 for identifying the landmark papers. The authors analyzed the citation histories of about five million papers across 25 years.
    Findings - The results of this study reveal that 1,013 papers (less than 0.02 percent) are "outstandingly cited" in the long run. The cluster analyses of the papers show that they received the high impact level very soon after publication and remained on this level over decades. Only a slight impact decline is visible over the years.
    Originality/value - For practical reasons, approaches for identifying landmark papers should be as simple as possible. The approach proposed in this study is based on standard methods in bibliometrics.
    Source
    Journal of documentation. 74(2018) no.2, S.278-288
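    Code sketch
    A sketch of the "outstandingly cited in the long run" idea, assuming invented yearly field-normalized impact scores: a paper counts as a landmark candidate if it reaches a very high impact level soon after publication and never drops below it in later citing years. The threshold and grace period are assumptions, not the paper's parameters.
```python
# Invented yearly field-normalized impact scores (citation percentiles
# in each citing year) for three papers.
histories = {
    "paper A": [99.5, 99.8, 99.7, 99.6, 99.6, 99.4],  # stays on top
    "paper B": [99.9, 99.0, 90.0, 70.0, 50.0, 30.0],  # early peak, then decay
    "paper C": [40.0, 60.0, 80.0, 95.0, 99.0, 99.5],  # late riser
}

THRESHOLD = 99.0  # assumed "outstandingly cited" level
GRACE = 1         # years allowed for reaching that level after publication

def is_landmark(history):
    # Landmark candidates reach the level quickly and never drop below it.
    return all(score >= THRESHOLD for score in history[GRACE:])

print([p for p, h in histories.items() if is_landmark(h)])  # ['paper A']
```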
  14. Bornmann, L.; Haunschild, R.: Relative Citation Ratio (RCR) : an empirical attempt to study a new field-normalized bibliometric indicator (2017)
    Abstract
    Hutchins, Yuan, Anderson, and Santangelo (2015) proposed the Relative Citation Ratio (RCR) as a new field-normalized impact indicator. This study investigates the RCR by correlating it on the level of single publications with established field-normalized indicators and assessments of the publications by peers. We find that the RCR correlates highly with established field-normalized indicators, but the correlation between RCR and peer assessments is only low to medium.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.1064-1067
  15. Bornmann, L.; Leydesdorff, L.: Statistical tests and research assessments : a comment on Schneider (2012) (2013)
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.6, S.1306-1308
  16. Bornmann, L.; Moya Anegón, F. de; Mutz, R.: Do universities or research institutions with a specific subject profile have an advantage or a disadvantage in institutional rankings? (2013)
    Abstract
    Using data compiled for the SCImago Institutions Ranking, we examine whether the subject profile of an institution (university or research-focused institution), that is, the set of fields it researches, influences its ranking position. We used latent class analysis to categorize institutions based on their publications in certain subject areas. Even though this categorization does not relate directly to scientific performance, our results show that it exercises an important influence on the outcome of a performance measurement: certain subject area types of institutions have an advantage in ranking position compared with others. This advantage manifests itself not only when performance is measured with an indicator that is not field-normalized but also for indicators that are field-normalized.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2310-2316
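    Code sketch
    Field normalization, the step referred to in the last sentence above, divides each paper's citations by the mean citations of its field (and, in practice, its publication year), so that institutions with different subject profiles become comparable; all numbers here are invented.
```python
from collections import defaultdict
from statistics import mean

# Invented papers: (institution, field, citations).
papers = [
    ("Inst Med", "medicine", 40), ("Inst Med", "medicine", 25),
    ("Inst Med", "medicine", 10),
    ("Inst Math", "mathematics", 6), ("Inst Math", "mathematics", 3),
    ("Inst Math", "mathematics", 2),
]

# Field reference values: world average citations per paper in each field.
field_mean = {"medicine": 25.0, "mathematics": 3.5}

raw = defaultdict(list)
normalized = defaultdict(list)
for inst, field, c in papers:
    raw[inst].append(c)
    normalized[inst].append(c / field_mean[field])

for inst in raw:
    print(inst, round(mean(raw[inst]), 2), round(mean(normalized[inst]), 2))
# Raw citations favor the medical institute (25.0 vs 3.67); after field
# normalization both institutions perform alike (1.0 vs ~1.05).
```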
  17. Bornmann, L.: Is there currently a scientific revolution in Scientometrics? (2014)
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.3, S.647-648
  18. Bornmann, L.: What do altmetrics counts mean? : a plea for content analyses (2016)
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.1016-1017
  19. Besselaar, P. van den; Wagner, C.; Bornmann, L.: Correct assumptions? (2016)
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1779
  20. Leydesdorff, L.; Wagner, C.; Bornmann, L.: Replicability and the public/private divide (2016)
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1777-1778