Search (16 results, page 1 of 1)

  • author_ss:"Bornmann, L."
  1. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.07
    0.06763022 = product of:
      0.13526043 = sum of:
        0.13526043 = sum of:
          0.10064465 = weight(_text_:journals in 4186) [ClassicSimilarity], result of:
            0.10064465 = score(doc=4186,freq=4.0), product of:
              0.25656942 = queryWeight, product of:
                5.021064 = idf(docFreq=792, maxDocs=44218)
                0.05109862 = queryNorm
              0.39227062 = fieldWeight in 4186, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.021064 = idf(docFreq=792, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4186)
          0.03461579 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
            0.03461579 = score(doc=4186,freq=2.0), product of:
              0.17893866 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05109862 = queryNorm
              0.19345059 = fieldWeight in 4186, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4186)
      0.5 = coord(1/2)
    
    Abstract
    The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set, including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls. The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs considerably between disciplines, the reflection of these differences in fractionally counted citation distributions cannot be used as a reliable instrument for the classification.
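Fractional counting as described in the abstract can be sketched as follows: each citing paper contributes 1/N of a citation per reference, where N is the length of its reference list. The function name and input format below are illustrative assumptions, not the authors' implementation.

```python
def fractional_citation_count(citing_papers):
    """Count citations fractionally: each citing paper contributes
    1/len(reference_list) per reference instead of a whole count.
    `citing_papers` maps a citing-paper id to its reference list
    (a hypothetical input format, for illustration only)."""
    counts = {}
    for refs in citing_papers.values():
        if not refs:
            continue
        weight = 1.0 / len(refs)
        for cited in refs:
            counts[cited] = counts.get(cited, 0.0) + weight
    return counts

# Two citing papers: one with four references, one with two.
papers = {
    "p1": ["J1", "J1", "J2", "J3"],  # J1 cited twice among 4 refs
    "p2": ["J1", "J3"],
}
fractional_citation_count(papers)
# J1: 2/4 + 1/2 = 1.0; J2: 1/4 = 0.25; J3: 1/4 + 1/2 = 0.75
```

A paper with a long reference list (typical of, say, biomedicine) thus weighs each of its citations less than a paper from a field with short reference lists, which is how the normalization across citation cultures comes about.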
    Date
    22. 1.2011 12:51:07
  2. Leydesdorff, L.; Bornmann, L.: Integrated impact indicators compared with impact factors : an alternative research design with policy implications (2011) 0.03
    0.030816004 = product of:
      0.061632007 = sum of:
        0.061632007 = product of:
          0.123264015 = sum of:
            0.123264015 = weight(_text_:journals in 4919) [ClassicSimilarity], result of:
              0.123264015 = score(doc=4919,freq=6.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.48043144 = fieldWeight in 4919, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4919)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In bibliometrics, the association of "impact" with central-tendency statistics is mistaken. Impacts add up, and citation curves should therefore be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IFs), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on the integration of the citation curves after they have been normalized to the same scale. The results across document sets can then be compared as percentages of the total impact of a reference set. The total number of citations, however, should not be used instead, because it does not take the shape of the citation curves into account. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations and universities, because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories ("Information Science & Library Science" and "Multidisciplinary Sciences"). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
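The integration of percentile ranks described above can be sketched minimally: rank each paper's citation count within a reference set on a 0-100 percentile scale, then sum (rather than average) those ranks. This is an assumption about the mechanics for illustration, not the authors' code.

```python
def i3(citations, reference_set):
    """Integrated Impact Indicator (minimal sketch): sum the
    percentile ranks of a document set's citation counts within a
    reference set, instead of averaging the raw counts."""
    ref = sorted(reference_set)
    n = len(ref)

    def percentile(c):
        # share of the reference set cited less often, on a 0-100 scale
        return 100.0 * sum(1 for r in ref if r < c) / n

    return sum(percentile(c) for c in citations)

# Two papers evaluated against a ten-paper reference set:
reference = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
i3([5, 9], reference)  # 50.0 + 90.0 = 140.0
```

Because each paper contributes its own rank to the sum, the result decomposes exactly across journals or nations, which is the property the abstract emphasizes.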
  3. Leydesdorff, L.; Zhou, P.; Bornmann, L.: How can journal impact factors be normalized across fields of science? : An assessment in terms of percentile ranks and fractional counts (2013) 0.03
    0.030816004 = product of:
      0.061632007 = sum of:
        0.061632007 = product of:
          0.123264015 = sum of:
            0.123264015 = weight(_text_:journals in 532) [ClassicSimilarity], result of:
              0.123264015 = score(doc=532,freq=6.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.48043144 = fieldWeight in 532, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=532)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (a) fractional counting on the impact factor (IF) and (b) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the 2-year IF (IF2), we consider the 5-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/NSF (National Science Foundation) provides statistically significant between-field effects. Using fractional counting, the between-field variance is reduced by 91.7% in the case of IF5 and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to a significant improvement in the normalization across fields.
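The between-field variance used to judge the normalizations above can be sketched as the size-weighted variance of the field means around the grand mean; a normalization "works" to the extent that it drives this quantity toward zero. This is a textbook decomposition, offered here as an illustrative assumption about the test, not the authors' exact procedure.

```python
def between_field_variance(scores_by_field):
    """Between-field variance (sketch): variance of the field means
    around the grand mean, weighted by field size."""
    all_scores = [s for scores in scores_by_field.values() for s in scores]
    grand_mean = sum(all_scores) / len(all_scores)
    return sum(
        len(scores) * (sum(scores) / len(scores) - grand_mean) ** 2
        for scores in scores_by_field.values()
    ) / len(all_scores)

# Two toy fields with very different citation levels:
fields = {"A": [1, 1], "B": [3, 3]}
between_field_variance(fields)  # 1.0

# Dividing each field by its own mean removes the between-field effect:
normalized = {f: [s / (sum(v) / len(v)) for s in v] for f, v in fields.items()}
between_field_variance(normalized)  # 0.0
```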
  4. Bornmann, L.; Thor, A.; Marx, W.; Schier, H.: ¬The application of bibliometrics to research evaluation in the humanities and social sciences : an exploratory study using normalized Google Scholar data for the publications of a research institute (2016) 0.03
    0.025161162 = product of:
      0.050322324 = sum of:
        0.050322324 = product of:
          0.10064465 = sum of:
            0.10064465 = weight(_text_:journals in 3160) [ClassicSimilarity], result of:
              0.10064465 = score(doc=3160,freq=4.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.39227062 = fieldWeight in 3160, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3160)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In the humanities and social sciences, bibliometric methods for the assessment of research performance are (so far) less common. This study uses a concrete example in an attempt to evaluate a research institute from the area of the social sciences and humanities with the help of data from Google Scholar (GS). In order to use GS for a bibliometric study, we developed procedures for the normalization of citation impact, building on the procedures of classical bibliometrics. In order to test the convergent validity of the normalized citation impact scores, we calculated normalized scores for a subset of the publications based on data from the Web of Science (WoS) and Scopus. Even though the scores calculated with the help of GS and of WoS/Scopus are not identical for the different publication types considered here, they are so similar that they result in the same assessment of the institute investigated in this study: for example, the institute's papers whose journals are covered in the WoS are cited at about an average rate (compared with the other papers in those journals).
  5. Leydesdorff, L.; Radicchi, F.; Bornmann, L.; Castellano, C.; Nooy, W. de: Field-normalized impact factors (IFs) : a comparison of rescaling and fractionally counted IFs (2013) 0.02
    0.021349952 = product of:
      0.042699903 = sum of:
        0.042699903 = product of:
          0.08539981 = sum of:
            0.08539981 = weight(_text_:journals in 1108) [ClassicSimilarity], result of:
              0.08539981 = score(doc=1108,freq=2.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.33285263 = fieldWeight in 1108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1108)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Two methods for comparing impact factors and citation rates across fields of science are tested against each other using citations to the 3,705 journals in the Science Citation Index 2010 (CD-ROM version of the SCI) and the 13 field categories used for the Science and Engineering Indicators of the U.S. National Science Board. We compare (a) normalization by counting citations in proportion to the length of the reference list (1/N of references) with (b) rescaling by dividing citation scores by the arithmetic mean of the citation rate of the cluster. Rescaling is analytical and therefore independent of the quality of the attribution to the sets, whereas fractional counting provides an empirical strategy for normalization among sets (by evaluating the between-group variance). By the fairness test of Radicchi and Castellano, rescaling outperforms fractional counting of citations for reasons that we consider.
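The rescaling method (b) above can be sketched in a few lines: divide each paper's citation count by the arithmetic mean of its cluster, so that every cluster ends up with a mean rescaled score of 1. Function and variable names are illustrative; the sketch assumes non-empty clusters with a positive mean citation rate.

```python
def rescale(citations_by_cluster):
    """Rescaling (sketch): divide each paper's citation count by the
    arithmetic mean of its cluster, so all clusters become directly
    comparable on a common scale with mean 1."""
    rescaled = {}
    for cluster, counts in citations_by_cluster.items():
        mean = sum(counts) / len(counts)  # assumed > 0
        rescaled[cluster] = [c / mean for c in counts]
    return rescaled

# A low-citation and a high-citation cluster:
rescale({"A": [2, 4], "B": [10, 30]})
# A -> [0.666..., 1.333...]; B -> [0.5, 1.5]; both cluster means are 1.0
```

Note the contrast with fractional counting: rescaling operates on the cited side (dividing by a cluster mean, which presupposes a given clustering), whereas fractional counting operates on the citing side (weighting by reference-list length, independent of any clustering).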
  6. Bornmann, L.: Interrater reliability and convergent validity of F1000Prime peer review (2015) 0.02
    0.021349952 = product of:
      0.042699903 = sum of:
        0.042699903 = product of:
          0.08539981 = sum of:
            0.08539981 = weight(_text_:journals in 2328) [ClassicSimilarity], result of:
              0.08539981 = score(doc=2328,freq=2.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.33285263 = fieldWeight in 2328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2328)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Peer review is the backbone of modern science. F1000Prime is a postpublication peer review system for the biomedical literature (papers from medical and biological journals). This study is concerned with the interrater reliability and convergent validity of the peer recommendations formulated in the F1000Prime peer review system. The study is based on about 100,000 papers with recommendations from faculty members. Although intersubjectivity plays a fundamental role in science, the analyses of the reliability of the F1000Prime peer review system show a rather low level of agreement between faculty members. This result is in agreement with most other studies that have been published on the journal peer review system. Logistic regression models are used to investigate the convergent validity of the F1000Prime peer review system. As the results show, the proportion of highly cited papers among those selected by the faculty members is significantly higher than expected. In addition, better recommendation scores are also associated with higher-performing papers.
  7. Leydesdorff, L.; Bornmann, L.: ¬The operationalization of "fields" as WoS subject categories (WCs) in evaluative bibliometrics : the cases of "library and information science" and "science & technology studies" (2016) 0.02
    0.021349952 = product of:
      0.042699903 = sum of:
        0.042699903 = product of:
          0.08539981 = sum of:
            0.08539981 = weight(_text_:journals in 2779) [ClassicSimilarity], result of:
              0.08539981 = score(doc=2779,freq=2.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.33285263 = fieldWeight in 2779, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2779)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Normalization of citation scores using reference sets based on Web of Science subject categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to carry bibliometric normalization in evaluation practices because of "indexer effects." Can the compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
  8. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016) 0.02
    0.021349952 = product of:
      0.042699903 = sum of:
        0.042699903 = product of:
          0.08539981 = sum of:
            0.08539981 = weight(_text_:journals in 3230) [ClassicSimilarity], result of:
              0.08539981 = score(doc=3230,freq=2.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.33285263 = fieldWeight in 3230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3230)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader-count data for publications from the year 2012 in the Web of Science. Example networks are shown and explained. Readers can use our base maps to visualize their own results with VOSviewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of using our base maps is that users do not need to produce a network based on all the data themselves (e.g., from one year); they can instead collect the Mendeley data for a single institution (or for journals or topics) and match them with the information we have already produced. The generation of such large-scale networks remains a demanding task despite the available computing power and digital data. It is therefore very useful to have base maps and to create the network with the overlay technique.
  9. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.02
    0.020769471 = product of:
      0.041538943 = sum of:
        0.041538943 = product of:
          0.083077885 = sum of:
            0.083077885 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.083077885 = score(doc=1239,freq=2.0), product of:
                0.17893866 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05109862 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 3.2014 19:13:22
  10. Bornmann, L.; Daniel, H.-D.: Selecting manuscripts for a high-impact journal through peer review : a citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere (2008) 0.02
    0.020128928 = product of:
      0.040257856 = sum of:
        0.040257856 = product of:
          0.08051571 = sum of:
            0.08051571 = weight(_text_:journals in 2381) [ClassicSimilarity], result of:
              0.08051571 = score(doc=2381,freq=4.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.3138165 = fieldWeight in 2381, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2381)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    All journals that use peer review have to deal with the following question: Does the peer review system fulfill its declared objective to select the best scientific work? We investigated the journal peer-review process at Angewandte Chemie International Edition (AC-IE), one of the prime chemistry journals worldwide, and conducted a citation analysis for Communications that were accepted by the journal (n = 878) or rejected but published elsewhere (n = 959). The results of negative binomial-regression models show that holding all other model variables constant, being accepted by AC-IE increases the expected number of citations by up to 50%. A comparison of average citation counts (with 95% confidence intervals) of accepted and rejected (but published elsewhere) Communications with international scientific reference standards was undertaken. As reference standards, (a) mean citation counts for the journal set provided by Thomson Reuters corresponding to the field chemistry and (b) specific reference standards that refer to the subject areas of Chemical Abstracts were used. When compared to reference standards, the mean impact on chemical research is for the most part far above average not only for accepted Communications but also for rejected (but published elsewhere) Communications. However, average and below-average scientific impact is to be expected significantly less frequently for accepted Communications than for rejected Communications. All in all, the results of this study confirm that peer review at AC-IE is able to select the best scientific work with the highest impact on chemical research.
  11. Marx, W.; Bornmann, L.; Cardona, M.: Reference standards and reference multipliers for the comparison of the citation impact of papers published in different time periods (2010) 0.02
    0.017791625 = product of:
      0.03558325 = sum of:
        0.03558325 = product of:
          0.0711665 = sum of:
            0.0711665 = weight(_text_:journals in 3998) [ClassicSimilarity], result of:
              0.0711665 = score(doc=3998,freq=2.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.2773772 = fieldWeight in 3998, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3998)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this study, reference standards and reference multipliers are suggested as a means to compare the citation impact of earlier research publications in physics (from the period of "Little Science" in the early 20th century) with that of contemporary papers (from the period of "Big Science," beginning around 1960). For the development of time-specific reference standards, the authors determined (a) the mean citation rates of papers in selected physics journals as well as (b) the mean citation rates of all papers in physics published in 1900 (Little Science) and in 2000 (Big Science); this was accomplished by relying on the processes of field-specific standardization in bibliometry. For the sake of developing reference multipliers with which the citation impact of earlier papers can be adjusted to the citation impact of contemporary papers, they combined the reference standards calculated for 1900 and 2000 into their ratio. The use of reference multipliers is demonstrated by means of two examples involving the time adjusted h index values for Max Planck and Albert Einstein.
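The reference-multiplier idea above reduces to a simple ratio: adjust an early paper's citation count by the contemporary reference standard (mean citation rate) divided by the historical one. The numbers in the example are invented for illustration, not the authors' measured standards.

```python
def time_adjusted_citations(citations, mean_rate_then, mean_rate_now):
    """Adjust an early paper's citation count with a reference
    multiplier: the ratio of the contemporary to the historical
    reference standard (mean citation rate). Sketch only."""
    return citations * (mean_rate_now / mean_rate_then)

# If the mean citation rate rose from 2 (1900, "Little Science") to
# 20 (2000, "Big Science"), a 1900 paper with 30 citations becomes
# 30 * (20 / 2) = 300 contemporary-equivalent citations.
time_adjusted_citations(30, 2.0, 20.0)  # 300.0
```

Applied to whole publication lists, such adjusted counts allow time-normalized indicators (the abstract's example is the h index) to be compared between authors from different eras.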
  12. Bornmann, L.; Schier, H.; Marx, W.; Daniel, H.-D.: Is interactive open access publishing able to identify high-impact submissions? : a study on the predictive validity of Atmospheric Chemistry and Physics by using percentile rank classes (2011) 0.02
    0.017791625 = product of:
      0.03558325 = sum of:
        0.03558325 = product of:
          0.0711665 = sum of:
            0.0711665 = weight(_text_:journals in 4132) [ClassicSimilarity], result of:
              0.0711665 = score(doc=4132,freq=2.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.2773772 = fieldWeight in 4132, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4132)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In a comprehensive research project, we investigated the predictive validity of selection decisions and reviewers' ratings at the open access journal Atmospheric Chemistry and Physics (ACP). ACP is a high-impact journal publishing papers on the Earth's atmosphere and the underlying chemical and physical processes. Scientific journals have to deal with the following question concerning predictive validity: are in fact the "best" scientific works selected from the manuscripts submitted? In this study we examined whether selecting the "best" manuscripts means selecting papers that after publication show top citation performance compared with other papers in this research area. First, we appraised the citation impact of the later published manuscripts on the basis of the percentile citedness rank classes of the population distribution (scaling within a specific subfield). Second, we analyzed the association between the decisions (n = 677 manuscripts, accepted or rejected but published elsewhere) or the ratings (reviewers' ratings for n = 315 manuscripts) and the citation impact classes of the manuscripts. The results confirm the predictive validity of the ACP peer review system.
  13. Bornmann, L.; Haunschild, R.: ¬An empirical look at the nature index (2017) 0.02
    0.017791625 = product of:
      0.03558325 = sum of:
        0.03558325 = product of:
          0.0711665 = sum of:
            0.0711665 = weight(_text_:journals in 3432) [ClassicSimilarity], result of:
              0.0711665 = score(doc=3432,freq=2.0), product of:
                0.25656942 = queryWeight, product of:
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.05109862 = queryNorm
                0.2773772 = fieldWeight in 3432, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.021064 = idf(docFreq=792, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3432)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In November 2014, the Nature Index (NI) was introduced by the Nature Publishing Group (NPG) (see http://www.natureindex.com). The NI comprises the primary research articles published in the past 12 months in a selection of reputable journals. Starting from two short comments on the NI (Haunschild & Bornmann, 2015a, 2015b), we undertake an empirical analysis of the NI using comprehensive country data. We investigate whether the huge effort of computing the NI is justified and whether the size-dependent NI indicators should be complemented by size-independent variants. The analysis uses data from the Max Planck Digital Library in-house database (which is based on Web of Science data) and from the NPG. In the first step of the analysis, we correlate the NI with other metrics that are simpler to generate than the NI. The resulting large correlation coefficients indicate that the NI produces results similar to those of simpler solutions. In the second step of the analysis, relative and size-independent variants of the NI are generated that should additionally be presented by the NPG. The size-dependent NI indicators favor large countries (or institutions), and the top-performing small countries (or institutions) do not come into the picture.
  14. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    0.0138463145 = product of:
      0.027692629 = sum of:
        0.027692629 = product of:
          0.055385258 = sum of:
            0.055385258 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.055385258 = score(doc=1431,freq=2.0), product of:
                0.17893866 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05109862 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2014 17:05:18
  15. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.01
    0.010384736 = product of:
      0.020769471 = sum of:
        0.020769471 = product of:
          0.041538943 = sum of:
            0.041538943 = weight(_text_:22 in 656) [ClassicSimilarity], result of:
              0.041538943 = score(doc=656,freq=2.0), product of:
                0.17893866 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05109862 = queryNorm
                0.23214069 = fieldWeight in 656, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=656)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2013 19:44:17
  16. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: ¬The relative influences of government funding and international collaboration on citation impact (2019) 0.01
    0.010384736 = product of:
      0.020769471 = sum of:
        0.020769471 = product of:
          0.041538943 = sum of:
            0.041538943 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
              0.041538943 = score(doc=4681,freq=2.0), product of:
                0.17893866 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05109862 = queryNorm
                0.23214069 = fieldWeight in 4681, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4681)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8. 1.2019 18:22:45