Search (25 results, page 1 of 2)

  • author_ss:"Bornmann, L."
  1. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.02
    0.020136671 = product of:
      0.040273342 = sum of:
        0.040273342 = product of:
          0.06041001 = sum of:
            0.029749434 = weight(_text_:c in 4186) [ClassicSimilarity], result of:
              0.029749434 = score(doc=4186,freq=2.0), product of:
                0.15612034 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045260075 = queryNorm
                0.1905545 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4186)
            0.030660577 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
              0.030660577 = score(doc=4186,freq=2.0), product of:
                0.15849307 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045260075 = queryNorm
                0.19345059 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4186)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
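The leaf scores in the explain tree above follow Lucene's ClassicSimilarity formula: each term's score is queryWeight (idf × queryNorm) times fieldWeight (√tf × idf × fieldNorm), and the leaves are then combined with the coord() factors. A minimal sketch that reproduces the top-level score of result 1 (the function name is illustrative; all constants are copied from the tree):

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """One leaf of a Lucene ClassicSimilarity explain tree."""
    query_weight = idf * query_norm                   # e.g. 3.4494052 * 0.045260075
    field_weight = math.sqrt(freq) * idf * field_norm # tf(2.0) = 1.4142135
    return query_weight * field_weight

# Leaf scores for doc 4186, constants copied from the explain tree
s_c  = classic_term_score(2.0, 3.4494052, 0.045260075, 0.0390625)
s_22 = classic_term_score(2.0, 3.5018296, 0.045260075, 0.0390625)

# coord(2/3): two of three query terms matched; coord(1/2): one of two clauses
total = (s_c + s_22) * (2 / 3) * 0.5
print(total)  # ≈ 0.020136671, the score shown next to the title
```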
    
    Abstract
    The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science that results from systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and would thus take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set, including journals or groups of journals, so that the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls. The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs greatly among disciplines, the reflection of these differences in fractionally counted citation distributions cannot be used as a reliable instrument for classification.
    Date
    22. 1.2011 12:51:07
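Point (a) of the abstract above, normalizing citations on the citing side, can be illustrated with a small sketch: each citation is weighted 1/N, where N is the length of the citing document's reference list, so citations from reference-heavy fields count for less (the function and the numbers are hypothetical, not taken from the paper's data):

```python
def fractional_count(citing_ref_list_lengths):
    """Fractionally counted citations: each citation contributes
    1/N, where N is the citing document's number of references."""
    return sum(1.0 / n for n in citing_ref_list_lengths)

# A paper cited by three documents whose reference lists hold
# 10, 25, and 50 items, respectively
whole = 3                              # whole-number citation count
frac = fractional_count([10, 25, 50])  # 0.1 + 0.04 + 0.02
print(whole, round(frac, 2))           # 3 0.16
```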
  2. Bornmann, L.; Daniel, H.-D.: Universality of citation distributions : a validation of Radicchi et al.'s relative indicator cf = c/c0 at the micro level using data from chemistry (2009) 0.02
  3. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.01
    Date
    18. 3.2014 19:13:22
  4. Besselaar, P. van den; Wagner, C.; Bornmann, L.: Correct assumptions? (2016) 0.01
  5. Leydesdorff, L.; Wagner, C.; Bornmann, L.: Replicability and the public/private divide (2016) 0.01
  6. Bornmann, L.; Daniel, H.-D.: What do we know about the h index? (2007) 0.01
    Abstract
    Jorge Hirsch recently proposed the h index to quantify the research output of individual scientists. The new index has attracted a great deal of attention in the scientific community. The claim that the h index provides, in a single number, a good representation of a scientist's lifetime scientific achievement, together with the (supposedly) simple calculation of the h index from common literature databases, creates a danger of improper use of the index. We describe the advantages and disadvantages of the h index and summarize the studies on the convergent validity of this index. We also introduce corrections and complements as well as single-number alternatives to the h index.
    Object
    H-Index
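The definition in the abstract above, a scientist has index h if h of their papers have at least h citations each, can be sketched in a few lines (a hypothetical helper, with invented citation counts):

```python
def h_index(citations):
    """Largest h such that h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank   # still rank papers with >= rank citations each
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers cited at least 4 times
print(h_index([25, 8, 5, 3, 3]))  # 3
```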
  7. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    Date
    22. 8.2014 17:05:18
  8. Collins, H.; Bornmann, L.: On scientific misconduct (2014) 0.01
  9. Bornmann, L.: How to analyze percentile citation impact data meaningfully in bibliometrics : the statistical analysis of distributions, percentile rank classes, and top-cited papers (2013) 0.01
    Date
    22. 3.2013 19:44:17
  10. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: ¬The relative influences of government funding and international collaboration on citation impact (2019) 0.01
    Date
    8. 1.2019 18:22:45
  11. Leydesdorff, L.; Radicchi, F.; Bornmann, L.; Castellano, C.; Nooy, W. de: Field-normalized impact factors (IFs) : a comparison of rescaling and fractionally counted IFs (2013) 0.01
  12. Bornmann, L.; Mutz, R.; Daniel, H.-D.: Are there better indices for evaluation purposes than the h index? : a comparison of nine different variants of the h index using data from biomedicine (2008) 0.01
    Abstract
    In this study, we examined empirical results on the h index and its most important variants in order to determine whether the variants developed are associated with an incremental contribution for evaluation purposes. The results of a factor analysis using bibliographic data on postdoctoral researchers in biomedicine indicate that regarding the h index and its variants, we are dealing with two types of indices that load on one factor each. One type describes the most productive core of a scientist's output and gives the number of papers in that core. The other type of indices describes the impact of the papers in the core. Because an index for evaluative purposes is a useful yardstick for comparison among scientists if the index corresponds strongly with peer assessments, we calculated a logistic regression analysis with the two factors resulting from the factor analysis as independent variables and peer assessment of the postdoctoral researchers as the dependent variable. The results of the regression analysis show that peer assessments can be predicted better using the factor impact of the productive core than using the factor quantity of the productive core.
  13. Bornmann, L.; Mutz, R.; Daniel, H.D.: Do we need the h index and its variants in addition to standard bibliometric measures? (2009) 0.01
    Abstract
    In this study, we investigate whether there is a need for the h index and its variants in addition to standard bibliometric measures (SBMs). Results from our recent study (L. Bornmann, R. Mutz, & H.-D. Daniel, 2008) have indicated that there are two types of indices: one type (e.g., the h index) describes the most productive core of a scientist's output and indicates the number of papers in the core; the other type (e.g., the a index) depicts the impact of the papers in the core. In evaluative bibliometric studies, the two dimensions quantity and quality of output are usually assessed using the SBMs number of publications (for the quantity dimension) and total citation counts (for the impact dimension). We additionally included the SBMs in the factor analysis. The results of the newly calculated analysis indicate that there is a high intercorrelation between number of publications and the indices that load substantially on the factor Quantity of the Productive Core, as well as between total citation counts and the indices that load substantially on the factor Impact of the Productive Core. The high-loading indices and SBMs within one performance dimension could be called redundant in empirical application, as high intercorrelations between different indicators are a sign that they measure something similar (or the same). Based on our findings, we propose the use of any pair of indicators (one relating to the number of papers in a researcher's productive core and one relating to the impact of these core papers) as a meaningful approach for comparing scientists.
    Object
    h-Index
  14. Bornmann, L.; Marx, W.: ¬The Anna Karenina principle : a way of thinking about success in science (2012) 0.00
    Abstract
    The first sentence of Leo Tolstoy's (1875-1877/2001) novel Anna Karenina is: "Happy families are all alike; every unhappy family is unhappy in its own way." Here, Tolstoy means that for a family to be happy, several key aspects must be present (e.g., good health of all family members, acceptable financial security, and mutual affection). If there is a deficiency in any one or more of these key aspects, the family will be unhappy. In this article, we introduce the Anna Karenina principle as a way of thinking about success in three central areas of (modern) science: (a) peer review of research grant proposals and manuscripts (money and journal space as scarce resources), (b) citation of publications (reception as a scarce resource), and (c) new scientific discoveries (recognition as a scarce resource). If resources are scarce at the highly competitive research front (journal space, funds, reception, and recognition), there can be success only when several key prerequisites for the allocation of the resources are fulfilled. If any one of these prerequisites is not fulfilled, the grant proposal, manuscript submission, the published paper, or the discovery will not be successful.
  15. Bornmann, L.; Wagner, C.; Leydesdorff, L.: BRICS countries and scientific excellence : a bibliometric analysis of most frequently cited papers (2015) 0.00
  16. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016) 0.00
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
  17. Leydesdorff, L.; Bornmann, L.; Mingers, J.: Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden rankings (2019) 0.00
    Abstract
    The Leiden Rankings can be used for grouping research universities by considering universities which are not statistically significantly different as homogeneous sets. The groups and intergroup relations can be analyzed and visualized using tools from network analysis. Using the so-called "excellence indicator" PPtop-10%-the proportion of the top-10% most-highly-cited papers assigned to a university-we pursue a classification using (a) overlapping stability intervals, (b) statistical-significance tests, and (c) effect sizes of differences among 902 universities in 54 countries; we focus on the UK, Germany, Brazil, and the USA as national examples. Although the groupings remain largely the same using different statistical significance levels or overlapping stability intervals, these classifications are uncorrelated with those based on effect sizes. Effect sizes for the differences between universities are small (w < .2). The more detailed analysis of universities at the country level suggests that distinctions beyond three or perhaps four groups of universities (high, middle, low) may not be meaningful. Given similar institutional incentives, isomorphism within each eco-system of universities should not be underestimated. Our results suggest that networks based on overlapping stability intervals can provide a first impression of the relevant groupings among universities. However, the clusters are not well-defined divisions between groups of universities.
  18. Bornmann, L.; Schier, H.; Marx, W.; Daniel, H.-D.: Is interactive open access publishing able to identify high-impact submissions? : a study on the predictive validity of Atmospheric Chemistry and Physics by using percentile rank classes (2011) 0.00
  19. Bornmann, L.: Lässt sich die Qualität von Forschung messen? (2013) 0.00
    Abstract
    In principle, evaluation in science takes two forms: a "qualitative" form, the assessment of a piece of scientific work (e.g., a manuscript or grant proposal) by competent peers, and a "quantitative" form, the assessment of scientific work by means of bibliometric indicators. Neither form of evaluation is uncontroversial. Critics of peer review see two main weaknesses in the procedure: (1) different reviewers rarely agree in their assessment of one and the same piece of scientific work, and (2) reviewers' recommendations exhibit systematic judgment biases. Numerous objections have likewise been raised for years against the use of citation counts as an indicator of the quality of scientific work. Citation counts, it is argued, are not "objective" measurements of scientific quality but a contestable measurement construct. Among other things, critics hold that scientific quality is a complex phenomenon that cannot be measured on a one-dimensional scale (i.e., by citation counts). The paper presents empirical findings on the reliability and fairness of the peer review process as well as research results on the validity of citation counts as an indicator of scientific quality.
  20. Bornmann, L.; Daniel, H.-D.: Multiple publication on a single research study: does it pay? : The influence of number of research articles on total citation counts in biomedicine (2007) 0.00