Search (3 results, page 1 of 1)

  • author_ss:"Waltman, L."
  • theme_ss:"Informetrie"
  1. Waltman, L.; Eck, N.J. van: The inconsistency of the h-index (2012) 0.03
    Abstract
    The h-index is a popular bibliometric indicator for assessing individual scientists. We criticize the h-index from a theoretical point of view. We argue that for the purpose of measuring the overall scientific impact of a scientist (or some other unit of analysis), the h-index behaves in a counterintuitive way. In certain cases, the mechanism used by the h-index to aggregate publication and citation statistics into a single number leads to inconsistencies in the way in which scientists are ranked. Our conclusion is that the h-index cannot be considered an appropriate indicator of a scientist's overall scientific impact. Based on recent theoretical insights, we discuss what kind of indicators can be used as an alternative to the h-index. We pay special attention to the highly cited publications indicator. This indicator has a lot in common with the h-index, but unlike the h-index it does not produce inconsistent rankings.
    Object
    h-index
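    The aggregation mechanism criticized in the abstract above is easy to make concrete. Below is a minimal Python sketch of the h-index and of a fixed-threshold highly cited publications indicator; the two hypothetical scientists, their citation counts, and the threshold of 9 are invented for illustration and are not taken from the paper.

      def h_index(citations):
          # Largest h such that at least h publications have h or more citations.
          ranked = sorted(citations, reverse=True)
          return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

      def highly_cited(citations, threshold=9):
          # Highly cited publications indicator: publications with at least
          # `threshold` citations; the threshold of 9 is an arbitrary example choice.
          return sum(1 for cites in citations if cites >= threshold)

      a = [6] * 6     # hypothetical scientist A: six papers, six citations each
      b = [9] * 5     # hypothetical scientist B: five papers, nine citations each
      gain = [9] * 4  # identical new output for both: four papers, nine citations each

      print(h_index(a), h_index(b))                          # 6 5 -> A ranks above B
      print(h_index(a + gain), h_index(b + gain))            # 6 9 -> the ranking flips
      print(highly_cited(a + gain), highly_cited(b + gain))  # 4 9 -> B stays ahead, as before

    Because the highly cited publications indicator is a plain count, giving two scientists identical new publications raises both values by the same amount and can never reverse their ranking; as the printed output shows, the same identical addition can flip an h-index ranking.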
  2. Waltman, L.; Eck, N.J. van: A new methodology for constructing a publication-level classification system of science (2012) 0.01
    Abstract
    Classifying journals or publications into research areas is an essential element of many bibliometric analyses. Classification usually takes place at the level of journals, where the Web of Science subject categories are the most popular classification system. However, journal-level classification systems have two important limitations: They offer only a limited amount of detail, and they have difficulties with multidisciplinary journals. To avoid these limitations, we introduce a new methodology for constructing classification systems at the level of individual publications. In the proposed methodology, publications are clustered into research areas based on citation relations. The methodology is able to deal with very large numbers of publications. We present an application in which a classification system is produced that includes almost 10 million publications. Based on an extensive analysis of this classification system, we discuss the strengths and the limitations of the proposed methodology. Important strengths are the transparency and relative simplicity of the methodology and its fairly modest computing and memory requirements. The main limitation of the methodology is its exclusive reliance on direct citation relations between publications. The accuracy of the methodology can probably be increased by also taking into account other types of relations, for instance relations based on bibliographic coupling.
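    As a rough illustration of the publication-level idea described above, the sketch below clusters an invented toy citation graph into research areas using networkx's greedy modularity clustering. This is a stand-in, not the paper's method: the paper uses its own modularity-based clustering designed to scale to roughly 10 million publications, and all publication IDs and links here are made up.

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Invented direct-citation pairs (citing publication, cited publication).
      citation_pairs = [
          ("p1", "p2"), ("p1", "p3"), ("p2", "p3"),  # one densely linked group
          ("p4", "p5"), ("p4", "p6"), ("p5", "p6"),  # another densely linked group
          ("p3", "p4"),                              # a single link between the groups
      ]

      # Treat citation links as undirected edges and cluster by modularity.
      graph = nx.Graph(citation_pairs)
      clusters = greedy_modularity_communities(graph)

      for area, members in enumerate(clusters, start=1):
          print(f"research area {area}: {sorted(members)}")
      # Prints two three-publication research areas, one per triangle
      # (their order may vary).

    Only direct citation links are used here, mirroring the limitation the abstract notes; edges derived from other relations, such as bibliographic coupling, could be added to the same graph.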
  3. Hicks, D.; Wouters, P.; Waltman, L.; Rijcke, S. de; Rafols, I.: The Leiden Manifesto for research metrics : 10 principles to guide research evaluation (2015) 0.01
    Abstract
    Research evaluation has become routine and often relies on metrics, but it is increasingly driven by data rather than by expert judgement. As a result, procedures that were designed to increase the quality of research now threaten to damage the scientific system. To support researchers and managers, five experts led by Diana Hicks, professor in the School of Public Policy at the Georgia Institute of Technology, and Paul Wouters, director of CWTS at Leiden University, have proposed ten principles for the measurement of research performance: the Leiden Manifesto for Research Metrics, published as a comment in Nature.