Search (3 results, page 1 of 1)

  • author_ss:"Waltman, L."
  • theme_ss:"Informetrie"
  1. Waltman, L.; Eck, N.J. van: The inconsistency of the h-index : the case of web accessibility in Western European countries (2012) 0.01
    0.0052265706 = product of:
      0.031359423 = sum of:
        0.031359423 = weight(_text_:web in 40) [ClassicSimilarity], result of:
          0.031359423 = score(doc=40,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 40, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=40)
      0.16666667 = coord(1/6)
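
    The indented blocks under each hit are Lucene "explain" traces for the query term "web", scored with the classic TF-IDF similarity. Purely as an illustration (the formulas below are the standard ClassicSimilarity ones, and the script and variable names are ours, not part of the catalogue), the following Python sketch reproduces the arithmetic of the first trace from the values it reports:

      # Minimal sketch of the ClassicSimilarity arithmetic shown in the explain tree above.
      import math

      freq, doc_freq, max_docs = 2.0, 4597, 44218        # termFreq, docFreq, maxDocs
      query_norm, field_norm = 0.044416238, 0.046875     # queryNorm, fieldNorm(doc=40)
      coord = 1.0 / 6.0                                   # coord(1/6)

      tf = math.sqrt(freq)                                # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))     # 3.2635105
      query_weight = idf * query_norm                     # 0.14495286 = queryWeight
      field_weight = tf * idf * field_norm                # 0.21634221 = fieldWeight
      term_score = query_weight * field_weight            # 0.031359423 = weight(_text_:web)
      print(term_score * coord)                           # 0.0052265706 = final score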
    
  2. Waltman, L.; Costas, R.: F1000 Recommendations as a potential new data source for research evaluation : a comparison with citations (2014) 0.01
    0.0052265706 = product of:
      0.031359423 = sum of:
        0.031359423 = weight(_text_:web in 1212) [ClassicSimilarity], result of:
          0.031359423 = score(doc=1212,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.21634221 = fieldWeight in 1212, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1212)
      0.16666667 = coord(1/6)
    
    Abstract
    F1000 is a postpublication peer review service for biological and medical research. F1000 recommends important publications in the biomedical literature, and from this perspective F1000 could be an interesting tool for research evaluation. By linking the complete database of F1000 recommendations to the Web of Science bibliographic database, we are able to make a comprehensive comparison between F1000 recommendations and citations. We find that about 2% of the publications in the biomedical literature receive at least one F1000 recommendation. Recommended publications on average receive 1.30 recommendations, and more than 90% of the recommendations are given within half a year after a publication has appeared. There turns out to be a clear correlation between F1000 recommendations and citations. However, the correlation is relatively weak, at least weaker than the correlation between journal impact and citations. More research is needed to identify the main reasons for differences between recommendations and citations in assessing the impact of publications.
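    As a toy illustration of the kind of comparison the abstract describes (the counts below are invented, and the paper's analysis is of course far more extensive), a rank correlation between recommendation counts and citation counts for a set of publications linked across the two databases could be computed like this:

      # Hypothetical sketch: rank correlation between F1000 recommendation counts
      # and citation counts for linked publications. All numbers are invented.
      from scipy.stats import spearmanr

      # (recommendations, citations) per linked publication
      linked_pubs = [(0, 3), (1, 12), (0, 5), (2, 40), (1, 8), (0, 1), (3, 55)]
      recs = [r for r, _ in linked_pubs]
      cites = [c for _, c in linked_pubs]

      rho, p_value = spearmanr(recs, cites)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")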
  3. Waltman, L.; Eck, N.J. van: A new methodology for constructing a publication-level classification system of science : keyword maps in Google Scholar Citations (2012) 0.00
    0.004355476 = product of:
      0.026132854 = sum of:
        0.026132854 = weight(_text_:web in 511) [ClassicSimilarity], result of:
          0.026132854 = score(doc=511,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 511, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=511)
      0.16666667 = coord(1/6)
    
    Abstract
    Classifying journals or publications into research areas is an essential element of many bibliometric analyses. Classification usually takes place at the level of journals, where the Web of Science subject categories are the most popular classification system. However, journal-level classification systems have two important limitations: They offer only a limited amount of detail, and they have difficulties with multidisciplinary journals. To avoid these limitations, we introduce a new methodology for constructing classification systems at the level of individual publications. In the proposed methodology, publications are clustered into research areas based on citation relations. The methodology is able to deal with very large numbers of publications. We present an application in which a classification system is produced that includes almost 10 million publications. Based on an extensive analysis of this classification system, we discuss the strengths and the limitations of the proposed methodology. Important strengths are the transparency and relative simplicity of the methodology and its fairly modest computing and memory requirements. The main limitation of the methodology is its exclusive reliance on direct citation relations between publications. The accuracy of the methodology can probably be increased by also taking into account other types of relations, for instance relations based on bibliographic coupling.
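
    The paper's own algorithm is a large-scale clustering of the direct citation network. Purely as a toy illustration of the idea (publication IDs and citation edges are invented, and a generic community-detection routine stands in for the authors' method), clustering publications into research areas from citation relations might look like this:

      # Toy illustration (not the paper's algorithm): treat direct citations as
      # edges of a graph and assign publications to research areas by community
      # detection. All publication IDs and edges below are hypothetical.
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Hypothetical direct citation pairs (citing_id, cited_id)
      citations = [
          ("p1", "p2"), ("p2", "p3"), ("p1", "p3"),   # one densely connected group
          ("p4", "p5"), ("p5", "p6"), ("p4", "p6"),   # another densely connected group
          ("p3", "p4"),                               # a weak bridge between the two
      ]

      # Citation direction is ignored here, so an undirected graph is used.
      graph = nx.Graph()
      graph.add_edges_from(citations)

      # Each detected community plays the role of a research area.
      for i, cluster in enumerate(greedy_modularity_communities(graph), start=1):
          print(f"research area {i}: {sorted(cluster)}")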