Search (13 results, page 1 of 1)

  • author_ss:"Waltman, L."
  1. Waltman, L.; Eck, N.J. van: The inconsistency of the h-index (2012) 0.08
    Abstract
    The h-index is a popular bibliometric indicator for assessing individual scientists. We criticize the h-index from a theoretical point of view. We argue that for the purpose of measuring the overall scientific impact of a scientist (or some other unit of analysis), the h-index behaves in a counterintuitive way. In certain cases, the mechanism used by the h-index to aggregate publication and citation statistics into a single number leads to inconsistencies in the way in which scientists are ranked. Our conclusion is that the h-index cannot be considered an appropriate indicator of a scientist's overall scientific impact. Based on recent theoretical insights, we discuss what kind of indicators can be used as an alternative to the h-index. We pay special attention to the highly cited publications indicator. This indicator has a lot in common with the h-index, but unlike the h-index it does not produce inconsistent rankings.
    Object
    h-index
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.2, pp.406-415
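    Example
    A minimal sketch of the two indicators discussed above, following their standard definitions. The scientists X and Y below are hypothetical numbers chosen to reproduce the kind of ranking reversal the paper describes, not an example taken from the paper: both scientists make the identical improvement (five new publications with 100 citations each), yet their h-index ranking flips.

    def h_index(citations):
        # Largest h such that at least h publications have >= h citations each.
        h = 0
        for rank, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= rank:
                h = rank
        return h

    def highly_cited(citations, threshold):
        # Highly cited publications indicator: publications cited more than
        # `threshold` times (the threshold is a free parameter here).
        return sum(1 for c in citations if c > threshold)

    x = [10] * 10            # h_index(x) = 10
    y = [20] * 6             # h_index(y) = 6, so X ranks above Y
    improvement = [100] * 5  # the identical improvement for both scientists
    print(h_index(x + improvement), h_index(y + improvement))  # 10 11: reversed
    # highly_cited adds the same amount for both scientists, so it can never
    # reverse their ranking under an identical improvement.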
  2. Eck, N.J. van; Waltman, L.: How to normalize cooccurrence data? : an analysis of some well-known similarity measures (2009) 0.06
    Abstract
    In scientometric research, the use of cooccurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this article, we theoretically analyze the properties of similarity measures for cooccurrence data, focusing in particular on four well-known measures: the association strength, the cosine, the inclusion index, and the Jaccard index. We also study the behavior of these measures empirically. Our analysis reveals that there exist two fundamentally different types of similarity measures, namely, set-theoretic measures and probabilistic measures. The association strength is a probabilistic measure, while the cosine, the inclusion index, and the Jaccard index are set-theoretic measures. Both our theoretical and our empirical results indicate that cooccurrence data can best be normalized using a probabilistic measure. This provides strong support for the use of the association strength in scientometric research.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.8, pp.1635-1651
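    Example
    The four measures, sketched for a single pair of objects: c_ij is the number of cooccurrences of objects i and j, and s_i and s_j are their total numbers of occurrences. The formulas follow the usual definitions (the association strength up to a multiplicative constant); the counts in the usage lines are hypothetical.

    import math

    def association_strength(c_ij, s_i, s_j):
        # Probabilistic: observed cooccurrences relative to the number
        # expected if i and j occurred independently of each other.
        return c_ij / (s_i * s_j)

    def cosine(c_ij, s_i, s_j):
        return c_ij / math.sqrt(s_i * s_j)

    def inclusion_index(c_ij, s_i, s_j):
        return c_ij / min(s_i, s_j)

    def jaccard_index(c_ij, s_i, s_j):
        return c_ij / (s_i + s_j - c_ij)

    # Two keywords that occur 120 and 80 times and cooccur 30 times:
    for measure in (association_strength, cosine, inclusion_index, jaccard_index):
        print(measure.__name__, measure(30, 120, 80))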
  3. Waltman, L.; Eck, N.J. van; Raan, A.F.J. van: Universality of citation distributions revisited (2012) 0.02
    Abstract
    Radicchi, Fortunato, and Castellano (2008) claim that, apart from a scaling factor, all fields of science are characterized by the same citation distribution. We present a large-scale validation study of this universality-of-citation-distributions claim. Our analysis shows that claiming citation distributions to be universal for all fields of science is not warranted. Although many fields indeed seem to have fairly similar citation distributions, there are exceptions as well. We also briefly discuss the consequences of our findings for the measurement of scientific impact using citation-based bibliometric indicators.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.1, pp.72-77
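    Example
    The universality claim concerns citation counts rescaled by the field average, c/<c>. Below is a minimal probe of that claim, assuming two small hypothetical fields and using a two-sample Kolmogorov-Smirnov test as a simple stand-in for the paper's much more careful validation methodology.

    from scipy.stats import ks_2samp

    def rescale(citations):
        # Relative citation scores c / <c> for one field.
        mean = sum(citations) / len(citations)
        return [c / mean for c in citations]

    # Hypothetical citation counts for publications from two fields:
    field_a = [0, 1, 1, 2, 3, 5, 8, 20, 40]
    field_b = [0, 0, 1, 2, 2, 4, 9, 15, 70]

    # Under the universality claim, the rescaled distributions should be
    # indistinguishable; a small p-value would count against the claim.
    stat, p = ks_2samp(rescale(field_a), rescale(field_b))
    print(f"KS statistic {stat:.3f}, p-value {p:.3f}")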
  4. Sjögårde, P.; Ahlgren, P.; Waltman, L.: Algorithmic labeling in hierarchical classifications of publications : evaluation of bibliographic fields and term weighting approaches (2021) 0.01
    Abstract
    Algorithmic classifications of research publications can be used to study many different aspects of the science system, such as the organization of science into fields, the growth of fields, interdisciplinarity, and emerging topics. How to label the classes in these classifications is a problem that has not been thoroughly addressed in the literature. In this study, we evaluate different approaches to label the classes in algorithmically constructed classifications of research publications. We focus on two important choices: the choice of (a) different bibliographic fields and (b) different approaches to weight the relevance of terms. To evaluate the different choices, we created two baselines: one based on the Medical Subject Headings in MEDLINE and another based on the Science-Metrix journal classification. We tested to what extent different approaches yield the desired labels for the classes in the two baselines. Based on our results, we recommend extracting terms from titles and keywords to label classes at high levels of granularity (e.g., topics). At low levels of granularity (e.g., disciplines) we recommend extracting terms from journal names and author addresses. We recommend the use of a new approach, term frequency to specificity ratio, to calculate the relevance of terms.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.7, pp.853-869
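    Example
    The paper defines its own relevance measure, the term frequency to specificity ratio; its exact formula is not reproduced here. The sketch below is only a generic baseline in the same spirit, ranking candidate label terms by class-internal term frequency relative to corpus-wide document frequency. All data are hypothetical.

    from collections import Counter

    def label_class(class_docs, corpus_docs, top_k=3):
        # Rank candidate label terms by class-internal term frequency divided
        # by corpus-wide document frequency. A generic baseline only, not the
        # paper's term frequency to specificity ratio.
        class_tf = Counter(t for doc in class_docs for t in doc)
        corpus_df = Counter(t for doc in corpus_docs for t in set(doc))
        scores = {t: tf / corpus_df[t] for t, tf in class_tf.items()}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    # Hypothetical tokenized titles/keywords; one class within a small corpus.
    cls = [["citation", "impact", "indicator"], ["citation", "normalization"]]
    corpus = cls + [["retrieval", "query"], ["citation", "retrieval"]]
    print(label_class(cls, corpus))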
  5. Waltman, L.; Eck, N.J. van: A new methodology for constructing a publication-level classification system of science (2012) 0.01
    Abstract
    Classifying journals or publications into research areas is an essential element of many bibliometric analyses. Classification usually takes place at the level of journals, where the Web of Science subject categories are the most popular classification system. However, journal-level classification systems have two important limitations: They offer only a limited amount of detail, and they have difficulties with multidisciplinary journals. To avoid these limitations, we introduce a new methodology for constructing classification systems at the level of individual publications. In the proposed methodology, publications are clustered into research areas based on citation relations. The methodology is able to deal with very large numbers of publications. We present an application in which a classification system is produced that includes almost 10 million publications. Based on an extensive analysis of this classification system, we discuss the strengths and the limitations of the proposed methodology. Important strengths are the transparency and relative simplicity of the methodology and its fairly modest computing and memory requirements. The main limitation of the methodology is its exclusive reliance on direct citation relations between publications. The accuracy of the methodology can probably be increased by also taking into account other types of relations, for instance relations based on bibliographic coupling.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.12, pp.2378-2392
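    Example
    A toy version of the pipeline, assuming a handful of hypothetical publications and citation links. The paper applies its own large-scale clustering technique to direct citation relations; networkx's greedy modularity communities merely stand in for it here.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Hypothetical (citing, cited) pairs; the network is treated as undirected.
    citations = [("p1", "p2"), ("p2", "p3"), ("p1", "p3"),
                 ("p4", "p5"), ("p5", "p6"), ("p4", "p6"), ("p3", "p4")]
    G = nx.Graph(citations)

    # Cluster publications into research areas based on citation relations.
    for i, cluster in enumerate(greedy_modularity_communities(G)):
        print(f"research area {i}: {sorted(cluster)}")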
  6. Waltman, L.; Schreiber, M.: On the calculation of percentile-based bibliometric indicators (2013) 0.01
    Abstract
    A percentile-based bibliometric indicator is an indicator that values publications based on their position within the citation distribution of their field. The most straightforward percentile-based indicator is the proportion of frequently cited publications, for instance, the proportion of publications that belong to the top 10% most frequently cited of their field. Recently, more complex percentile-based indicators have been proposed. A difficulty in the calculation of percentile-based indicators is caused by the discrete nature of citation distributions combined with the presence of many publications with the same number of citations. We introduce an approach to calculating percentile-based indicators that deals with this difficulty in a more satisfactory way than earlier approaches suggested in the literature. We show in a formal mathematical framework that our approach leads to indicators that do not suffer from biases in favor of or against particular fields of science.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.2, pp.372-379
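    Example
    A sketch of the tie-handling idea, with hypothetical data: publications strictly above the threshold count fully toward the top-10% class, and publications tied at the threshold receive a fractional score so that the scores sum to exactly 10% of the field. The paper's construction is more general, but this is the property that naive approaches fail to guarantee.

    import math

    def top_share_scores(citations, share=0.10):
        # Fractionally count publications toward the top-`share` class so the
        # scores sum to exactly share * n, even when citation counts are tied.
        n = len(citations)
        target = share * n
        ranked = sorted(citations, reverse=True)
        threshold = ranked[math.ceil(target) - 1]
        above = sum(1 for c in citations if c > threshold)
        tied = sum(1 for c in citations if c == threshold)
        tied_score = (target - above) / tied
        return [1.0 if c > threshold else tied_score if c == threshold else 0.0
                for c in citations]

    cites = [12, 12, 9, 7, 5, 4, 3, 2, 1, 0]
    scores = top_share_scores(cites)
    print(scores, sum(scores))  # the two tied publications share the one slot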
  7. Waltman, L.; Costas, R.: F1000 Recommendations as a potential new data source for research evaluation : a comparison with citations (2014) 0.01
    Abstract
    F1000 is a postpublication peer review service for biological and medical research. F1000 recommends important publications in the biomedical literature, and from this perspective F1000 could be an interesting tool for research evaluation. By linking the complete database of F1000 recommendations to the Web of Science bibliographic database, we are able to make a comprehensive comparison between F1000 recommendations and citations. We find that about 2% of the publications in the biomedical literature receive at least one F1000 recommendation. Recommended publications on average receive 1.30 recommendations, and more than 90% of the recommendations are given within half a year after a publication has appeared. There turns out to be a clear correlation between F1000 recommendations and citations. However, the correlation is relatively weak, at least weaker than the correlation between journal impact and citations. More research is needed to identify the main reasons for differences between recommendations and citations in assessing the impact of publications.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.3, pp.433-445
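    Example
    The paper's central comparison in miniature, with entirely hypothetical per-publication data: Spearman correlations of citation counts with F1000 recommendation counts and with the impact of the publishing journal.

    from scipy.stats import spearmanr

    # Hypothetical data: recommendation counts, citation counts, and the
    # impact factor of the journal each publication appeared in.
    recs  = [0, 0, 1, 0, 2, 0, 1, 3, 0, 0]
    cites = [2, 5, 14, 1, 30, 4, 8, 55, 3, 6]
    jif   = [2.1, 4.7, 9.8, 1.3, 12.0, 3.2, 5.5, 30.0, 2.8, 4.0]

    # The pattern the paper reports: recommendations correlate with citations,
    # but more weakly than journal impact does.
    print("recommendations vs citations:", spearmanr(recs, cites).correlation)
    print("journal impact vs citations: ", spearmanr(jif, cites).correlation)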
  8. Colavizza, G.; Boyack, K.W.; Eck, N.J. van; Waltman, L.: The closer the better : similarity of publication pairs at different cocitation levels (2018) 0.01
    Abstract
    We investigated the similarities of pairs of articles that are cocited at the different cocitation levels of the journal, article, section, paragraph, sentence, and bracket. Our results indicate that textual similarity, intellectual overlap (shared references), author overlap (shared authors), and proximity in publication time all rise monotonically as the cocitation level gets lower (from journal to bracket). While the main gain in similarity happens when moving from journal to article cocitation, all level changes entail an increase in similarity, especially section to paragraph and paragraph to sentence/bracket levels. We compared the results from four journals over the years 2010-2015: Cell, the European Journal of Operational Research, Physics Letters B, and Research Policy, with consistent general outcomes and some interesting differences. Our findings motivate the use of granular cocitation information as defined by meaningful units of text, with implications for, among others, the elaboration of maps of science and the retrieval of scholarly literature.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.4, pp.600-609
  9. Waltman, L.; Eck, N.J. van: Some comments on the question whether co-occurrence data should be normalized (2007) 0.01
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.11, pp.1701-1703
  10. Eck, N.J. van; Waltman, L.: Appropriate similarity measures for author co-citation analysis (2008) 0.01
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.10, pp.1653-1661
  11. Waltman, L.; Eck, N.J. van: The relation between eigenfactor, audience factor, and influence weight (2010) 0.01
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.7, pp.1476-1486
  12. Eck, N.J. van; Waltman, L.; Dekker, R.; Berg, J. van den: A comparison of two techniques for bibliometric mapping : multidimensional scaling and VOS (2010) 0.01
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, pp.2405-2416
  13. Waltman, L.; Calero-Medina, C.; Kosten, J.; Noyons, E.C.M.; Tijssen, R.J.W.; Eck, N.J. van; Leeuwen, T.N. van; Raan, A.F.J. van; Visser, M.S.; Wouters, P.: The Leiden ranking 2011/2012 : data collection, indicators, and interpretation (2012) 0.01
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.12, pp.2405-2418