Search (4 results, page 1 of 1)

  • author_ss:"Eck, N.J. van"
  1. Eck, N.J. van; Waltman, L.; Dekker, R.; Berg, J. van den: A comparison of two techniques for bibliometric mapping : multidimensional scaling and VOS (2010) 0.06
    0.064961776 = product of:
      0.12992355 = sum of:
        0.12992355 = product of:
          0.2598471 = sum of:
            0.2598471 = weight(_text_:maps in 4112) [ClassicSimilarity], result of:
              0.2598471 = score(doc=4112,freq=12.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.9124516 = fieldWeight in 4112, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4112)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
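
    How this score is assembled: the tree above is Lucene's standard "explain" output for its classic TF-IDF similarity, and coord(1/2) is the coordination factor for matching one of two query clauses. Below is a minimal sketch of the arithmetic, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))); the function name is ours and purely illustrative, and queryNorm is taken as given from the output rather than recomputed:

      import math

      def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # Reconstructs the explain tree for doc 4112 above; this is not the
          # Lucene API, just the arithmetic the explanation reports.
          tf = math.sqrt(freq)                             # 3.4641016 for freq = 12.0
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 5.619245 for docFreq=435, maxDocs=44218
          query_weight = idf * query_norm                  # 0.28477904 = queryWeight
          field_weight = tf * idf * field_norm             # 0.9124516 = fieldWeight
          term_weight = query_weight * field_weight        # 0.2598471 = weight(_text_:maps in 4112)
          return term_weight * 0.5 * 0.5                   # two coord(1/2) factors -> 0.064961776

      print(classic_similarity_score(12.0, 435, 44218, 0.050679237, 0.046875))

    The same arithmetic reproduces the three scores below: only freq (12.0 here versus 2.0) and fieldNorm (a document-length normalization: 0.046875 here versus 0.0546875 and 0.0390625) change across the four results.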
    
    Abstract
    VOS is a new mapping technique that can serve as an alternative to the well-known technique of multidimensional scaling (MDS). We present an extensive comparison between the use of MDS and the use of VOS for constructing bibliometric maps. In our theoretical analysis, we show the mathematical relation between the two techniques. In our empirical analysis, we use the techniques for constructing maps of authors, journals, and keywords. Two commonly used approaches to bibliometric mapping, both based on MDS, turn out to produce maps that suffer from artifacts. Maps constructed using VOS turn out not to have this problem. We conclude that in general maps constructed using VOS provide a more satisfactory representation of a dataset than maps constructed using well-known MDS approaches.
  2. Waltman, L.; Eck, N.J. van: Some comments on the question whether co-occurrence data should be normalized (2007) 0.03
    0.030940626 = product of:
      0.06188125 = sum of:
        0.06188125 = product of:
          0.1237625 = sum of:
            0.1237625 = weight(_text_:maps in 583) [ClassicSimilarity], result of:
              0.1237625 = score(doc=583,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.43459132 = fieldWeight in 583, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=583)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In a recent article in JASIST, L. Leydesdorff and L. Vaughan (2006) asserted that raw cocitation data should be analyzed directly, without first applying a normalization such as the Pearson correlation. In this communication, it is argued that there is nothing wrong with the widely adopted practice of normalizing cocitation data. One of the arguments put forward by Leydesdorff and Vaughan turns out to depend crucially on incorrect multidimensional scaling maps that are due to an error in the PROXSCAL program in SPSS.
  3. Colavizza, G.; Boyack, K.W.; Eck, N.J. van; Waltman, L.: The closer the better : similarity of publication pairs at different cocitation levels (2018) 0.03
    0.026520537 = product of:
      0.053041074 = sum of:
        0.053041074 = product of:
          0.10608215 = sum of:
            0.10608215 = weight(_text_:maps in 4214) [ClassicSimilarity], result of:
              0.10608215 = score(doc=4214,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.37250686 = fieldWeight in 4214, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4214)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We investigated the similarities of pairs of articles that are cocited at the different cocitation levels of the journal, article, section, paragraph, sentence, and bracket. Our results indicate that textual similarity, intellectual overlap (shared references), author overlap (shared authors), and proximity in publication time all rise monotonically as the cocitation level gets lower (from journal to bracket). While the main gain in similarity happens when moving from journal to article cocitation, all level changes entail an increase in similarity, especially from section to paragraph and from paragraph to the sentence/bracket level. We compared the results from four journals over the years 2010-2015: Cell, the European Journal of Operational Research, Physics Letters B, and Research Policy, finding consistent general outcomes and some interesting differences. Our findings motivate the use of granular cocitation information as defined by meaningful units of text, with implications for, among others, the elaboration of maps of science and the retrieval of scholarly literature.
  4. Waltman, L.; Eck, N.J. van: A new methodology for constructing a publication-level classification system of science : keyword maps in Google Scholar citations (2012) 0.02
    0.022100445 = product of:
      0.04420089 = sum of:
        0.04420089 = product of:
          0.08840178 = sum of:
            0.08840178 = weight(_text_:maps in 511) [ClassicSimilarity], result of:
              0.08840178 = score(doc=511,freq=2.0), product of:
                0.28477904 = queryWeight, product of:
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.050679237 = queryNorm
                0.31042236 = fieldWeight in 511, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.619245 = idf(docFreq=435, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=511)
          0.5 = coord(1/2)
      0.5 = coord(1/2)