Search (87 results, page 5 of 5)

  • author_ss:"Leydesdorff, L."
  1. Leydesdorff, L.; Zhou, P.: Co-word analysis using the Chinese character set (2008) 0.00
    0.0021828816 = product of:
      0.017463053 = sum of:
        0.017463053 = weight(_text_:of in 1970) [ClassicSimilarity], result of:
          0.017463053 = score(doc=1970,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2704316 = fieldWeight in 1970, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1970)
      0.125 = coord(1/8)
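    The breakdown above is Lucene's ClassicSimilarity explain output for the query term "of" in doc 1970. A minimal sketch in Python reproduces it; the constants are copied from the explanation, while the combination rules (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) are assumed from ClassicSimilarity's documented formula.

      # Sketch: recompute the ClassicSimilarity score shown above (doc 1970).
      import math

      freq, doc_freq, max_docs = 10.0, 25162, 44218
      query_norm, field_norm, coord = 0.041294612, 0.0546875, 1.0 / 8.0

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))        # ~1.5637573
      tf = math.sqrt(freq)                                   # ~3.1622777
      query_weight = idf * query_norm                        # ~0.06457475 (queryWeight)
      field_weight = tf * idf * field_norm                   # ~0.2704316  (fieldWeight)
      print(f"{coord * query_weight * field_weight:.10f}")   # ~0.0021828816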
    
    Abstract
    Until recently, Chinese texts could not be studied using co-word analysis because the words are not separated by spaces in Chinese (and Japanese). A word can be composed of one or more characters. The online availability of programs that separate Chinese texts makes it possible to analyze them using semantic maps. Chinese characters contain not only information but also meaning. This may enhance the readability of semantic maps. In this study, we analyze 58 words which occur 10 or more times in the 1,652 journal titles of the China Scientific and Technical Papers and Citations Database. The word-occurrence matrix is visualized and factor-analyzed.
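    As a minimal illustration of the preprocessing step the abstract describes - segmenting the text before a word-occurrence matrix can be built - the sketch below uses the jieba segmenter (one widely used option, not necessarily the program used in the study) on two invented titles rather than records from the Chinese database.

      # Sketch with invented titles; jieba inserts the word boundaries the script lacks,
      # after which a standard word-occurrence (title x word) matrix can be built,
      # visualized, and factor-analyzed as the abstract describes.
      import jieba
      from sklearn.feature_extraction.text import CountVectorizer

      titles = ["信息科学与科学计量学研究", "科学技术政策与信息管理"]   # invented examples
      segmented = [" ".join(jieba.lcut(t)) for t in titles]

      vec = CountVectorizer(token_pattern=r"(?u)\S+")   # keep single-character words too
      occurrence = vec.fit_transform(segmented)         # the paper keeps words occurring >= 10 times
      print(vec.get_feature_names_out())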
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1528-1530
  2. Leydesdorff, L.: How are new citation-based journal indicators adding to the bibliometric toolbox? (2009) 0.00
    0.0020496228 = product of:
      0.016396983 = sum of:
        0.016396983 = weight(_text_:of in 2929) [ClassicSimilarity], result of:
          0.016396983 = score(doc=2929,freq=12.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.25392252 = fieldWeight in 2929, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2929)
      0.125 = coord(1/8)
    
    Abstract
    The launching of Scopus and Google Scholar, and methodological developments in social-network analysis have made many more indicators for evaluating journals available than the traditional impact factor, cited half-life, and immediacy index of the ISI. In this study, these new indicators are compared with one another and with the older ones. Do the various indicators measure new dimensions of the citation networks, or are they highly correlated among themselves? Are they robust and relatively stable over time? Two main dimensions are distinguished - size and impact - which together shape influence. The h-index combines the two dimensions and can also be considered an indicator of reach (like Indegree). PageRank is mainly an indicator of size, but has important interactions with centrality measures. The SCImago Journal Rank (SJR) indicator provides an alternative to the journal impact factor, but its computation is less straightforward.
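    A minimal sketch of the point that the h-index combines the two dimensions (size and impact); the citation counts below are invented, not data from the study.

      # Sketch: the h-index is the largest h such that h items have at least h citations each.
      def h_index(citations):
          cites = sorted(citations, reverse=True)
          h = 0
          for rank, c in enumerate(cites, start=1):
              if c >= rank:
                  h = rank
              else:
                  break
          return h

      print(h_index([3] * 50))        # 3  -> many lightly cited papers (size alone)
      print(h_index([100, 90, 80]))   # 3  -> a few highly cited papers (impact alone)
      print(h_index([20] * 20))       # 20 -> size and impact together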
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.7, S.1327-1336
  3. Leydesdorff, L.: A sociological theory of communication : the self-organization of the knowledge-based society (2001) 0.00
    0.0018710414 = product of:
      0.014968331 = sum of:
        0.014968331 = weight(_text_:of in 184) [ClassicSimilarity], result of:
          0.014968331 = score(doc=184,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.23179851 = fieldWeight in 184, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=184)
      0.125 = coord(1/8)
    
    Footnote
    Review in: JASIST 53(2002) no.1, S.61-62 (E.G. Ackermann): "This brief summary cannot do justice to the intellectual depth, philosophical richness of the theoretical models, and their implications presented by Leydesdorff in his book. Next to this, the caveats presented earlier in this review are relatively minor. For all that, this book is not an "easy" read, nor is it for the theoretically or philosophically faint of heart. The content is certainly accessible to those with the interest and the stamina to see it through to the end, and would repay those who reread it with further insight and understanding. This book is recommended especially for the reader who is looking for a well-developed, general sociological theory of communication with a strong philosophical basis upon which to build a postmodern, deconstructionist research methodology."
  4. Zhou, Q.; Leydesdorff, L.: The normalization of occurrence and co-occurrence matrices in bibliometrics using Cosine similarities and Ochiai coefficients (2016) 0.00
    0.0018710414 = product of:
      0.014968331 = sum of:
        0.014968331 = weight(_text_:of in 3161) [ClassicSimilarity], result of:
          0.014968331 = score(doc=3161,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.23179851 = fieldWeight in 3161, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3161)
      0.125 = coord(1/8)
    
    Abstract
    We prove that Ochiai similarity of the co-occurrence matrix is equal to cosine similarity in the underlying occurrence matrix. Neither the cosine nor the Pearson correlation should be used for the normalization of co-occurrence matrices, because the similarity is then normalized twice and therefore overestimated; the Ochiai coefficient can be used instead. Results are shown using a small matrix (5 cases, 4 variables) for didactic reasons, and also using Ahlgren et al.'s (2003) co-occurrence matrix of 24 authors in library and information sciences. The overestimation is shown numerically and illustrated using multidimensional scaling and cluster dendrograms. If the occurrence matrix is not available (such as in internet research or author cocitation analysis), using the Ochiai coefficient for the normalization is preferable to using the cosine.
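    The equality stated in the first sentence can be checked numerically. The sketch below uses an invented binary occurrence matrix with the same dimensions as the didactic example (5 cases, 4 variables), not its actual values.

      import numpy as np

      # Invented occurrence matrix X (cases x variables); C = X.T @ X is the
      # co-occurrence matrix, with the marginal occurrence counts on its diagonal.
      X = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 1, 1],
                    [1, 1, 0, 1],
                    [0, 0, 1, 1]], dtype=float)
      C = X.T @ X

      d = np.sqrt(np.diag(C))
      ochiai = C / np.outer(d, d)                  # Ochiai coefficients of the co-occurrence matrix

      norms = np.linalg.norm(X, axis=0)
      cosine = (X.T @ X) / np.outer(norms, norms)  # cosine similarities in the occurrence matrix

      print(np.allclose(ochiai, cosine))           # True: the two coincide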
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.11, S.2805-2814
  5. Leydesdorff, L.; Nerghes, A.: Co-word maps and topic modeling : a comparison using small and medium-sized corpora (N < 1,000) (2017) 0.00
    0.0018448716 = product of:
      0.014758972 = sum of:
        0.014758972 = weight(_text_:of in 3538) [ClassicSimilarity], result of:
          0.014758972 = score(doc=3538,freq=14.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.22855641 = fieldWeight in 3538, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3538)
      0.125 = coord(1/8)
    
    Abstract
    Induced by "big data," "topic modeling" has become an attractive alternative to mapping co-words in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument ("The Leiden Manifesto") and then upscale to a sample of moderate size (n?=?687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.1024-1035
  6. Leydesdorff, L.: Should co-occurrence data be normalized? : a rejoinder (2007) 0.00
    0.0016735102 = product of:
      0.013388081 = sum of:
        0.013388081 = weight(_text_:of in 627) [ClassicSimilarity], result of:
          0.013388081 = score(doc=627,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.20732689 = fieldWeight in 627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=627)
      0.125 = coord(1/8)
    
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.14, S.2411-2413
  7. Leydesdorff, L.; Wagner, C.; Bornmann, L.: Replicability and the public/private divide (2016) 0.00
    0.0016735102 = product of:
      0.013388081 = sum of:
        0.013388081 = weight(_text_:of in 3023) [ClassicSimilarity], result of:
          0.013388081 = score(doc=3023,freq=2.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.20732689 = fieldWeight in 3023, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=3023)
      0.125 = coord(1/8)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1777-1778