Search (87 results, page 5 of 5)

  • author_ss:"Leydesdorff, L."
  1. Zhou, P.; Su, X.; Leydesdorff, L.: A comparative study on communication structures of Chinese journals in the social sciences (2010) 0.00
    
    Abstract
    We argue that the communication structures in the Chinese social sciences have not yet been sufficiently reformed. Citation patterns among Chinese domestic journals in three subject areas - political science and Marxism, library and information science, and economics - are compared with their counterparts internationally. Like their colleagues in the natural and life sciences, Chinese scholars in the social sciences provide fewer references to journal publications than their international counterparts; like their international colleagues, social scientists provide fewer references than natural scientists. The resulting citation networks, therefore, are sparse. Nevertheless, the citation structures clearly suggest that the Chinese social sciences are far less specialized in terms of disciplinary delineations than their international counterparts. Marxism studies are more established than political science in China. In terms of the impact of the Chinese political system on academic fields, disciplines closely related to the political system are less specialized than those only weakly related. In the discussion section, we explore reasons that may explain the current stagnation and provide policy recommendations.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.7, S.1360-1376
  2. Zhou, Q.; Leydesdorff, L.: The normalization of occurrence and co-occurrence matrices in bibliometrics using Cosine similarities and Ochiai coefficients (2016) 0.00
    
    Abstract
    We prove that the Ochiai similarity of a co-occurrence matrix is equal to the cosine similarity in the underlying occurrence matrix. Neither the cosine nor the Pearson correlation should be used for the normalization of co-occurrence matrices, because the similarity is then normalized twice and therefore overestimated; the Ochiai coefficient can be used instead. Results are shown using a small matrix (5 cases, 4 variables) for didactic reasons, and also Ahlgren et al.'s (2003) co-occurrence matrix of 24 authors in the library and information sciences. The overestimation is shown numerically and illustrated using multidimensional scaling and cluster dendrograms. If the occurrence matrix is not available (such as in internet research or author cocitation analysis), using Ochiai for the normalization is preferable to using the cosine.
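    The identity the abstract states can be checked directly: for a binary occurrence matrix O with co-occurrence matrix C = O^T O, the Ochiai coefficient C_ij / sqrt(C_ii * C_jj) equals the cosine between columns i and j of O. A minimal sketch in plain Python, using a hypothetical 5 × 4 binary matrix (not the matrix from the paper):

    ```python
    import math

    # Hypothetical binary occurrence matrix: 5 cases (rows) x 4 variables (columns).
    O = [
        [1, 0, 1, 1],
        [0, 1, 1, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 1],
        [1, 1, 1, 0],
    ]

    def cosine(col_i, col_j):
        """Cosine similarity between two columns of the occurrence matrix."""
        dot = sum(a * b for a, b in zip(col_i, col_j))
        return dot / math.sqrt(sum(a * a for a in col_i) * sum(b * b for b in col_j))

    def cooccurrence(O):
        """Co-occurrence matrix C = O^T O."""
        ncols = len(O[0])
        return [[sum(row[i] * row[j] for row in O) for j in range(ncols)]
                for i in range(ncols)]

    def ochiai(C, i, j):
        """Ochiai coefficient on the co-occurrence matrix: C_ij / sqrt(C_ii * C_jj)."""
        return C[i][j] / math.sqrt(C[i][i] * C[j][j])

    C = cooccurrence(O)
    cols = list(zip(*O))
    # For this matrix, cosine(cols[0], cols[1]) and ochiai(C, 0, 1) are both 2/3.
    ```

    Applying the cosine to C itself, by contrast, normalizes a second time, which is the overestimation the paper warns against.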
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.11, S.2805-2814
  3. Shelton, R.D.; Leydesdorff, L.: Publish or patent : bibliometric evidence for empirical trade-offs in national funding strategies (2012) 0.00
    
    Abstract
    Multivariate linear regression models suggest a trade-off in allocations of national research and development (R&D) funding. Government funding and spending in the higher education sector encourage publications as a long-term research benefit. Conversely, other components, such as industrial funding and spending in the business sector, encourage patenting. Our results help explain why the United States trails the European Union in publications: the focus in the United States is on industrial funding, some 70% of its total R&D investment. Likewise, our results help explain why the European Union trails the United States in patenting, since its focus on government funding is less effective than industrial funding in predicting triadic patenting. Government funding contributes negatively to patenting in a multiple regression, and this relationship is significant in the case of triadic patenting. We provide new forecasts about the relationships of the United States, the European Union, and China in publishing; these results suggest much later dates for changes than previous forecasts, because Chinese growth has been slowing down since 2003. Models for individual countries might be more successful than regression models whose parameters are averaged over a set of countries, because nations can be expected to differ historically in their institutional arrangements and funding schemes.
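    The kind of multivariate linear regression the abstract describes can be sketched with ordinary least squares: a bibliometric output is regressed on funding components. The data and variable names below are made up for illustration only; the paper's actual models and data differ.

    ```python
    # Minimal OLS sketch: regress a hypothetical output (e.g. publications)
    # on an intercept and two hypothetical funding components.

    def ols(X, y):
        """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
        k = len(X[0])
        A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
        rhs = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
        # Forward elimination with partial pivoting.
        for col in range(k):
            piv = max(range(col, k), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            rhs[col], rhs[piv] = rhs[piv], rhs[col]
            for r in range(col + 1, k):
                f = A[r][col] / A[col][col]
                for c in range(col, k):
                    A[r][c] -= f * A[col][c]
                rhs[r] -= f * rhs[col]
        # Back substitution.
        coefs = [0.0] * k
        for r in range(k - 1, -1, -1):
            coefs[r] = (rhs[r] - sum(A[r][c] * coefs[c]
                                     for c in range(r + 1, k))) / A[r][r]
        return coefs

    # Columns: intercept, government funding, industrial funding (made-up units).
    X = [[1, 2.0, 1.0], [1, 3.0, 1.5], [1, 1.0, 4.0], [1, 2.5, 3.0], [1, 4.0, 2.0]]
    y = [8.0, 11.5, 8.0, 11.5, 15.0]  # generated as y = 1 + 3*gov + 1*ind
    coefs = ols(X, y)
    ```

    The signs and magnitudes of the fitted coefficients are what carry the trade-off argument: a negative coefficient on a funding component, as the paper reports for government funding and triadic patenting, indicates that increasing that component predicts less of the output.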
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.3, S.498-511
  4. Leydesdorff, L.; Nerghes, A.: Co-word maps and topic modeling : a comparison using small and medium-sized corpora (N < 1.000) (2017) 0.00
    
    Abstract
    Induced by "big data," "topic modeling" has become an attractive alternative to mapping co-words in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument ("The Leiden Manifesto") and then upscale to a sample of moderate size (n?=?687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.1024-1035
  5. Leydesdorff, L.: Should co-occurrence data be normalized : a rejoinder (2007) 0.00
    
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.14, S.2411-2413
  6. Bornmann, L.; Leydesdorff, L.: Statistical tests and research assessments : a comment on Schneider (2012) (2013) 0.00
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.6, S.1306-1308
  7. Leydesdorff, L.; Wagner, C.; Bornmann, L.: Replicability and the public/private divide (2016) 0.00
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1777-1778