Search (8 results, page 1 of 1)

  • × author_ss:"Leydesdorff, L."
  • × year_i:[2000 TO 2010}
  1. Leydesdorff, L.; Sun, Y.: National and international dimensions of the Triple Helix in Japan : university-industry-government versus international coauthorship relations (2009) 0.05
    0.045634005 = product of:
      0.09126801 = sum of:
        0.09126801 = sum of:
          0.048878662 = weight(_text_:data in 2761) [ClassicSimilarity], result of:
            0.048878662 = score(doc=2761,freq=4.0), product of:
              0.16488427 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.052144732 = queryNorm
              0.29644224 = fieldWeight in 2761, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=2761)
          0.04238935 = weight(_text_:22 in 2761) [ClassicSimilarity], result of:
            0.04238935 = score(doc=2761,freq=2.0), product of:
              0.18260197 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052144732 = queryNorm
              0.23214069 = fieldWeight in 2761, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2761)
      0.5 = coord(1/2)
    
    Abstract
    International co-authorship relations and university-industry-government (Triple Helix) relations have hitherto been studied separately. Using Japanese publication data for the 1981-2004 period, we were able to study both kinds of relations in a single design. In the Japanese file, 1,277,030 articles with at least one Japanese address were attributed to the three sectors, and we know additionally whether these papers were coauthored internationally. Using the mutual information in three and four dimensions, respectively, we show that the Japanese Triple-Helix system has been continuously eroded at the national level. However, since the mid-1990s, international coauthorship relations have contributed to a reduction of the uncertainty at the national level. In other words, the national publication system of Japan has developed a capacity to retain surplus value generated internationally. In a final section, we compare these results with an analysis based on similar data for Canada. A relative uncoupling of national university-industry-government relations because of international collaborations is indicated in both countries.
    Date
    22. 3.2009 19:07:20
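    The scoring tree shown for this result is Lucene ClassicSimilarity explain output, and its arithmetic can be reproduced from the constants it reports. The sketch below assumes Lucene's classic TF-IDF formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), per-term score = queryWeight x fieldWeight, with the top-level coord(1/2) factor applied to the summed clause scores):

    ```python
    import math

    # Reproduce the ClassicSimilarity arithmetic of the first explain tree.
    # All constants (freq, docFreq, maxDocs, queryNorm, fieldNorm) are copied
    # verbatim from the output above.

    def idf(doc_freq: int, max_docs: int) -> float:
        # Lucene classic idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq: float, doc_freq: int, max_docs: int,
                   query_norm: float, field_norm: float) -> float:
        # score(term) = queryWeight * fieldWeight, where
        #   queryWeight = idf * queryNorm
        #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
        query_weight = idf(doc_freq, max_docs) * query_norm
        field_weight = math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.052144732

    # weight(_text_:data in 2761): freq=4, docFreq=5088, fieldNorm=0.046875
    data_part = term_score(4.0, 5088, 44218, QUERY_NORM, 0.046875)
    # weight(_text_:22 in 2761): freq=2, docFreq=3622, fieldNorm=0.046875
    num_part = term_score(2.0, 3622, 44218, QUERY_NORM, 0.046875)

    # Top level: coord(1/2) * (sum of matching clause scores).
    # Agrees with the reported 0.045634005 up to float32 rounding.
    total = 0.5 * (data_part + num_part)
    ```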
  2. Leydesdorff, L.: Can networks of journal-journal citations be used as indicators of change in the social sciences? (2003) 0.04
    0.03847589 = product of:
      0.07695178 = sum of:
        0.07695178 = sum of:
          0.03456243 = weight(_text_:data in 4460) [ClassicSimilarity], result of:
            0.03456243 = score(doc=4460,freq=2.0), product of:
              0.16488427 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.052144732 = queryNorm
              0.2096163 = fieldWeight in 4460, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=4460)
          0.04238935 = weight(_text_:22 in 4460) [ClassicSimilarity], result of:
            0.04238935 = score(doc=4460,freq=2.0), product of:
              0.18260197 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052144732 = queryNorm
              0.23214069 = fieldWeight in 4460, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4460)
      0.5 = coord(1/2)
    
    Abstract
    Aggregated journal-journal citations can be used for mapping the intellectual organization of the sciences in terms of specialties because the latter can be considered as interreading communities. Can the journal-journal citations also be used as early indicators of change by comparing the files for two subsequent years? Probabilistic entropy measures enable us to analyze changes in large datasets at different levels of aggregation and in considerable detail. The Journal Citation Reports of the Social Science Citation Index for 1999 are compared with similar data for 1998, and the differences are analyzed using these measures. The various indicators are then compared with similar developments in the Science Citation Index. Specialty formation seems a more important mechanism in the development of the social sciences than in the natural and life sciences, but the developments in the social sciences are volatile. The use of aggregate statistics based on the Science Citation Index is ill-advised in the case of the social sciences because of structural differences in the underlying dynamics.
    Date
    6.11.2005 19:02:22
  3. Leydesdorff, L.: Should co-occurrence data be normalized : a rejoinder (2007) 0.02
    0.017281216 = product of:
      0.03456243 = sum of:
        0.03456243 = product of:
          0.06912486 = sum of:
            0.06912486 = weight(_text_:data in 627) [ClassicSimilarity], result of:
              0.06912486 = score(doc=627,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.4192326 = fieldWeight in 627, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.09375 = fieldNorm(doc=627)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Leydesdorff, L.; Vaughan, L.: Co-occurrence matrices and their applications in information science : extending ACA to the Web environment (2006) 0.01
    0.014401014 = product of:
      0.028802028 = sum of:
        0.028802028 = product of:
          0.057604056 = sum of:
            0.057604056 = weight(_text_:data in 6113) [ClassicSimilarity], result of:
              0.057604056 = score(doc=6113,freq=8.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.34936053 = fieldWeight in 6113, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6113)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Co-occurrence matrices, such as cocitation, coword, and colink matrices, have been used widely in the information sciences. However, confusion and controversy have hindered the proper statistical analysis of these data. The underlying problem, in our opinion, involved understanding the nature of various types of matrices. This article discusses the difference between a symmetrical cocitation matrix and an asymmetrical citation matrix as well as the appropriate statistical techniques that can be applied to each of these matrices, respectively. Similarity measures (such as the Pearson correlation coefficient or the cosine) should not be applied to the symmetrical cocitation matrix but can be applied to the asymmetrical citation matrix to derive the proximity matrix. The argument is illustrated with examples. The study then extends the application of co-occurrence matrices to the Web environment, in which the nature of the available data and thus data collection methods are different from those of traditional databases such as the Science Citation Index. A set of data collected with the Google Scholar search engine is analyzed by using both the traditional methods of multivariate analysis and the new visualization software Pajek, which is based on social network analysis and graph theory.
  5. Leydesdorff, L.; Bensman, S.: Classification and powerlaws : the logarithmic transformation (2006) 0.01
    0.012219666 = product of:
      0.024439331 = sum of:
        0.024439331 = product of:
          0.048878662 = sum of:
            0.048878662 = weight(_text_:data in 6007) [ClassicSimilarity], result of:
              0.048878662 = score(doc=6007,freq=4.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.29644224 = fieldWeight in 6007, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6007)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Logarithmic transformation of the data has been recommended by the literature in the case of highly skewed distributions such as those commonly found in information science. The purpose of the transformation is to make the data conform to the lognormal law of error for inferential purposes. How does this transformation affect the analysis? We factor analyze and visualize the citation environment of the Journal of the American Chemical Society (JACS) before and after a logarithmic transformation. The transformation strongly reduces the variance necessary for classificatory purposes and therefore is counterproductive to the purposes of the descriptive statistics. We recommend against the logarithmic transformation when sets cannot be defined unambiguously. The intellectual organization of the sciences is reflected in the curvilinear parts of the citation distributions while negative powerlaws fit excellently to the tails of the distributions.
  6. Leydesdorff, L.: On the normalization and visualization of author co-citation data : Salton's Cosine versus the Jaccard index (2008) 0.01
    0.012219666 = product of:
      0.024439331 = sum of:
        0.024439331 = product of:
          0.048878662 = sum of:
            0.048878662 = weight(_text_:data in 1341) [ClassicSimilarity], result of:
              0.048878662 = score(doc=1341,freq=4.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.29644224 = fieldWeight in 1341, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1341)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The debate about which similarity measure one should use for the normalization in the case of Author Co-citation Analysis (ACA) is further complicated when one distinguishes between the symmetrical co-citation - or, more generally, co-occurrence - matrix and the underlying asymmetrical citation - occurrence - matrix. In the Web environment, the approach of retrieving original citation data is often not feasible. In that case, one should use the Jaccard index, but preferably after adding the number of total citations (i.e., occurrences) on the main diagonal. Unlike Salton's cosine and the Pearson correlation, the Jaccard index abstracts from the shape of the distributions and focuses only on the intersection and the sum of the two sets. Since the correlations in the co-occurrence matrix may be spurious, this property of the Jaccard index can be considered as an advantage in this case.
  7. Leydesdorff, L.: ¬The construction and globalization of the knowledge base in inter-human communication systems (2003) 0.01
    0.010597337 = product of:
      0.021194674 = sum of:
        0.021194674 = product of:
          0.04238935 = sum of:
            0.04238935 = weight(_text_:22 in 1621) [ClassicSimilarity], result of:
              0.04238935 = score(doc=1621,freq=2.0), product of:
                0.18260197 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052144732 = queryNorm
                0.23214069 = fieldWeight in 1621, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1621)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.2003 19:48:04
  8. Leydesdorff, L.: Similarity measures, author cocitation analysis, and information theory (2005) 0.01
    0.010080709 = product of:
      0.020161418 = sum of:
        0.020161418 = product of:
          0.040322836 = sum of:
            0.040322836 = weight(_text_:data in 3471) [ClassicSimilarity], result of:
              0.040322836 = score(doc=3471,freq=2.0), product of:
                0.16488427 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.052144732 = queryNorm
                0.24455236 = fieldWeight in 3471, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3471)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The use of Pearson's correlation coefficient in Author Cocitation Analysis was compared with Salton's cosine measure in a number of recent contributions. Unlike the Pearson correlation, the cosine is insensitive to the number of zeros. However, one has the option of applying a logarithmic transformation in correlation analysis. Information calculus is based on the logarithmic transformation and provides non-parametric statistics. Using this methodology, one can cluster a document set in a precise way and express the differences in terms of bits of information. The algorithm is explained and used on the data set which was made the subject of this discussion.