Search (9 results, page 1 of 1)

  • author_ss:"Ding, Y."
  • theme_ss:"Informetrie"
  1. Ding, Y.: Applying weighted PageRank to author citation networks (2011) 0.03
    
    Abstract
    This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field, and data from 1956-2008 were collected from Web of Science. Weighted PageRank algorithms, with citation and publication counts as weight vectors, were calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures.
    Date
    22. 1.2011 13:02:21
  2. Song, M.; Kim, S.Y.; Zhang, G.; Ding, Y.; Chambers, T.: Productivity and influence in bioinformatics : a bibliometric analysis using PubMed Central (2014) 0.01
    
    Abstract
    Bioinformatics is a fast-growing field based on the optimal use of "big data" gathered in genomics, proteomics, and functional genomics research. In this paper, we conduct a comprehensive and in-depth bibliometric analysis of the field of bioinformatics by extracting citation data from PubMed Central full text. Citation data for the period 2000 to 2011, comprising 20,869 papers with 546,245 citations, were used to evaluate the productivity and influence of this emerging field. Four measures were used to identify productivity: most productive authors, most productive countries, most productive organizations, and most popular subject terms. Research impact was analyzed based on the measures of most cited papers, most cited authors, emerging stars, and leading organizations. Results show that the overall trends between the periods 2000 to 2003 and 2004 to 2007 were dissimilar, while trends between the periods 2004 to 2007 and 2008 to 2011 were similar. In addition, the field of bioinformatics has undergone a significant shift, co-evolving with other biomedical disciplines.
  3. Ding, Y.; Chowdhury, G.C.; Foo, S.: Bibliometric cartography of information retrieval research by using co-word analysis (2001) 0.01
    
    Source
    Information processing and management. 37(2001) no.6, S.817-842
  4. Ding, Y.: Visualization of intellectual structure in information retrieval : author cocitation analysis (1998) 0.01
    
    Abstract
    Reports results of a cocitation analysis study of the information retrieval research field from 1987 to 1997. Data were taken from Social SciSearch, via Dialog, and the top 40 authors were submitted to author cocitation analysis to yield the intellectual structure of information retrieval. The resulting multidimensional scaling map revealed: identifiable author groups for information retrieval; the location of these groups with respect to each other; the extent of centrality and peripherality of authors within groups; proximities of authors within groups and across group boundaries; and the meaning of the axes of the map. Factor analysis was used to reveal the extent of the authors' research areas, and the statistical routines included ALSCAL, cluster analysis, and factor analysis.
  5. Yan, E.; Ding, Y.: Applying centrality measures to impact analysis : a coauthorship network analysis (2009) 0.01
    
    Abstract
    Many studies on coauthorship networks focus on network topology and network statistical mechanics. This article takes a different approach by studying micro-level network properties with the aim of applying centrality measures to impact analysis. Using coauthorship data from 16 journals in the field of library and information science (LIS) with a time span of 20 years (1988-2007), we construct an evolving coauthorship network and calculate four centrality measures (closeness centrality, betweenness centrality, degree centrality, and PageRank) for authors in this network. We find that the four centrality measures are significantly correlated with citation counts. We also discuss the usability of centrality measures in author ranking and suggest that centrality measures can be useful indicators for impact analysis.
  6. Yan, E.; Ding, Y.: Discovering author impact : a PageRank perspective (2011) 0.01
    
    Source
    Information processing and management. 47(2011) no.1, S.125-134
  7. Ni, C.; Shaw, D.; Lind, S.M.; Ding, Y.: Journal impact and proximity : an assessment using bibliographic features (2013) 0.01
    
    Abstract
    Journals in the Information Science & Library Science category of Journal Citation Reports (JCR) were compared using both bibliometric and bibliographic features. Data collected covered journal impact factor (JIF), number of issues per year, number of authors per article, longevity, editorial board membership, frequency of publication, number of databases indexing the journal, number of aggregators providing full-text access, country of publication, JCR categories, Dewey decimal classification, and journal statement of scope. Three features significantly correlated with JIF: number of editorial board members and number of JCR categories in which a journal is listed correlated positively; journal longevity correlated negatively with JIF. Coword analysis of journal descriptions provided a proximity clustering of journals, which differed considerably from the clusters based on editorial board membership. Finally, a multiple linear regression model was built to predict the JIF based on all the collected bibliographic features.
  8. Li, R.; Chambers, T.; Ding, Y.; Zhang, G.; Meng, L.: Patent citation analysis : calculating science linkage based on citing motivation (2014) 0.01
    
    Abstract
    Science linkage is a widely used patent bibliometric indicator that measures a patent's linkage to scientific research based on the frequency of citations to scientific papers within the patent. Science linkage is also regarded as noisy because citing behavior differs between inventors/applicants and examiners. In order to identify and ultimately reduce this noise, we analyzed the different citing motivations of examiners and inventors/applicants. We built 4 hypotheses based upon our study of patent law, the unique economic nature of a patent, and a patent citation's market effect. To test our hypotheses, we conducted an expert survey based on our science linkage calculation in the catalyst domain, using U.S. patent data (2006-2009), over 3 types of citations: self-citation by inventor/applicant, non-self-citation by inventor/applicant, and citation by examiner. According to our results, evaluated by domain experts, we conclude that non-self-citation by inventor/applicant is quite noisy and cannot indicate science linkage, and that self-citation by inventor/applicant, although limited, is more appropriate for understanding science linkage.
  9. Lu, C.; Zhang, Y.; Ahn, Y.-Y.; Ding, Y.; Zhang, C.; Ma, D.: Co-contributorship network and division of labor in individual scientific collaborations (2020) 0.01
    
    Abstract
    Collaborations are pervasive in current science and have been studied and encouraged in many disciplines. However, little is known about how a team actually functions in terms of its internal division of labor. In this research, we investigate the patterns of scientific collaboration and division of labor within individual scholarly articles by analyzing their co-contributorship networks. Co-contributorship networks are constructed by performing the one-mode projection of the author-task bipartite networks obtained from 138,787 articles published in PLoS journals. Given an article, we define 3 types of contributors: Specialists, Team-players, and Versatiles. Specialists are those who contribute to all their tasks alone; team-players are those who contribute to every task with other collaborators; and versatiles are those who do both. We find that team-players are the majority and that, as expected, they tend to contribute to the 5 most common tasks, such as "data analysis" and "performing experiments." The specialists and versatiles are more prevalent than expected under our 2 designed null models. Versatiles tend to be senior authors associated with funding and supervision. Specialists are associated with 2 contrasting roles: the supervising role as team leaders, or marginal and specialized contributors.