Search (2 results, page 1 of 1)

  • author_ss:"Chen, D.-Z."
  • theme_ss:"Informetrie"
  • year_i:[2010 TO 2020}
  1. Huang, M.-H.; Lin, C.-S.; Chen, D.-Z.: Counting methods, country rank changes, and counting inflation in the assessment of national research productivity and impact (2011) 0.01
    0.011991608 = product of:
      0.035974823 = sum of:
        0.035974823 = weight(_text_:based in 4942) [ClassicSimilarity], result of:
          0.035974823 = score(doc=4942,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23539014 = fieldWeight in 4942, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4942)
      0.33333334 = coord(1/3)
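    The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output: tf(freq) = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and coord(1/3) scales the score because only one of the query's three clauses matched. A minimal Python sketch, using nothing but the factors printed above, reproduces the reported score:

        import math

        # Factors copied from the explain tree for result 1 (doc 4942, term "based")
        freq       = 4.0          # termFreq: "based" occurs 4 times in the field
        idf        = 3.0129938    # idf(docFreq=5906, maxDocs=44218)
        query_norm = 0.050723847  # query-level normalization factor
        field_norm = 0.0390625    # fieldNorm(doc=4942), field-length normalization
        coord      = 1.0 / 3.0    # coord(1/3): 1 of 3 query clauses matched

        tf           = math.sqrt(freq)        # 2.0 = tf(freq=4.0)
        query_weight = idf * query_norm       # 0.15283063
        field_weight = tf * idf * field_norm  # 0.23539014
        print(query_weight * field_weight * coord)  # ~0.011991608

    The same arithmetic with result 2's factors (freq=2.0, fieldNorm=0.046875) yields its score of ~0.010175217.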
    
    Abstract
    The counting of papers and citations is fundamental to the assessment of research productivity and impact. In an age of increasing scientific collaboration across national borders, the counting of papers produced through collaboration between multiple countries, and of the citations such papers receive, raises concerns for country-level research evaluation. In this study, we compared the number counts and country ranks produced by five different counting methods, and we observed the inflation associated with each method. Using the 1989 to 2008 physics papers indexed in ISI's Web of Science as our sample, we analyzed the counting results in terms of paper count (research productivity) as well as citation count and citation-paper ratio (CP ratio) based evaluation (research impact). The results show that at the country level, the choice of counting method had only a minor influence on the number counts and country rankings in each assessment. However, the influence of the counting method varied between paper-count, citation-count, and CP-ratio-based evaluation. The findings also suggest that the popular whole counting method, which gives each collaborating country one full credit, may not be the best counting method. Straight counting, which credits only the first or the corresponding author, or fractional counting, which credits each collaborator with partial and weighted credit, might be the better choice.
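    The three counting methods named in the abstract are straightforward to state in code. The following is a minimal sketch with hypothetical sample data; the paper's exact weighting rules for fractional counting may differ:

        from collections import Counter

        def whole_counting(papers):
            # Each distinct collaborating country gets one full credit per paper.
            credit = Counter()
            for countries in papers:
                for c in set(countries):
                    credit[c] += 1.0
            return credit

        def straight_counting(papers):
            # Only the first-listed country gets credit (standing in for the
            # first or corresponding author's country).
            credit = Counter()
            for countries in papers:
                credit[countries[0]] += 1.0
            return credit

        def fractional_counting(papers):
            # One credit per paper, split equally across the author positions.
            credit = Counter()
            for countries in papers:
                for c in countries:
                    credit[c] += 1.0 / len(countries)
            return credit

        # Hypothetical sample: one list of author countries per paper,
        # first entry = first/corresponding author.
        papers = [["TW", "US"], ["TW"], ["US", "TW", "DE"]]
        print(whole_counting(papers))       # TW: 3.0, US: 2.0, DE: 1.0 -> 6 credits for 3 papers
        print(straight_counting(papers))    # TW: 2.0, US: 1.0         -> 3 credits
        print(fractional_counting(papers))  # TW: ~1.83, US: ~0.83, DE: ~0.33 -> 3 credits

    Whole counting is the only method whose credits sum to more than the number of papers, which is the counting inflation the title refers to.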
  2. Kuan, C.-H.; Huang, M.-H.; Chen, D.-Z.: A two-dimensional approach to performance evaluation for a large number of research institutions (2012) 0.01
    0.010175217 = product of:
      0.03052565 = sum of:
        0.03052565 = weight(_text_:based in 58) [ClassicSimilarity], result of:
          0.03052565 = score(doc=58,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.19973516 = fieldWeight in 58, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=58)
      0.33333334 = coord(1/3)
    
    Abstract
    We characterize the research performance of a large number of institutions in a two-dimensional coordinate system based on the shapes of their h-cores, so that their relative performance can be conveniently observed and compared. The 2D distribution of these institutions is then used (1) to categorize the institutions into a number of qualitative groups revealing the nature of their performance, and (2) to determine the position of a specific institution within the set of institutions. The method is compared with several major h-type indices and tested with empirical data, using clinical medicine as an illustrative case. The method extends to research performance evaluation at other aggregation levels, such as researchers, journals, departments, and nations.
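    The h-core underlying the abstract's method is the set of an institution's h most-cited papers, where h is its h-index. The abstract does not spell out the paper's two shape dimensions, so this minimal sketch stops at extracting the h-core itself:

        def h_core(citations):
            # h-index: the largest h such that h papers have at least h
            # citations each; the h-core is those h most-cited papers.
            ranked = sorted(citations, reverse=True)
            h = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)
            return h, ranked[:h]

        # Hypothetical institution with eight papers:
        h, core = h_core([25, 18, 12, 7, 6, 4, 2, 0])
        print(h, core)  # 5 [25, 18, 12, 7, 6]

    Any shape statistic of the core (for example, how far its citation counts exceed h) could then serve as one coordinate in a 2D characterization of the kind the paper proposes.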