Search (2 results, page 1 of 1)

  • author_ss:"Schreiber, M."
  • theme_ss:"Informetrie"
  1. Schreiber, M.: An empirical investigation of the g-index for 26 physicists in comparison with the h-index, the A-index, and the R-index (2008)
    
    Abstract
    J.E. Hirsch (2005) introduced the h-index to quantify an individual's scientific research output as the largest number h of a scientist's papers that have each received at least h citations. To take into account the highly skewed frequency distribution of citations, L. Egghe (2006a) proposed the g-index as an improvement on the h-index. In this study I have worked out 26 practical cases of physicists from the Institute of Physics at Chemnitz University of Technology and compared their h and g values. It is demonstrated that the g-index discriminates better between different citation patterns. This can also be achieved by evaluating B.H. Jin's (2006) A-index, which reflects the average number of citations in the h-core, and interpreting it in conjunction with the h-index. h and A can be combined into the R-index to measure the citation intensity of the h-core. I have also determined the A and R values for the 26 datasets. For a better comparison, I utilize interpolated indices. The correlations between the various indices, as well as with the total number of papers and the highest citation counts, are discussed. The largest Pearson correlation coefficient is found between g and R. Although the correlation between g and h is relatively strong, the ordering of the datasets differs significantly depending on whether they are ranked by h or by g.
    Object
    R-Index
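    The four indicators compared in this abstract all follow from a paper's sorted citation counts, so they are easy to sketch in code. A minimal illustration (the citation counts in the example are invented, not data from the study):

    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each (Hirsch 2005)."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    def g_index(citations):
        """Largest g such that the top g papers together have at least g**2 citations (Egghe 2006)."""
        total, g = 0, 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            total += c
            if total >= i * i:
                g = i
        return g

    def a_index(citations):
        """Average number of citations of the papers in the h-core (Jin 2006)."""
        h = h_index(citations)
        top = sorted(citations, reverse=True)[:h]
        return sum(top) / h if h else 0.0

    def r_index(citations):
        """Citation intensity of the h-core: R = sqrt(h * A), i.e. sqrt of the h-core's total citations."""
        return (h_index(citations) * a_index(citations)) ** 0.5

    papers = [23, 18, 12, 9, 7, 6, 4, 2, 1, 0]  # hypothetical citation counts
    print(h_index(papers), g_index(papers), a_index(papers), round(r_index(papers), 2))
    ```

    On this toy list h = 6 but g = 9, showing how g credits the highly cited papers that the h-index ignores, which is the discrimination effect the abstract describes.
    
    
    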
  2. Schreiber, M.: Inconsistencies of recently proposed citation impact indicators and how to avoid them (2012)
    
    Abstract
    It is shown that, under certain circumstances, in particular for small data sets, the recently proposed citation impact indicators I3(6PR) and R(6,k) behave inconsistently when additional papers or citations are taken into consideration. Three simple examples are presented in which the indicators fluctuate strongly and the ranking of the scientists in the evaluated group is sometimes completely mixed up by minor changes in the database. The erratic behavior is traced to the specific way in which weights are attributed to the six percentile rank classes, in particular for tied papers. For 100 percentile rank classes the effects are less serious. For the six classes, it is demonstrated that a different way of assigning weights avoids these problems, although the nonlinearity of the weights for the different percentile rank classes can still lead to (much less frequent) changes in the ranking. This behavior is not undesired, because it can be used to correct for differences in citation behavior between fields. Remaining deviations from the theoretical value R(6,k) = 1.91 can be avoided by a new scoring rule: fractional scoring. Previously proposed consistency criteria are amended by a further property, strict independence, at which a performance indicator should aim.
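    The inconsistencies described here arise from how tied papers are assigned to percentile rank classes. A minimal sketch of the general idea behind fractional treatment of ties: give every paper in a tie group the average of the rank positions the group spans, so that adding one paper shifts percentiles smoothly instead of flipping a whole tie group across a class boundary. This is an illustration of the principle only, not Schreiber's exact scoring rule:

    ```python
    def fractional_percentiles(citations):
        """Percentile position of each paper, with tie groups sharing
        the mean of the ranks they occupy (fractional treatment of ties)."""
        n = len(citations)
        ranked = sorted(citations)
        percentile_of = {}
        i = 0
        while i < n:
            j = i
            while j < n and ranked[j] == ranked[i]:
                j += 1
            # this tie group occupies ranks i+1 .. j; assign the mean rank
            mean_rank = (i + 1 + j) / 2
            percentile_of[ranked[i]] = 100.0 * mean_rank / n
            i = j
        return [percentile_of[c] for c in citations]

    # hypothetical citation counts with a tie at 0
    print(fractional_percentiles([0, 0, 1, 3]))
    ```

    With an all-or-nothing rule, both zero-cited papers would land together in whichever class contains rank 2; splitting the tie's rank positions evenly is what keeps small perturbations of the data from completely reshuffling class assignments.
    
    
    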