Search (8 results, page 1 of 1)

  • × author_ss:"Schreiber, M."
  • × year_i:[2010 TO 2020}
  1. Schreiber, M.: Uncertainties and ambiguities in percentiles and how to avoid them (2013) 0.01
    Abstract
    The recently proposed fractional scoring scheme is used to attribute publications to percentile rank classes. It is shown that in this way uncertainties and ambiguities in the evaluation of specific quantile values and percentile ranks do not occur. Using fractional scoring, the total score of all papers exactly reproduces the theoretical value.
    Date
    22.3.2013 19:52:05
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.3, S.640-643
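The fractional attribution of tied papers that this abstract refers to can be sketched in a few lines. The function below is an illustrative reconstruction of the idea (a block of papers with equal citation counts shares a percentile boundary proportionally, so the fractions always sum to the theoretical value), not the author's published code:

```python
def top_fraction(citations, top=0.10):
    """Fractionally attribute each paper to the top-`top` share of its set.

    Papers tied on citation count split the boundary proportionally, so the
    fractions sum to exactly top * len(citations) regardless of ties.
    (Sketch of the tie-handling idea, not the published implementation.)
    """
    n = len(citations)
    cutoff = top * n  # theoretical number of top papers
    fractions = []
    for c in citations:
        higher = sum(1 for x in citations if x > c)  # papers cited more often
        tied = sum(1 for x in citations if x == c)   # papers with this count
        # overlap of the tied block's rank interval [higher, higher + tied]
        # with the top interval [0, cutoff], shared equally among the ties
        overlap = max(0.0, min(higher + tied, cutoff) - higher)
        fractions.append(overlap / tied)
    return fractions
```

With ten papers and `top=0.10`, the fractions sum to exactly 1.0 even when several papers are tied at the boundary, which is the "total score exactly reproduces the theoretical value" property described above.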
  2. Waltman, L.; Schreiber, M.: On the calculation of percentile-based bibliometric indicators (2013) 0.01
    Abstract
    A percentile-based bibliometric indicator is an indicator that values publications based on their position within the citation distribution of their field. The most straightforward percentile-based indicator is the proportion of frequently cited publications, for instance, the proportion of publications that belong to the top 10% most frequently cited of their field. Recently, more complex percentile-based indicators have been proposed. A difficulty in the calculation of percentile-based indicators is caused by the discrete nature of citation distributions combined with the presence of many publications with the same number of citations. We introduce an approach to calculating percentile-based indicators that deals with this difficulty in a more satisfactory way than earlier approaches suggested in the literature. We show in a formal mathematical framework that our approach leads to indicators that do not suffer from biases in favor of or against particular fields of science.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.2, S.372-379
  3. Schreiber, M.: Restricting the h-index to a citation time window : a case study of a timed Hirsch index (2014) 0.01
    Abstract
    The h-index has been shown to increase in many cases mostly because of citations to rather old publications. This inertia can be circumvented by restricting the evaluation to a citation time window. Here I report results of an empirical study analyzing the evolution of the thus defined timed h-index as a function of the length of the citation time window.
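The timed h-index described above counts, for each paper, only the citations received within a given window of years and then applies the usual h-index rule. A minimal sketch, assuming each paper is represented by the list of years in which its citations occurred (this data layout is an assumption for illustration, not the paper's own code):

```python
def timed_h_index(citation_years_per_paper, window_start, window_end):
    """h-index computed only from citations received in the given window.

    `citation_years_per_paper`: one list of citation years per paper.
    (Illustrative sketch of the timed h-index definition.)
    """
    # per paper, count only citations falling inside the window
    counts = sorted(
        (sum(window_start <= y <= window_end for y in years)
         for years in citation_years_per_paper),
        reverse=True,
    )
    # standard h-index on the windowed counts: largest h such that
    # at least h papers have at least h windowed citations each
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h
```

Shrinking the window can only lower the counts, so the timed h-index is bounded above by the ordinary h-index, which is the inertia effect the abstract refers to.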
  4. Schreiber, M.: A variant of the h-index to measure recent performance (2015) 0.01
    Abstract
    The predictive power of the h-index has been shown to depend on citations to rather old publications. This has raised doubts about its usefulness for predicting future scientific achievements. Here, I investigate a variant that considers only recent publications and is therefore more useful in academic hiring processes and for the allocation of research resources. It is simply defined in analogy to the usual h-index, but takes into account only publications from recent years, and it can easily be determined from the ISI Web of Knowledge.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.11, S.2373-2380
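The variant described in this abstract computes an ordinary h-index, but only over publications from recent years. A minimal sketch, assuming papers are given as (publication year, citation count) pairs (the data layout and the parameter names are assumptions for illustration):

```python
def recent_h_index(papers, current_year, years_back=5):
    """h-index restricted to publications from the last `years_back` years.

    `papers`: list of (publication_year, citation_count) pairs.
    (Sketch of the variant's definition, not the published implementation.)
    """
    # keep only recent publications, then sort their counts descending
    counts = sorted(
        (c for year, c in papers if year > current_year - years_back),
        reverse=True,
    )
    # largest h such that at least h recent papers have >= h citations each
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h
```

Unlike the timed h-index of the previous entry, which filters citations by year, this variant filters the publications themselves, so highly cited old papers drop out entirely.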
  5. Schreiber, M.: Do we need the g-index? (2013) 0.00
    Abstract
    Using a very small sample of 8 data sets, it was recently shown by De Visscher (2011) that the g-index is very close to the square root of the total number of citations. It was argued that there is no bibliometrically meaningful difference. Using another, somewhat larger empirical sample of 26 data sets, I show that the difference may be larger, and I argue in favor of the g-index.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2396-2399
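The comparison discussed above is easy to reproduce in outline: the g-index is the largest g for which the g most-cited papers together have at least g² citations, and De Visscher's observation sets it against the square root of the total citation count. A sketch (restricting g to the number of papers, which is one common convention; Schreiber's point is that the two quantities can diverge):

```python
def g_index(citations):
    """g-index: largest g such that the g most-cited papers together
    received at least g**2 citations (g capped at the number of papers)."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(counts, start=1):
        total += c  # running sum of the i most-cited papers
        if total >= i * i:
            g = i
    return g
```

For an evenly spread record like [10, 8, 5, 4, 3] the g-index (5) sits close to the square root of the 30 total citations (about 5.5), but for a skewed record like [25, 4, 1] the cap at the number of papers pushes the two apart (3 vs. about 5.5), illustrating why the difference can matter.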
  6. Schreiber, M.: Empirical evidence for the relevance of fractional scoring in the calculation of percentile rank scores (2013) 0.00
    Abstract
    Fractional scoring has been proposed to avoid inconsistencies in the attribution of publications to percentile rank classes. Uncertainties and ambiguities in the evaluation of percentile ranks can be demonstrated most easily with small data sets. But in larger data sets, too, the often large number of papers with the same citation count leads to the same uncertainties and ambiguities. That these can be avoided by fractional scoring is demonstrated with four different empirical data sets of several thousand publications each, whose papers are assigned to six percentile rank classes. Only by utilizing fractional scoring does the total score of all papers exactly reproduce the theoretical value in each case.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.861-867
  7. Schreiber, M.: Inconsistencies in the highly cited publications indicator (2013) 0.00
    Abstract
    One way of evaluating individual scientists is to determine their number of highly cited publications, where the threshold is given by a large reference set. It is shown that this indicator behaves in a counterintuitive way, leading to inconsistencies in the ranking of different scientists.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.6, S.1298-1302
  8. Schreiber, M.: Inconsistencies of recently proposed citation impact indicators and how to avoid them (2012) 0.00
    Abstract
    It is shown that under certain circumstances, in particular for small data sets, the recently proposed citation impact indicators I3(6PR) and R(6,k) behave inconsistently when additional papers or citations are taken into consideration. Three simple examples are presented in which the indicators fluctuate strongly and the ranking of scientists in the evaluated group is sometimes completely mixed up by minor changes in the database. The erratic behavior is traced to the specific way in which weights are attributed to the six percentile rank classes, specifically for the tied papers. For 100 percentile rank classes, the effects will be less serious. For the six classes, it is demonstrated that a different way of assigning weights avoids these problems, although the nonlinearity of the weights for the different percentile rank classes can still lead to (much less frequent) changes in the ranking. This behavior is not undesirable, because it can be used to correct for differences in citation behavior in different fields. Remaining deviations from the theoretical value R(6,k) = 1.91 can be avoided by a new scoring rule: fractional scoring. Previously proposed consistency criteria are amended by another property, strict independence, at which a performance indicator should aim.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.10, S.2062-2073
