Search (52 results, page 1 of 3)

  • author_ss:"Egghe, L."
  1. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.03
    0.025220998 = product of:
      0.050441995 = sum of:
        0.01841403 = weight(_text_:for in 4992) [ClassicSimilarity], result of:
          0.01841403 = score(doc=4992,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 4992, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.078125 = fieldNorm(doc=4992)
        0.032027967 = product of:
          0.064055935 = sum of:
            0.064055935 = weight(_text_:22 in 4992) [ClassicSimilarity], result of:
              0.064055935 = score(doc=4992,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.38690117 = fieldWeight in 4992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4992)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    14. 2.2012 12:53:22
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.2, S.429
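Each hit is followed by Lucene's "explain" tree for its ClassicSimilarity (TF-IDF) score. As a reading aid, here is a minimal Python sketch, not part of the search output, with constants copied from the tree of result 1: per term, queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm multiply, tf is the square root of the raw term frequency, and nested clauses are scaled by their coord factors.

```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    """Per-term ClassicSimilarity contribution: queryWeight * fieldWeight,
    with tf = sqrt(raw term frequency)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Constants copied from the explain tree of result 1 (doc 4992):
w_for = term_weight(2.0, 1.8775425, 0.047278564, 0.078125)
w_22  = term_weight(2.0, 3.5018296, 0.047278564, 0.078125)

# The "22" clause is nested: coord(1/2) halves it; the outer coord(2/4)
# then halves the sum of both contributions.
score = (w_for + 0.5 * w_22) * 0.5
print(round(score, 9))   # 0.025220998, matching the reported score
```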
  2. Egghe, L.: ¬A universal method of information retrieval evaluation : the "missing" link M and the universal IR surface (2004) 0.02
    0.0191766 = product of:
      0.0383532 = sum of:
        0.019136423 = weight(_text_:for in 2558) [ClassicSimilarity], result of:
          0.019136423 = score(doc=2558,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21557912 = fieldWeight in 2558, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2558)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 2558) [ClassicSimilarity], result of:
              0.038433556 = score(doc=2558,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 2558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2558)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The paper shows that the present evaluation methods in information retrieval (basically recall R and precision P and in some cases fallout F) lack universal comparability in the sense that their values depend on the generality of the IR problem. A solution is given by using all "parts" of the database, including the non-relevant documents and also the not-retrieved documents. It turns out that the solution is given by introducing the measure M, being the fraction of the not-retrieved documents that are relevant (hence the "miss" measure). We prove that - independent of the IR problem or of the IR action - the quadruple (P,R,F,M) belongs to a universal IR surface, being the same for all IR-activities. This universality is then exploited by defining a new measure for evaluation in IR allowing for unbiased comparisons of all IR results. We also show that using only one, two or even three measures from the set {P,R,F,M} necessarily leads to evaluation measures that are non-universal and hence not capable of comparing different IR situations.
    Date
    14. 8.2004 19:17:22
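The abstract does not spell the surface out. Assuming the standard 2×2 contingency table (a relevant retrieved, b non-relevant retrieved, c relevant not retrieved, d non-relevant not retrieved documents), one identity of exactly this universal kind follows directly from the four definitions; a sketch in my notation:

```latex
% P = a/(a+b),  R = a/(a+c),  F = b/(b+d),  M = c/(c+d)
\[
\frac{1-P}{P}=\frac{b}{a},\qquad
\frac{1-R}{R}=\frac{c}{a},\qquad
\frac{1-F}{F}=\frac{d}{b},\qquad
\frac{1-M}{M}=\frac{d}{c},
\]
\[
\text{hence}\qquad
\frac{1-P}{P}\cdot\frac{1-F}{F}
=\frac{d}{a}
=\frac{1-R}{R}\cdot\frac{1-M}{M}.
\]
% One equation ties (P,R,F,M) together independently of the generality
% of the IR problem: a surface in [0,1]^4.
```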
  3. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.02
    0.015132599 = product of:
      0.030265197 = sum of:
        0.0110484185 = weight(_text_:for in 7659) [ClassicSimilarity], result of:
          0.0110484185 = score(doc=7659,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.12446466 = fieldWeight in 7659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=7659)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 7659) [ClassicSimilarity], result of:
              0.038433556 = score(doc=7659,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 7659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7659)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    It is possible, using ISI's Journal Citation Reports (JCR), to calculate average impact factors (AIF) for JCR's subject categories, but it can be more useful to know the global impact factor (GIF) of a subject category and compare the two values. Reports results of a study to compare the relationships between AIFs and GIFs of subjects, based on the particular case of the average impact factor of a subfield versus the impact factor of this subfield as a whole, the difference being studied between an average of quotients, denoted as AQ, and a global average, obtained as a quotient of averages and denoted as GQ. In the case of impact factors, AQ becomes the average impact factor of a field, and GQ becomes its global impact factor. Discusses a number of applications of this technique in the context of informetrics and scientometrics
    Source
    Journal of information science. 22(1996) no.3, S.165-170
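The AQ/GQ distinction is plain arithmetic; a minimal sketch with hypothetical journal data (citation and publication counts invented for illustration):

```python
# Hypothetical journals in one subject category: (citations, publications).
journals = [(100, 50), (10, 10), (1, 5)]

# AQ: average of quotients -> the average impact factor (AIF) of the field.
aq = sum(c / p for c, p in journals) / len(journals)

# GQ: quotient of averages (equivalently, of totals) -> the global
# impact factor (GIF) of the field.
gq = sum(c for c, _ in journals) / sum(p for _, p in journals)

print(round(aq, 3), round(gq, 3))   # 1.067 vs 1.708: the two notions differ
```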
  4. Egghe, L.: New relations between similarity measures for vectors based on vector norms (2009) 0.01
    0.0067657465 = product of:
      0.027062986 = sum of:
        0.027062986 = weight(_text_:for in 2708) [ClassicSimilarity], result of:
          0.027062986 = score(doc=2708,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.3048749 = fieldWeight in 2708, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=2708)
      0.25 = coord(1/4)
    
    Abstract
    The well-known similarity measures Jaccard, Salton's cosine, Dice, and several related overlap measures for vectors are compared. While general relations are not possible to prove, we study these measures on the trajectories of the form ||X|| = a||Y||, where a > 0 is a constant and ||·|| denotes the Euclidean norm of a vector. In this case, direct functional relations between these measures are proved. For Jaccard, we prove that it is a convexly increasing function of Salton's cosine measure, but always smaller than or equal to the latter, hereby explaining a curve experimentally found by Leydesdorff. All the other measures have a linear relation with Salton's cosine, reducing even to equality in case a = 1. Hence, for equally normed vectors (e.g., for normalized vectors) we essentially only have Jaccard's measure and Salton's cosine measure, since all the other measures are equal to the latter.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.2, S.232-239
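A sketch of the measures being compared, assuming the usual real-vector generalizations (Tanimoto form for Jaccard); the vectors are hypothetical:

```python
import math

def dot(x, y): return sum(a * b for a, b in zip(x, y))

def cosine(x, y):  return dot(x, y) / math.sqrt(dot(x, x) * dot(y, y))
def dice(x, y):    return 2 * dot(x, y) / (dot(x, x) + dot(y, y))
def jaccard(x, y):  # Tanimoto generalization for real vectors
    return dot(x, y) / (dot(x, x) + dot(y, y) - dot(x, y))

# Equally normed vectors (the trajectory ||X|| = a||Y|| with a = 1):
x, y = [3.0, 4.0], [5.0, 0.0]       # both have Euclidean norm 5
print(cosine(x, y), dice(x, y), jaccard(x, y))
# 0.6  0.6  0.4285...: Dice collapses to the cosine and Jaccard stays below
# it, consistent with the relations described in the abstract.
```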
  5. Egghe, L.: ¬A good normalized impact and concentration measure (2014) 0.01
    0.006510343 = product of:
      0.026041372 = sum of:
        0.026041372 = weight(_text_:for in 1508) [ClassicSimilarity], result of:
          0.026041372 = score(doc=1508,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29336601 = fieldWeight in 1508, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.078125 = fieldNorm(doc=1508)
      0.25 = coord(1/4)
    
    Abstract
    It is shown that a normalized version of the g-index is a good normalized impact and concentration measure. A proposal for such a measure by Bartolucci is improved.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.10, S.2052-2054
  6. Egghe, L.: ¬A rationale for the Hirsch-index rank-order distribution and a comparison with the impact factor rank-order distribution (2009) 0.01
    0.0064449105 = product of:
      0.025779642 = sum of:
        0.025779642 = weight(_text_:for in 3124) [ClassicSimilarity], result of:
          0.025779642 = score(doc=3124,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 3124, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3124)
      0.25 = coord(1/4)
    
    Abstract
    We present a rationale for the Hirsch-index rank-order distribution and prove that it is a power law (hence a straight line in the log-log scale). This is confirmed by experimental data of Pyykkö and by data produced in this article on 206 mathematics journals. This distribution is of a completely different nature than the impact factor (IF) rank-order distribution which (as proved in a previous article) is S-shaped. This is also confirmed by our example. Only in the log-log scale of the h-index distribution do we notice a concave deviation of the straight line for higher ranks. This phenomenon is discussed.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.10, S.2142-2144
  7. Egghe, L.: Informetric explanation of some Leiden Ranking graphs (2014) 0.01
    0.0063788076 = product of:
      0.02551523 = sum of:
        0.02551523 = weight(_text_:for in 1236) [ClassicSimilarity], result of:
          0.02551523 = score(doc=1236,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.28743884 = fieldWeight in 1236, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=1236)
      0.25 = coord(1/4)
    
    Abstract
    The S-shaped functional relation between the mean citation score and the proportion of top 10% publications for the 500 Leiden Ranking universities is explained using results of the shifted Lotka function. Also the concave or convex relation between the proportion of top 100θ% publications, for different fractions θ, is explained using the obtained new informetric model.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.4, S.737-741
  8. Egghe, L.; Ravichandra Rao, I.K.: Duality revisited : construction of fractional frequency distributions based on two dual Lotka laws (2002) 0.01
    0.0061762533 = product of:
      0.024705013 = sum of:
        0.024705013 = weight(_text_:for in 1006) [ClassicSimilarity], result of:
          0.024705013 = score(doc=1006,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27831143 = fieldWeight in 1006, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=1006)
      0.25 = coord(1/4)
    
    Abstract
    Fractional frequency distributions of, for example, authors with a certain (fractional) number of papers are very irregular and, therefore, not easy to model or to explain. This article gives a first attempt at this by assuming two simple Lotka laws (with exponent 2): one for the number of authors with n papers (total count here) and one for the number of papers with n authors, n ∈ N. Based on an earlier convolution model of Egghe, interpreted and reworked now for discrete scores, we are able to produce theoretical fractional frequency distributions with only one parameter, which are in very close agreement with the practical ones as found in a large dataset produced earlier by Rao. The article also shows that (irregular) fractional frequency distributions are a consequence of Lotka's law, and are not examples of breakdowns of this famous historical law.
    Source
    Journal of the American Society for Information Science and technology. 53(2002) no.10, S.789-801
  9. Egghe, L.; Rousseau, R.: ¬A measure for the cohesion of weighted networks (2003) 0.01
    0.005638122 = product of:
      0.022552488 = sum of:
        0.022552488 = weight(_text_:for in 5157) [ClassicSimilarity], result of:
          0.022552488 = score(doc=5157,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2540624 = fieldWeight in 5157, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5157)
      0.25 = coord(1/4)
    
    Abstract
    Measurement of the degree of interconnectedness in graph-like networks of hyperlinks or citations can indicate the existence of research fields and assist in comparative evaluation of research efforts. In this issue we begin with Egghe and Rousseau, who review compactness measures and investigate the compactness of a network as a weighted graph with dissimilarity values characterizing the arcs between nodes. They make use of a generalization of the Botafogo, Rivlin, Shneiderman (BRS) compactness measure, which treats the distance between unreachable nodes not as infinity but rather as the number of nodes in the network. The dissimilarity values are determined by summing the reciprocals of the weights of the arcs in the shortest chain between two nodes, where no weight is smaller than one. The BRS measure is then the maximum possible sum of the dissimilarity values less the actual sum, divided by the difference between the maximum and the minimum. The Wiener index, the sum of all elements in the dissimilarity matrix divided by two, is then computed for Small's particle physics co-citation data, as well as the BRS measure, the dissimilarity values and shortest paths. The compactness measure for the weighted network is smaller than for the un-weighted one. When the bibliographic coupling network is utilized, it is shown to be less compact than the co-citation network, which indicates that the new measure produces results that conform to an obvious case.
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.3, S.193-202
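The dissimilarity construction in the abstract is concrete enough to sketch. Assumptions made explicit here: arc dissimilarity is 1/weight (all weights ≥ 1), node-pair dissimilarity is the sum along the shortest chain, unreachable pairs get N rather than infinity, and the Max/Min normalization of the BRS measure is left as parameters, since the abstract does not spell out its weighted form.

```python
INF = float("inf")

def dissimilarity_matrix(weights):
    """weights[i][j]: arc weight i->j (0 = no arc); returns the N x N matrix
    of shortest-chain dissimilarities, with unreachable pairs set to N."""
    n = len(weights)
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and weights[i][j]:
                d[i][j] = 1.0 / weights[i][j]   # reciprocal of the arc weight
    for k in range(n):                          # Floyd-Warshall shortest chains
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return [[n if v == INF else v for v in row] for row in d]

def brs(d, max_sum, min_sum):
    """Generalized BRS compactness: (Max - actual sum) / (Max - Min)."""
    total = sum(v for row in d for v in row)
    return (max_sum - total) / (max_sum - min_sum)

d = dissimilarity_matrix([[0, 2, 0], [2, 0, 1], [0, 1, 0]])  # 3-node chain
wiener = sum(v for row in d for v in row) / 2  # Wiener index, per the abstract
print(wiener)                                  # 3.0
```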
  10. Egghe, L.: Vector retrieval, fuzzy retrieval and the universal fuzzy IR surface for IR evaluation (2004) 0.01
    0.0055814567 = product of:
      0.022325827 = sum of:
        0.022325827 = weight(_text_:for in 2531) [ClassicSimilarity], result of:
          0.022325827 = score(doc=2531,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 2531, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2531)
      0.25 = coord(1/4)
    
    Abstract
    It is shown that vector information retrieval (IR) and general fuzzy IR use two types of fuzzy set operations: the original "Zadeh min-max operations" and the so-called "probabilistic sum and algebraic product operations". The universal IR surface, valid for classical 0-1 IR (i.e. where ordinary sets are used) and used in IR evaluation, is extended to and reproved for vector IR, using the probabilistic sum and algebraic product model. We also show (by counterexample) that using the "Zadeh min-max" fuzzy model yields a breakdown of this IR surface.
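The two operation pairs named in the abstract are standard; a minimal sketch on membership values in [0, 1] (function names are mine):

```python
def zadeh_and(a, b): return min(a, b)        # Zadeh min-max model
def zadeh_or(a, b):  return max(a, b)

def prob_and(a, b):  return a * b            # algebraic product
def prob_or(a, b):   return a + b - a * b    # probabilistic sum

a, b = 0.7, 0.4
print(zadeh_and(a, b), zadeh_or(a, b))   # 0.4 0.7
print(prob_and(a, b), prob_or(a, b))     # 0.28 0.82
```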
  11. Egghe, L.; Rousseau, R.; Rousseau, S.: TOP-curves (2007) 0.01
    0.0055814567 = product of:
      0.022325827 = sum of:
        0.022325827 = weight(_text_:for in 50) [ClassicSimilarity], result of:
          0.022325827 = score(doc=50,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 50, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=50)
      0.25 = coord(1/4)
    
    Abstract
    Several characteristics of classical Lorenz curves make them unsuitable for the study of a group of top performers. TOP-curves, defined as a kind of mirror image of TIP-curves used in poverty studies, are shown to possess the properties necessary for adequate empirical ranking of various data arrays, based on the properties of the highest performers (i.e., the core). TOP-curves and essential TOP-curves, also introduced in this article, simultaneously represent the incidence, intensity, and inequality among the top. It is shown that TOP-dominance partial order, introduced in this article, is stronger than Lorenz dominance order. In this way, this article contributes to the study of cores, a central issue in applied informetrics.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.777-785
  12. Egghe, L.: ¬A new short proof of Naranan's theorem, explaining Lotka's law and Zipf's law (2010) 0.01
    0.0055814567 = product of:
      0.022325827 = sum of:
        0.022325827 = weight(_text_:for in 3432) [ClassicSimilarity], result of:
          0.022325827 = score(doc=3432,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 3432, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3432)
      0.25 = coord(1/4)
    
    Abstract
    Naranan's important theorem, published in Nature in 1970, states that if the number of journals grows exponentially and if the number of articles in each journal grows exponentially (at the same rate for each journal), then the system satisfies Lotka's law, and a formula for Lotka's exponent is given as a function of the growth rates of the journals and the articles. This brief communication re-proves this result by showing that the system satisfies Zipf's law, which is equivalent to Lotka's law. The proof is short and algebraic and does not use infinitesimal arguments.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2581-2583
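The paper's own proof is algebraic; as orientation, the classical continuous argument (not the paper's, and in my notation) shows where the exponent comes from:

```latex
% Assumptions of the theorem: at time t there are J(t) = J_0 e^{at}
% journals, and a journal founded at time s has n(s) = e^{b(T-s)}
% articles at the present time T. Journals with at least n articles
% are those founded early enough:
%   e^{b(T-s)} \ge n  \iff  s \le T - \tfrac{\ln n}{b},
% so their number is
\[
J\!\left(T - \tfrac{\ln n}{b}\right) = J_0\, e^{aT}\, n^{-a/b},
\]
% a Zipf/Pareto-type law; differentiating in n gives Lotka's law
\[
f(n) \propto n^{-(1 + a/b)},
\]
% i.e. the Lotka exponent is 1 + a/b, a function of the two growth rates.
```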
  13. Egghe, L.: Theory of the topical coverage of multiple databases (2013) 0.01
    0.0055814567 = product of:
      0.022325827 = sum of:
        0.022325827 = weight(_text_:for in 526) [ClassicSimilarity], result of:
          0.022325827 = score(doc=526,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 526, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=526)
      0.25 = coord(1/4)
    
    Abstract
    We present a model that describes which fraction of the literature on a certain topic we will find when we use n (n = 1, 2, ...) databases. It is a generalization of the theory of discovering usability problems. We prove that, in all practical cases, this fraction is a concave function of n, the number of databases used, thereby explaining some graphs that exist in the literature. We also study limiting features of this fraction for very high n and characterize the case in which we find all literature on a certain topic for n high enough.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.1, S.126-131
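The model it generalizes (per the abstract) is the usability-problem discovery model, in which each of n sources independently covers a fraction p of the items; a sketch with a hypothetical p:

```python
p = 0.4                                  # hypothetical single-database coverage
for n in range(1, 7):
    found = 1 - (1 - p) ** n             # fraction of the literature found
    print(n, round(found, 3))
# 0.4, 0.64, 0.784, 0.87, 0.922, 0.953: increasing and concave in n,
# approaching 1 - i.e. full coverage - for n high enough.
```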
  14. Egghe, L.: Special features of the author - publication relationship and a new explanation of Lotka's law based on convolution theory (1994) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 5068) [ClassicSimilarity], result of:
          0.022096837 = score(doc=5068,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 5068, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=5068)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 45(1994) no.6, S.422-427
  15. Egghe, L.; Rousseau, R.; Hooydonk, G. van: Methods for accrediting publications to authors or countries : consequences for evaluation studies (2000) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 4384) [ClassicSimilarity], result of:
          0.022096837 = score(doc=4384,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 4384, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=4384)
      0.25 = coord(1/4)
    
    Abstract
    One aim of science evaluation studies is to determine quantitatively the contribution of different players (authors, departments, countries) to the whole system. This information is then used to study the evolution of the system, for instance to gauge the results of special national or international programs. Taking articles as our basic data, we want to determine the exact relative contribution of each coauthor or each country. These numbers are brought together to obtain country scores, or department scores, etc. It turns out, as we will show in this article, that different scoring methods can yield totally different rankings. Consequently, a ranking between countries, universities, research groups or authors based on one particular accrediting method does not contain an absolute truth about their relative importance
    Source
    Journal of the American Society for Information Science. 51(2000) no.2, S.145-157
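A toy illustration of the ranking reversals the abstract describes, using two standard accrediting schemes (total vs. fractional counting) on invented data:

```python
from collections import defaultdict

# Hypothetical papers, each given as the list of its coauthors' countries.
papers = [["BE"], ["BE"],
          ["NL", "US", "US", "US"],
          ["NL", "US", "US", "US"],
          ["NL", "US", "US", "US"]]

total, fractional = defaultdict(int), defaultdict(float)
for authors in papers:
    for c in set(authors):
        total[c] += 1                      # total counting: one credit per paper
    for c in authors:
        fractional[c] += 1 / len(authors)  # fractional: credit split by coauthors

print(dict(total))       # {'BE': 2, 'NL': 3, 'US': 3}: NL ahead of BE
print(dict(fractional))  # {'BE': 2.0, 'NL': 0.75, 'US': 2.25}: BE ahead of NL
```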
  16. Egghe, L.: Zipfian and Lotkaian continuous concentration theory (2005) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 3678) [ClassicSimilarity], result of:
          0.022096837 = score(doc=3678,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 3678, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=3678)
      0.25 = coord(1/4)
    
    Abstract
    In this article concentration (i.e., inequality) aspects of the functions of Zipf and of Lotka are studied. Since both functions are power laws (i.e., they are mathematically the same), it suffices to develop one concentration theory for power laws and apply it twice for the different interpretations of the laws of Zipf and Lotka. After a brief repetition of the functional relationships between Zipf's law and Lotka's law, we prove that Price's law of concentration is equivalent to Zipf's law. A major part of this article is devoted to the development of continuous concentration theory, based on Lorenz curves. The Lorenz curve for power functions is calculated and, based on this, some important concentration measures, such as those of Gini, Theil, and the variation coefficient, are derived. Using Lorenz curves, it is shown that the concentration of a power law increases with its exponent, and this result is interpreted in terms of the functions of Zipf and Lotka.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.9, S.935-945
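A sketch of the kind of calculation involved, in my normalization: take a Zipf-type power function on normalized ranks and compute its Lorenz curve and Gini index:

```latex
% Zipf-type power function g(r) = C r^{-\beta}, 0 < \beta < 1, on ranks
% r \in [0,1]. The share of the total held by the top fraction x is the
% Lorenz curve
\[
L(x) = \frac{\int_0^x r^{-\beta}\,dr}{\int_0^1 r^{-\beta}\,dr} = x^{1-\beta},
\]
% and the Gini index follows as
\[
G = 2\int_0^1 \bigl(L(x) - x\bigr)\,dx = \frac{\beta}{2-\beta},
\]
% which indeed increases with the exponent \beta, as the abstract states.
```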
  17. Egghe, L.; Ravichandra Rao, I.K.: Study of different h-indices for groups of authors (2008) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 1878) [ClassicSimilarity], result of:
          0.022096837 = score(doc=1878,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 1878, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=1878)
      0.25 = coord(1/4)
    
    Abstract
    In this article, for any group of authors, we define three different h-indices. First, there is the successive h-index h2 based on the ranked list of authors and their h-indices h1 as defined by Schubert (2007). Next, there is the h-index hP based on the ranked list of authors and their number of publications. Finally, there is the h-index hC based on the ranked list of authors and their number of citations. We present formulae for these three indices in Lotkaian informetrics, from which it also follows that h2 < hP < hC. We give a concrete example of a group of 167 authors on the topic optical flow estimation. Besides these three h-indices, we also calculate the two-by-two Spearman rank correlation coefficient and prove that these rankings are significantly related.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.8, S.1276-1281
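A minimal sketch of the three group indices on invented data; h() implements the usual Hirsch rule (the largest h such that at least h items score ≥ h):

```python
def h(values):
    """Hirsch rule; works because the sorted values are non-increasing."""
    vals = sorted(values, reverse=True)
    return sum(1 for i, v in enumerate(vals, start=1) if v >= i)

# Hypothetical group: per-paper citation counts for each author.
authors = {"A": [10, 8, 5, 4, 3], "B": [6, 6, 2], "C": [1, 1, 1, 1]}

h1 = [h(cites) for cites in authors.values()]   # individual h-indices
h2 = h(h1)                                      # successive h-index (Schubert)
hP = h([len(c) for c in authors.values()])      # based on publication counts
hC = h([sum(c) for c in authors.values()])      # based on citation counts
print(h1, h2, hP, hC)
# [4, 2, 1] 2 3 3: consistent with the ordering h2 < hP < hC proved in the
# Lotkaian framework (ties can occur in tiny examples like this one).
```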
  18. Egghe, L.; Guns, R.; Rousseau, R.: Thoughts on uncitedness : Nobel laureates and Fields medalists as case studies (2011) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 4994) [ClassicSimilarity], result of:
          0.022096837 = score(doc=4994,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 4994, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=4994)
      0.25 = coord(1/4)
    
    Abstract
    Contrary to what one might expect, Nobel laureates and Fields medalists have a rather large fraction (10% or more) of uncited publications. This is the case for (in total) 75 examined researchers from the fields of mathematics (Fields medalists), physics, chemistry, and physiology or medicine (Nobel laureates). We study several indicators for these researchers, including the h-index, total number of publications, average number of citations per publication, the number (and fraction) of uncited publications, and their interrelations. The most remarkable result is a positive correlation between the h-index and the number of uncited articles. We also present a Lotkaian model, which partially explains the empirically found regularities.
    Footnote
    Vgl.: Erratum. In: Journal of the American Society for Information Science and Technology. 63(2012) no.2, S.429.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.8, S.1637-1644
  19. Egghe, L.: Note on a possible decomposition of the h-Index (2013) 0.01
    0.0055242092 = product of:
      0.022096837 = sum of:
        0.022096837 = weight(_text_:for in 683) [ClassicSimilarity], result of:
          0.022096837 = score(doc=683,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 683, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=683)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.871
  20. Egghe, L.: On the law of Zipf-Mandelbrot for multi-word phrases (1999) 0.01
    0.0052082743 = product of:
      0.020833097 = sum of:
        0.020833097 = weight(_text_:for in 3058) [ClassicSimilarity], result of:
          0.020833097 = score(doc=3058,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.23469281 = fieldWeight in 3058, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=3058)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 50(1999) no.3, S.233-241