Search (19 results, page 1 of 1)

  • author_ss:"Egghe, L."
  1. Egghe, L.; Rousseau, R.: Introduction to informetrics : quantitative methods in library, documentation and information science (1990) 0.03
    
    Date
    29. 2.2008 19:02:46
    LCSH
    Information science / Statistical methods
    Documentation / Statistical methods
    Library science / Statistical methods
    Subject
    Information science / Statistical methods
    Documentation / Statistical methods
    Library science / Statistical methods
  2. Egghe, L.: Properties of the n-overlap vector and n-overlap similarity theory (2006) 0.02
    
    Abstract
    In the first part of this article the author defines the n-overlap vector whose coordinates consist of the fraction of the objects (e.g., books, N-grams, etc.) that belong to 1, 2, ..., n sets (more generally: families) (e.g., libraries, databases, etc.). With the aid of the Lorenz concentration theory, a theory of n-overlap similarity is conceived together with corresponding measures, such as the generalized Jaccard index (generalizing the well-known Jaccard index in case n = 2). Next, the distributional form of the n-overlap vector is determined assuming certain distributions of the objects' and of the set (family) sizes. In this section the decreasing power law and decreasing exponential distribution are explained for the n-overlap vector. Both item (token) n-overlap and source (type) n-overlap are studied. The n-overlap properties of objects indexed by a hierarchical system (e.g., books indexed by numbers from a UDC or Dewey system or by N-grams) are presented in the final section. The author shows how the results given in the previous section can be applied as well as how the Lorenz order of the n-overlap vector is respected by an increase or a decrease of the level of refinement in the hierarchical system (e.g., the value N in N-grams).
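The n-overlap vector described above is easy to sketch directly: coordinate k is the fraction of all objects belonging to exactly k of the n sets. A minimal Python sketch (the toy sets and the helper name are illustrative, not from the article):

```python
from collections import Counter

def n_overlap_vector(sets):
    """Coordinate k-1 is the fraction of objects belonging to exactly k of the sets."""
    membership = Counter()
    for s in sets:
        for obj in s:
            membership[obj] += 1
    total = len(membership)
    return [sum(1 for c in membership.values() if c == k) / total
            for k in range(1, len(sets) + 1)]

# Three toy "libraries" holding book identifiers
libs = [{"a", "b", "c"}, {"b", "c", "d"}, {"c", "e"}]
print(n_overlap_vector(libs))  # [0.6, 0.2, 0.2]
```

For n = 2, the second coordinate is the classical Jaccard index |A∩B|/|A∪B|, the case the generalized index extends.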
    Date
    3. 1.2007 14:26:29
  3. Egghe, L.: ¬A universal method of information retrieval evaluation : the "missing" link M and the universal IR surface (2004) 0.01
    
    Abstract
    The paper shows that the present evaluation methods in information retrieval (basically recall R and precision P, and in some cases fallout F) lack universal comparability in the sense that their values depend on the generality of the IR problem. A solution is given by using all "parts" of the database, including the non-relevant documents and also the not-retrieved documents. It turns out that the solution is given by introducing the measure M, being the fraction of the not-retrieved documents that are relevant (hence the "miss" measure). We prove that - independent of the IR problem or of the IR action - the quadruple (P,R,F,M) belongs to a universal IR surface, being the same for all IR activities. This universality is then exploited by defining a new measure for evaluation in IR allowing for unbiased comparisons of all IR results. We also show that using only one, two or even three measures from the set {P,R,F,M} necessarily leads to evaluation measures that are non-universal and hence not capable of comparing different IR situations.
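The abstract does not spell out the surface equation, but writing a, b, c, d for the counts retrieved-relevant, retrieved-non-relevant, not-retrieved-relevant, and not-retrieved-non-relevant, the quadruple always satisfies the algebraic identity P/(1-P) · (1-R)/R · F/(1-F) · (1-M)/M = 1 regardless of the counts, which is one surface of the kind claimed. A quick numerical check (counts are invented):

```python
def pr_fm(a, b, c, d):
    """P, R, F, M from the four cells of the retrieval table:
    a = retrieved & relevant, b = retrieved & non-relevant,
    c = not retrieved & relevant, d = not retrieved & non-relevant."""
    P = a / (a + b)  # precision
    R = a / (a + c)  # recall
    F = b / (b + d)  # fallout
    M = c / (c + d)  # miss
    return P, R, F, M

def surface(P, R, F, M):
    """P/(1-P) * (1-R)/R * F/(1-F) * (1-M)/M -- equals 1 for any consistent quadruple."""
    return (P / (1 - P)) * ((1 - R) / R) * (F / (1 - F)) * ((1 - M) / M)

P, R, F, M = pr_fm(a=30, b=20, c=10, d=40)
print(round(surface(P, R, F, M), 10))  # 1.0
```

The identity follows directly from the definitions: each count cancels pairwise in the product.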
    Date
    14. 8.2004 19:17:22
  4. Egghe, L.: ¬The measures precision, recall, fallout and miss as a function of the number of retrieved documents and their mutual interrelations (2008) 0.01
    
    Abstract
    In this paper, for the first time, we present global curves for the measures precision, recall, fallout and miss as a function of the number of retrieved documents. Different curves apply to different retrieval situations, for which we give exact definitions in terms of a retrieval density function: perverse retrieval, perfect retrieval, random retrieval, normal retrieval, hereby extending results of Buckland and Gey and of Egghe in the following sense: mathematically more advanced methods yield a better insight into these curves, more types of retrieval are considered and, very importantly, the theory is developed for the "complete" set of measures: precision, recall, fallout and miss. Next we study the interrelationships between precision, recall, fallout and miss in these different types of retrieval, hereby again extending results of Buckland and Gey (incl. a correction) and of Egghe. In the case of normal retrieval we prove that precision as a function of recall and recall as a function of miss are concavely decreasing relationships, while recall as a function of fallout is a concavely increasing relationship. We also show, by producing examples, that the relationships between fallout and precision, miss and precision, and miss and fallout are not always convex or concave.
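Such curves can be generated for a toy ranked list. A minimal sketch (the relevance pattern is invented) computing precision, recall, fallout and miss after each retrieved document:

```python
def prfm_curves(relevance):
    """P, R, F, M after retrieving the first t documents of a ranked list
    (relevance[i] = 1 if document i is relevant, else 0)."""
    rel = sum(relevance)
    nonrel = len(relevance) - rel
    curves = []
    a = b = 0  # retrieved relevant / retrieved non-relevant
    for r in relevance:
        a, b = a + r, b + (1 - r)
        c, d = rel - a, nonrel - b
        miss = c / (c + d) if c + d else 0.0  # everything retrieved at the end
        curves.append((a / (a + b), a / rel, b / nonrel, miss))
    return curves

cs = prfm_curves([1, 1, 0, 1, 0, 0, 1, 0])
recall = [t[1] for t in cs]
fallout = [t[2] for t in cs]
assert recall == sorted(recall) and fallout == sorted(fallout)  # both nondecreasing
```

Recall and fallout are always nondecreasing in the number retrieved; precision and miss are not monotone in general, which is why the interrelations need case-by-case analysis.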
  5. Egghe, L.: Special features of the author - publication relationship and a new explanation of Lotka's law based on convolution theory (1994) 0.01
    
  6. Egghe, L.: Mathematical theories of citation (1998) 0.01
    
    Abstract
    Focuses on possible mathematical theories of citation and on the intrinsic problems related to it. Sheds light on aspects of mathematical complexity as encountered in, for example, fractal theory and Mandelbrot's law. Also discusses dynamical aspects of citation theory as reflected in evolutions of journal rankings, centres of gravity or of the set of source journals. Makes some comments in this connection on growth and obsolescence
  7. Egghe, L.: Zipfian and Lotkaian continuous concentration theory (2005) 0.01
    
    Abstract
    In this article concentration (i.e., inequality) aspects of the functions of Zipf and of Lotka are studied. Since both functions are power laws (i.e., they are mathematically the same) it suffices to develop one concentration theory for power laws and apply it twice for the different interpretations of the laws of Zipf and Lotka. After a brief repetition of the functional relationships between Zipf's law and Lotka's law, we prove that Price's law of concentration is equivalent with Zipf's law. A major part of this article is devoted to the development of continuous concentration theory, based on Lorenz curves. The Lorenz curve for power functions is calculated and, based on this, so are some important concentration measures such as the ones of Gini, Theil, and the variation coefficient. Using Lorenz curves, it is shown that the concentration of a power law increases with its exponent and this result is interpreted in terms of the functions of Zipf and Lotka.
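For a Lorenz curve of power form L(x) = x**theta (theta >= 1, the standard Lotkaian case, assumed here without derivation), the Gini index works out to (theta-1)/(theta+1), so concentration indeed grows with the exponent. A numerical check of that closed form:

```python
def gini_power_lorenz(theta, steps=100_000):
    """Gini index 1 - 2 * integral of L(x) over [0,1] for the Lorenz curve
    L(x) = x**theta, integrated with the midpoint rule."""
    dx = 1.0 / steps
    area = sum(((i + 0.5) * dx) ** theta for i in range(steps)) * dx
    return 1.0 - 2.0 * area

for theta in (1.0, 2.0, 4.0):
    numeric = gini_power_lorenz(theta)
    closed = (theta - 1.0) / (theta + 1.0)
    print(theta, round(numeric, 4), round(closed, 4))  # numeric matches closed form
```

theta = 1 is the diagonal (perfect equality, Gini 0); larger exponents bend the curve away from it.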
  8. Egghe, L.: Theory of the topical coverage of multiple databases (2013) 0.01
    
    Abstract
    We present a model that describes which fraction of the literature on a certain topic we will find when we use n (n = 1, 2, ...) databases. It is a generalization of the theory of discovering usability problems. We prove that, in all practical cases, this fraction is a concave function of n, the number of used databases, thereby explaining some graphs that exist in the literature. We also study limiting features of this fraction for n very high, and we characterize the case in which we find all the literature on a certain topic for n high enough.
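The simplest instance of such a model (assuming each database independently covers a fraction p of the topic literature; this is the usability-testing-style special case, not necessarily the article's full generalization) already shows the claimed shape: the found fraction 1 - (1-p)**n is increasing and concave in n.

```python
def coverage(p, n):
    """Fraction of the topic literature found with n databases, each
    independently covering a fraction p of it."""
    return 1.0 - (1.0 - p) ** n

vals = [coverage(0.4, n) for n in range(0, 8)]
second_diffs = [vals[i + 2] - 2 * vals[i + 1] + vals[i] for i in range(6)]
assert all(b > a for a, b in zip(vals, vals[1:]))  # strictly increasing in n
assert all(d < 0 for d in second_diffs)            # concave in n
```

Each extra database adds less new literature than the previous one, which is the concavity seen in the empirical graphs.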
  9. Egghe, L.; Rousseau, R.; Hooydonk, G. van: Methods for accrediting publications to authors or countries : consequences for evaluation studies (2000) 0.01
    
    Abstract
    One aim of science evaluation studies is to determine quantitatively the contribution of different players (authors, departments, countries) to the whole system. This information is then used to study the evolution of the system, for instance to gauge the results of special national or international programs. Taking articles as our basic data, we want to determine the exact relative contribution of each coauthor or each country. These numbers are brought together to obtain country scores, or department scores, etc. It turns out, as we will show in this article, that different scoring methods can yield totally different rankings. Consequently, a ranking of countries, universities, research groups or authors based on one particular accrediting method does not contain an absolute truth about their relative importance.
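The ranking reversal is easy to reproduce by comparing "total" counting (each participating country gets 1 per article) with fractional counting (authorship shares). The data below is invented for illustration:

```python
from collections import defaultdict

# Invented articles: one country entry per coauthor
articles = [["A", "X", "X", "X"]] * 3 + [["B"], ["B"]]

total, fractional = defaultdict(float), defaultdict(float)
for countries in articles:
    for c in set(countries):
        total[c] += 1.0                        # total counting
    for c in countries:
        fractional[c] += 1.0 / len(countries)  # fractional counting

print(dict(total), dict(fractional))
assert total["A"] > total["B"]            # total counting: A outranks B ...
assert fractional["B"] > fractional["A"]  # ... fractional counting reverses it
```

Country A rides along on large collaborations, B publishes alone; the two methods rank them in opposite order.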
  10. Egghe, L.: ¬The amount of actions needed for shelving and reshelving (1996) 0.00
    
    Abstract
    Discusses the number of actions (or time) needed to organize library shelves. Studies two types of problem: organizing a library shelf out of an unordered pile of books, and putting an existing shelf of books in rough order. Uses results from information theory as well as from rank order statistics (runs). Draws conclusions about the advised frequency with which these actions should be undertaken.
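One concrete bound of the information-theoretic kind the abstract alludes to (an illustration, not the article's own formula): fully ordering n books by pairwise comparisons requires at least log2(n!) comparisons, since the order carries log2(n!) bits and one comparison yields at most one bit.

```python
import math

def min_comparisons(n):
    """Information-theoretic lower bound on pairwise comparisons
    needed to fully order n books: ceil(log2(n!))."""
    return math.ceil(math.log2(math.factorial(n)))

for n in (10, 100, 1000):
    print(n, min_comparisons(n))  # 22, 525, 8530
```

The bound grows as n log n, which is why reshelving effort per book rises with shelf size.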
  11. Egghe, L.; Rousseau, R.: ¬The influence of publication delays on the observed aging distribution of scientific literature (2000) 0.00
    
    Abstract
    Observed aging curves are influenced by publication delays. In this article, we show how the 'undisturbed' aging function and the publication delay combine to give the observed aging function. This combination is performed by a mathematical operation known as convolution. Examples are given, such as the convolution of 2 Poisson distributions, 2 exponential distributions, and 2 lognormal distributions. A paradox is observed between theory and real data.
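The convolution operation can be checked numerically on the Poisson example: the convolution of two Poisson pmfs with means 2 and 3 is the Poisson pmf with mean 5 (a standard fact, used here to validate the discrete convolution; the parameters are arbitrary).

```python
import math

def poisson_pmf(lam, kmax):
    """Poisson probabilities P(K = k) for k = 0..kmax."""
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(kmax + 1)]

def convolve(p, q):
    """Discrete convolution: distribution of the sum of two independent variables."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

s = convolve(poisson_pmf(2.0, 60), poisson_pmf(3.0, 60))
expected = poisson_pmf(5.0, 60)
assert max(abs(a - b) for a, b in zip(s, expected)) < 1e-12
```

In the article's setting, p would be the undisturbed aging distribution and q the publication-delay distribution; s is what is actually observed.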
  12. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.00
    
    Date
    14. 2.2012 12:53:22
  13. Egghe, L.: Relations between the continuous and the discrete Lotka power function (2005) 0.00
    
    Abstract
    The discrete Lotka power function describes the number of sources (e.g., authors) with n = 1, 2, 3, ... items (e.g., publications). As in econometrics, informetrics theory requires functions of a continuous variable j, replacing the discrete variable n. Now j represents item densities instead of numbers of items. The continuous Lotka power function describes the density of sources with item density j. The discrete Lotka function is the one obtained empirically from data; the continuous Lotka function is the one needed when one wants to apply Lotkaian informetrics, i.e., to determine properties that can be derived from the (continuous) model. It is, hence, important to know the relations between the two models. We show that the exponents of the discrete Lotka function (if not too high, i.e., within limits encountered in practice) and of the continuous Lotka function are approximately the same. This is important to know when applying theoretical results (from the continuous model) to practical data.
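The closeness of the two exponents can be illustrated numerically: bin the continuous Lotka density j**(-alpha) into integer counts and fit a log-log slope to the binned values; the fitted discrete exponent stays near alpha. This is a rough sketch under the stated power-law model, using plain least squares rather than anything from the article:

```python
import math

def binned_counts(alpha, nmax):
    """Discrete counts from integrating the continuous density j**(-alpha)
    over [n, n+1) (the constant factor only shifts the log-log intercept)."""
    a1 = alpha - 1.0
    return [(n ** -a1 - (n + 1) ** -a1) / a1 for n in range(1, nmax + 1)]

def loglog_slope(ys):
    """Least-squares slope of log y against log n, n = 1..len(ys)."""
    xs = [math.log(n) for n in range(1, len(ys) + 1)]
    ls = [math.log(y) for y in ys]
    mx, ml = sum(xs) / len(xs), sum(ls) / len(ls)
    num = sum((x - mx) * (l - ml) for x, l in zip(xs, ls))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

alpha = 2.5
slope = loglog_slope(binned_counts(alpha, 200))
print(round(-slope, 2))  # slope magnitude stays close to alpha
```

The small remaining gap comes from the low-n bins, where the binned counts deviate most from a pure power; this is the "not too high" caveat in the abstract.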
  14. Egghe, L.: Mathematical theory of the h- and g-index in case of fractional counting of authorship (2008) 0.00
    
  15. Egghe, L.: Good properties of similarity measures and their complementarity (2010) 0.00
    
    Abstract
    Similarity measures, such as the ones of Jaccard, Dice, or Cosine, measure the similarity between two vectors. A good property for similarity measures would be that, if we add a constant vector to both vectors, then the similarity must increase. We show that Dice and Jaccard satisfy this property while Cosine and both overlap measures do not. Adding a constant vector is called, in Lorenz concentration theory, "nominal increase" and we show that the stronger "transfer principle" is not a required good property for similarity measures. Another good property is that, when we have two vectors and if we add one of these vectors to both vectors, then the similarity must increase. Now Dice, Jaccard, Cosine, and one of the overlap measures satisfy this property, while the other overlap measure does not. Also a variant of this latter property is studied.
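A single numerical check of the first property (one counterexample suffices for Cosine; the vectors are invented): adding the constant vector (1, 0) to both vectors raises the Dice similarity but lowers the Cosine similarity.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    """Cosine similarity: normalized inner product."""
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def dice(u, v):
    """Dice similarity for vectors: 2<u,v> / (|u|^2 + |v|^2)."""
    return 2 * dot(u, v) / (dot(u, u) + dot(v, v))

def add(u, v):
    return [a + b for a, b in zip(u, v)]

x, y, c = [1, 2], [2, 4], [1, 0]
x2, y2 = add(x, c), add(y, c)
print(dice(x, y), "->", dice(x2, y2))      # Dice increases
print(cosine(x, y), "->", cosine(x2, y2))  # Cosine decreases
```

x and y start out parallel (Cosine 1), so any perturbation by a non-parallel constant vector can only pull Cosine down, while Dice still registers the added common mass.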
  16. Egghe, L.: ¬A noninformetric analysis of the relationship between citation age and journal productivity (2001) 0.00
    
    Date
    29. 9.2001 13:59:34
  17. Egghe, L.: Influence of adding or deleting items and sources on the h-index (2010) 0.00
    
    Date
    31. 5.2010 15:02:29
  18. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.00
    
    Source
    Journal of information science. 22(1996) no.3, S.165-170
  19. Egghe, L.: Untangling Herdan's law and Heaps' law : mathematical and informetric arguments (2007) 0.00
    
    Date
    29. 4.2007 19:51:08