Search (60 results, page 1 of 3)

  • Filter: author_ss:"Egghe, L."
  1. Egghe, L.: Expansion of the field of informetrics : the second special issue (2006) 0.18
    0.1810405 = product of:
      0.27156073 = sum of:
        0.022526272 = weight(_text_:of in 7119) [ClassicSimilarity], result of:
          0.022526272 = score(doc=7119,freq=4.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 7119, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=7119)
        0.24903446 = product of:
          0.49806893 = sum of:
            0.49806893 = weight(_text_:informetrics in 7119) [ClassicSimilarity], result of:
              0.49806893 = score(doc=7119,freq=4.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                1.3787029 = fieldWeight in 7119, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7119)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Introduction to a "Special Issue on Informetrics"
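The scoring tree above is Lucene's ClassicSimilarity explain output: each term score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = sqrt(termFreq); matched clauses are summed and scaled by the coord factors. As a minimal sketch, the displayed total 0.1810405 for this record can be reproduced directly from the numbers shown:

```python
import math

# Reproduce the ClassicSimilarity explain tree for result 1 (doc 7119).
# Per-term score = queryWeight * fieldWeight, where
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)
def term_score(idf, query_norm, freq, field_norm):
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.049130294  # taken from the explain output above

s_of = term_score(idf=1.5637573, query_norm=QUERY_NORM,
                  freq=4.0, field_norm=0.09375)
s_informetrics = term_score(idf=7.3530817, query_norm=QUERY_NORM,
                            freq=4.0, field_norm=0.09375)

# "informetrics" sits in a 2-clause sub-query with 1 match: coord(1/2).
# The outer query has 3 clauses of which 2 matched: coord(2/3).
total = (s_of + s_informetrics * (1 / 2)) * (2 / 3)
print(round(total, 7))  # matches the displayed score 0.1810405
```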
  2. Egghe, L.: Type/Token-Taken informetrics (2003) 0.18
    0.17690733 = product of:
      0.26536098 = sum of:
        0.022011995 = weight(_text_:of in 1608) [ClassicSimilarity], result of:
          0.022011995 = score(doc=1608,freq=22.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.28651062 = fieldWeight in 1608, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1608)
        0.24334899 = product of:
          0.48669797 = sum of:
            0.48669797 = weight(_text_:informetrics in 1608) [ClassicSimilarity], result of:
              0.48669797 = score(doc=1608,freq=22.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                1.347227 = fieldWeight in 1608, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1608)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Type/Token-Taken informetrics is a new part of informetrics that studies the use of items rather than the items themselves. Here, items are the objects that are produced by the sources (e.g., journals producing articles, authors producing papers, etc.). In linguistics a source is also called a type (e.g., a word), and an item a token (e.g., the use of a word in texts). In informetrics, types that occur often, for example, in a database will also be requested often, for example, in information retrieval. The relative use of these occurrences will be higher than the relative occurrences themselves; hence the name Type/Token-Taken informetrics. This article studies the frequency distribution of Type/Token-Taken informetrics, starting from that of Type/Token informetrics (i.e., source-item relationships). We also study the average number μ* of item uses in Type/Token-Taken informetrics and compare it with the classical average number μ in Type/Token informetrics. We show that μ* >= μ always, and that μ* is an increasing function of μ. A method is presented to actually calculate μ* from μ and a given α, which is the exponent in Lotka's frequency distribution of Type/Token informetrics. We leave open the problem of developing non-Lotkaian Type/Token-Taken informetrics.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.7, S.603-610
  3. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.16
    0.16458546 = product of:
      0.24687818 = sum of:
        0.03084537 = weight(_text_:of in 7659) [ClassicSimilarity], result of:
          0.03084537 = score(doc=7659,freq=30.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.4014868 = fieldWeight in 7659, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=7659)
        0.2160328 = sum of:
          0.17609395 = weight(_text_:informetrics in 7659) [ClassicSimilarity], result of:
            0.17609395 = score(doc=7659,freq=2.0), product of:
              0.36125907 = queryWeight, product of:
                7.3530817 = idf(docFreq=76, maxDocs=44218)
                0.049130294 = queryNorm
              0.48744506 = fieldWeight in 7659, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.3530817 = idf(docFreq=76, maxDocs=44218)
                0.046875 = fieldNorm(doc=7659)
          0.039938856 = weight(_text_:22 in 7659) [ClassicSimilarity], result of:
            0.039938856 = score(doc=7659,freq=2.0), product of:
              0.17204592 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049130294 = queryNorm
              0.23214069 = fieldWeight in 7659, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=7659)
      0.6666667 = coord(2/3)
    
    Abstract
    It is possible, using ISI's Journal Citation Reports (JCR), to calculate average impact factors (AIF) for JCR's subject categories, but it can be more useful to know the global impact factor (GIF) of a subject category and compare the two values. Reports results of a study to compare the relationships between AIFs and GIFs of subjects, based on the particular case of the average impact factor of a subfield versus the impact factor of this subfield as a whole, the difference being studied between an average of quotients, denoted AQ, and a global average, obtained as a quotient of averages and denoted GQ. In the case of impact factors, AQ becomes the average impact factor of a field, and GQ becomes its global impact factor. Discusses a number of applications of this technique in the context of informetrics and scientometrics.
    Source
    Journal of information science. 22(1996) no.3, S.165-170
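The AQ/GQ distinction in the abstract above (average of quotients vs. quotient of averages) can be illustrated numerically. The citation and publication counts below are made-up illustration values, not data from the paper:

```python
# Average of quotients (AQ) vs. quotient of averages (GQ).
# Hypothetical citation/publication counts for three journals in one subfield.
citations = [100, 10, 1]
publications = [50, 10, 2]

# AQ: average of the per-journal impact factors (the "average impact factor").
aq = sum(c / p for c, p in zip(citations, publications)) / len(citations)

# GQ: total citations over total publications (the "global impact factor").
gq = sum(citations) / sum(publications)

print(aq, gq)  # the two values generally differ
```

The example shows why the two quantities must be distinguished: AQ weights every journal equally, while GQ weights journals by their publication output.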
  4. Egghe, L.: Little science, big science and beyond (1994) 0.15
    0.14935078 = product of:
      0.22402616 = sum of:
        0.018583227 = weight(_text_:of in 6883) [ClassicSimilarity], result of:
          0.018583227 = score(doc=6883,freq=2.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.24188137 = fieldWeight in 6883, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=6883)
        0.20544294 = product of:
          0.41088587 = sum of:
            0.41088587 = weight(_text_:informetrics in 6883) [ClassicSimilarity], result of:
              0.41088587 = score(doc=6883,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                1.1373718 = fieldWeight in 6883, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6883)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Discusses the quality of bibliometrics, informetrics and scientometrics research, intradisciplinary communication and science policy
  5. Egghe, L.: Expansion of the field of informetrics : origins and consequences (2005) 0.13
    0.13241349 = product of:
      0.19862023 = sum of:
        0.022526272 = weight(_text_:of in 1910) [ClassicSimilarity], result of:
          0.022526272 = score(doc=1910,freq=4.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 1910, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=1910)
        0.17609395 = product of:
          0.3521879 = sum of:
            0.3521879 = weight(_text_:informetrics in 1910) [ClassicSimilarity], result of:
              0.3521879 = score(doc=1910,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.9748901 = fieldWeight in 1910, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1910)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
  6. Egghe, L.: Relations between the continuous and the discrete Lotka power function (2005) 0.10
    0.098029 = product of:
      0.1470435 = sum of:
        0.022526272 = weight(_text_:of in 3464) [ClassicSimilarity], result of:
          0.022526272 = score(doc=3464,freq=16.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 3464, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3464)
        0.12451723 = product of:
          0.24903446 = sum of:
            0.24903446 = weight(_text_:informetrics in 3464) [ClassicSimilarity], result of:
              0.24903446 = score(doc=3464,freq=4.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.68935144 = fieldWeight in 3464, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The discrete Lotka power function describes the number of sources (e.g., authors) with n = 1, 2, 3, ... items (e.g., publications). As in econometrics, informetrics theory requires functions of a continuous variable j, replacing the discrete variable n. Now j represents item densities instead of numbers of items. The continuous Lotka power function describes the density of sources with item density j. The discrete Lotka function is the one obtained empirically, from data; the continuous Lotka function is the one needed when one wants to apply Lotkaian informetrics, i.e., to determine properties that can be derived from the (continuous) model. It is, hence, important to know the relations between the two models. We show that the exponents of the discrete Lotka function (if not too high, i.e., within limits encountered in practice) and of the continuous Lotka function are approximately the same. This is important to know in applying theoretical results (from the continuous model), derived from practical data.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.7, S.664-668
  7. Egghe, L.; Rousseau, R.; Rousseau, S.: TOP-curves (2007) 0.09
    0.08706421 = product of:
      0.13059631 = sum of:
        0.027874837 = weight(_text_:of in 50) [ClassicSimilarity], result of:
          0.027874837 = score(doc=50,freq=18.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.36282203 = fieldWeight in 50, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=50)
        0.10272147 = product of:
          0.20544294 = sum of:
            0.20544294 = weight(_text_:informetrics in 50) [ClassicSimilarity], result of:
              0.20544294 = score(doc=50,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.5686859 = fieldWeight in 50, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=50)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Several characteristics of classical Lorenz curves make them unsuitable for the study of a group of top performers. TOP-curves, defined as a kind of mirror image of TIP-curves used in poverty studies, are shown to possess the properties necessary for adequate empirical ranking of various data arrays, based on the properties of the highest performers (i.e., the core). TOP-curves and essential TOP-curves, also introduced in this article, simultaneously represent the incidence, intensity, and inequality among the top. It is shown that the TOP-dominance partial order, introduced in this article, is stronger than the Lorenz dominance order. In this way, this article contributes to the study of cores, a central issue in applied informetrics.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.777-785
  8. Egghe, L.; Rousseau, R.: Duality in information retrieval and the hypergeometric distribution (1997) 0.09
    0.0853433 = product of:
      0.12801495 = sum of:
        0.010618987 = weight(_text_:of in 647) [ClassicSimilarity], result of:
          0.010618987 = score(doc=647,freq=2.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.13821793 = fieldWeight in 647, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=647)
        0.11739596 = product of:
          0.23479192 = sum of:
            0.23479192 = weight(_text_:informetrics in 647) [ClassicSimilarity], result of:
              0.23479192 = score(doc=647,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.6499267 = fieldWeight in 647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0625 = fieldNorm(doc=647)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Asserts that duality is an important topic in informetrics, especially in connection with the classical informetric laws. Yet this concept is less studied in information retrieval. It deals with the unification or symmetry between queries and documents, search formulation versus indexing, and relevant versus retrieved documents. Elaborates these ideas and highlights the connection with the hypergeometric distribution
    Source
    Journal of documentation. 53(1997) no.5, S.499-496
  9. Egghe, L.; Ravichandra Rao, I.K.: Study of different h-indices for groups of authors (2008) 0.08
    0.07630758 = product of:
      0.11446137 = sum of:
        0.026414396 = weight(_text_:of in 1878) [ClassicSimilarity], result of:
          0.026414396 = score(doc=1878,freq=22.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.34381276 = fieldWeight in 1878, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1878)
        0.088046975 = product of:
          0.17609395 = sum of:
            0.17609395 = weight(_text_:informetrics in 1878) [ClassicSimilarity], result of:
              0.17609395 = score(doc=1878,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.48744506 = fieldWeight in 1878, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1878)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In this article, for any group of authors, we define three different h-indices. First, there is the successive h-index h2 based on the ranked list of authors and their h-indices h1 as defined by Schubert (2007). Next, there is the h-index hP based on the ranked list of authors and their number of publications. Finally, there is the h-index hC based on the ranked list of authors and their number of citations. We present formulae for these three indices in Lotkaian informetrics, from which it also follows that h2 < hP < hC. We give a concrete example of a group of 167 authors on the topic optical flow estimation. Besides these three h-indices, we also calculate the two-by-two Spearman rank correlation coefficients and prove that these rankings are significantly related.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.8, S.1276-1281
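The successive h-index h2 used in the abstract above is, per Schubert (2007), the largest n such that at least n authors in the group have an individual h-index h1 >= n; hP and hC apply the same ranking procedure to publication and citation counts. A minimal sketch, with illustrative author h-values (not data from the paper):

```python
# Generic h-index over a list of values: the largest rank r such that the
# r-th largest value is still >= r. Because the sorted list is descending,
# the condition holds for a prefix, so counting the hits gives that rank.
def h_index(values):
    values = sorted(values, reverse=True)
    return sum(1 for rank, v in enumerate(values, start=1) if v >= rank)

# Hypothetical individual h-indices h1 for a group of eight authors.
author_h1 = [12, 9, 7, 5, 5, 3, 2, 1]
h2 = h_index(author_h1)  # successive h-index of the group
print(h2)  # → 5
```

Feeding the same function the authors' publication counts or citation counts would give hP and hC, respectively.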
  10. Egghe, L.: ¬A noninformetric analysis of the relationship between citation age and journal productivity (2001) 0.07
    0.072745584 = product of:
      0.10911837 = sum of:
        0.021071399 = weight(_text_:of in 5685) [ClassicSimilarity], result of:
          0.021071399 = score(doc=5685,freq=14.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2742677 = fieldWeight in 5685, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5685)
        0.088046975 = product of:
          0.17609395 = sum of:
            0.17609395 = weight(_text_:informetrics in 5685) [ClassicSimilarity], result of:
              0.17609395 = score(doc=5685,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.48744506 = fieldWeight in 5685, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5685)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A problem, raised by Wallace (JASIS, 37, 136-145, 1986), on the relation between a journal's median citation age and its number of articles is studied. Leaving open the problem as such, we give a statistical explanation of this relationship, when replacing "median" by "mean" in Wallace's problem. The cloud of points, found by Wallace, is explained in the sense that the points are scattered over the area in the first quadrant limited by a curve of the form y = 1 + E/x**2, where E is a constant. This curve is obtained by using the Central Limit Theorem in statistics and, hence, has no intrinsic informetric foundation. The article closes with some reflections on explanations of regularities in informetrics, based on statistical, probabilistic or informetric results, or on a combination thereof.
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.5, S.371-377
  11. Egghe, L.: Untangling Herdan's law and Heaps' law : mathematical and informetric arguments (2007) 0.06
    0.06290673 = product of:
      0.09436009 = sum of:
        0.020987613 = weight(_text_:of in 271) [ClassicSimilarity], result of:
          0.020987613 = score(doc=271,freq=20.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.27317715 = fieldWeight in 271, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=271)
        0.073372476 = product of:
          0.14674495 = sum of:
            0.14674495 = weight(_text_:informetrics in 271) [ClassicSimilarity], result of:
              0.14674495 = score(doc=271,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.4062042 = fieldWeight in 271, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=271)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Herdan's law in linguistics and Heaps' law in information retrieval are different formulations of the same phenomenon. Stated briefly and in linguistic terms, they state that vocabularies' sizes are concave increasing power laws of texts' sizes. This study investigates these laws from a purely mathematical and informetric point of view. A general informetric argument shows that the problem of proving these laws is, in fact, ill-posed. Using the more general terminology of sources and items, the author shows by presenting exact formulas from Lotkaian informetrics that the total number T of sources is not only a function of the total number A of items, but is also a function of several parameters (e.g., the parameters occurring in Lotka's law). Consequently, it is shown that a fixed T (or A) value can lead to different possible A (respectively, T) values. Limiting the T(A)-variability to increasing samples (e.g., in a text, as done in linguistics), the author then shows, in a purely mathematical way, that for large sample sizes T ~ A**phi, where phi is a constant, phi < 1 but close to 1; hence roughly, Heaps' or Herdan's law can be proved without using any linguistic or informetric argument. The author also shows that for smaller samples, phi is not a constant but essentially decreases, as confirmed by practical examples. Finally, an exact informetric argument on random sampling in the items shows that, in most cases, T = T(A) is a concavely increasing function, in accordance with practical examples.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.5, S.702-709
  12. Egghe, L.; Rousseau, R.: Introduction to informetrics : quantitative methods in library, documentation and information science (1990) 0.03
    0.03424049 = product of:
      0.10272147 = sum of:
        0.10272147 = product of:
          0.20544294 = sum of:
            0.20544294 = weight(_text_:informetrics in 1515) [ClassicSimilarity], result of:
              0.20544294 = score(doc=1515,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.5686859 = fieldWeight in 1515, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1515)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  13. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.03
    0.031037413 = product of:
      0.04655612 = sum of:
        0.013273734 = weight(_text_:of in 4992) [ClassicSimilarity], result of:
          0.013273734 = score(doc=4992,freq=2.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.17277241 = fieldWeight in 4992, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=4992)
        0.033282384 = product of:
          0.06656477 = sum of:
            0.06656477 = weight(_text_:22 in 4992) [ClassicSimilarity], result of:
              0.06656477 = score(doc=4992,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.38690117 = fieldWeight in 4992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4992)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    14. 2.2012 12:53:22
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.2, S.429
  14. Egghe, L.: ¬A universal method of information retrieval evaluation : the "missing" link M and the universal IR surface (2004) 0.03
    0.028330466 = product of:
      0.042495698 = sum of:
        0.022526272 = weight(_text_:of in 2558) [ClassicSimilarity], result of:
          0.022526272 = score(doc=2558,freq=16.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 2558, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2558)
        0.019969428 = product of:
          0.039938856 = sum of:
            0.039938856 = weight(_text_:22 in 2558) [ClassicSimilarity], result of:
              0.039938856 = score(doc=2558,freq=2.0), product of:
                0.17204592 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049130294 = queryNorm
                0.23214069 = fieldWeight in 2558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2558)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The paper shows that the present evaluation methods in information retrieval (basically recall R and precision P, and in some cases fallout F) lack universal comparability in the sense that their values depend on the generality of the IR problem. A solution is given by using all "parts" of the database, including the non-relevant documents and also the not-retrieved documents. It turns out that the solution is given by introducing the measure M, being the fraction of the not-retrieved documents that are relevant (hence the "miss" measure). We prove that - independent of the IR problem or of the IR action - the quadruple (P,R,F,M) belongs to a universal IR surface, being the same for all IR activities. This universality is then exploited by defining a new measure for evaluation in IR, allowing for unbiased comparisons of all IR results. We also show that using only one, two or even three measures from the set {P,R,F,M} necessarily leads to evaluation measures that are non-universal and hence not capable of comparing different IR situations.
    Date
    14. 8.2004 19:17:22
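The four measures named in the abstract above can all be read off one 2×2 retrieval table (relevant vs. non-relevant, retrieved vs. not retrieved). A minimal sketch with made-up counts (not data from the paper); the surface equation itself is not reproduced here:

```python
# The quadruple (P, R, F, M) from a 2x2 retrieval contingency table.
# All four counts are hypothetical illustration values.
rel_ret = 40         # relevant and retrieved
nonrel_ret = 10      # non-relevant but retrieved
rel_notret = 20      # relevant but not retrieved
nonrel_notret = 930  # non-relevant and not retrieved

P = rel_ret / (rel_ret + nonrel_ret)            # precision
R = rel_ret / (rel_ret + rel_notret)            # recall
F = nonrel_ret / (nonrel_ret + nonrel_notret)   # fallout
M = rel_notret / (rel_notret + nonrel_notret)   # miss: fraction of
                                                # not-retrieved docs
                                                # that are relevant

print(P, R, F, M)
```

Note how M, unlike P, R and F, depends on the not-retrieved part of the database, which is exactly the "missing link" the title refers to.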
  15. Egghe, L.: On the law of Zipf-Mandelbrot for multi-word phrases (1999) 0.01
    0.010618987 = product of:
      0.031856958 = sum of:
        0.031856958 = weight(_text_:of in 3058) [ClassicSimilarity], result of:
          0.031856958 = score(doc=3058,freq=18.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.41465375 = fieldWeight in 3058, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3058)
      0.33333334 = coord(1/3)
    
    Abstract
    This article studies the probabilities of the occurrence of multi-word (m-word) phrases (m = 2, 3, ...) in relation to the probabilities of occurrence of the single words. It is well known that, in the latter case, the law of Zipf is valid (i.e., a power law). We prove that in the case of m-word phrases (m >= 2), this is not the case. We present two independent proofs of this.
    Source
    Journal of the American Society for Information Science. 50(1999) no.3, S.233-241
  16. Egghe, L.: Mathematical theories of citation (1998) 0.01
    0.010618987 = product of:
      0.031856958 = sum of:
        0.031856958 = weight(_text_:of in 5125) [ClassicSimilarity], result of:
          0.031856958 = score(doc=5125,freq=18.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.41465375 = fieldWeight in 5125, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5125)
      0.33333334 = coord(1/3)
    
    Abstract
    Focuses on possible mathematical theories of citation and on the intrinsic problems related to it. Sheds light on aspects of mathematical complexity as encountered in, for example, fractal theory and Mandelbrot's law. Also discusses dynamical aspects of citation theory as reflected in evolutions of journal rankings, centres of gravity or of the set of source journals. Makes some comments in this connection on growth and obsolescence
    Footnote
    Contribution to a thematic issue devoted to 'Theories of citation?'
  17. Egghe, L.: ¬A model for the size-frequency function of coauthor pairs (2008)
    Abstract
    Lotka's law was formulated to describe the number of authors with a certain number of publications. Empirical results (Morris & Goldstein, 2007) indicate that Lotka's law is also valid if one counts the number of publications of coauthor pairs. This article gives a simple model proving this to be true, with the same Lotka exponent, if the number of coauthored papers is proportional to the number of papers of the individual coauthors. Under the assumption that this number of coauthored papers is more than proportional to the number of papers of the individual authors (to be explained in the article), we can prove that the size-frequency function of coauthor pairs is Lotkaian with an exponent that is higher than that of the Lotka function of individual authors, a fact that is confirmed in experimental results.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.13, S.2133-2137
  18. Egghe, L.: Dynamic h-index : the Hirsch index in function of time (2007)
    Abstract
When there is a group of articles and the present time is fixed, we can determine the unique number h: the number of articles that received h or more citations, while the other articles received a number of citations not larger than h. In this article, the time dependence of the h-index is determined. This is important for describing the expected career evolution of a scientist's work or of a journal's production in a fixed year.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.3, S.452-454
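The h-index at a fixed time, as defined in the abstract above, can be computed directly. A minimal sketch; the citation list is an illustrative example, not data from the article:

```python
def h_index(citations):
    """h = the largest h such that at least h articles have >= h citations."""
    cs = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cs, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four articles have >= 4 citations)
```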
  19. Egghe, L.: Zipfian and Lotkaian continuous concentration theory (2005)
    Abstract
In this article concentration (i.e., inequality) aspects of the functions of Zipf and of Lotka are studied. Since both functions are power laws (i.e., they are mathematically the same), it suffices to develop one concentration theory for power laws and apply it twice for the different interpretations of the laws of Zipf and Lotka. After a brief repetition of the functional relationships between Zipf's law and Lotka's law, we prove that Price's law of concentration is equivalent with Zipf's law. A major part of this article is devoted to the development of continuous concentration theory, based on Lorenz curves. The Lorenz curve for power functions is calculated and, based on this, some important concentration measures such as those of Gini, Theil, and the variation coefficient. Using Lorenz curves, it is shown that the concentration of a power law increases with its exponent, and this result is interpreted in terms of the functions of Zipf and Lotka.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.9, S.935-945
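The central result stated in the abstract, that the concentration of a power law increases with its exponent, can be checked numerically with the Gini index (a Lorenz-curve-based concentration measure). The rank-frequency form r^(-β), the two exponents, and the 1000 sources are illustrative assumptions, not parameters from the article:

```python
def gini(xs):
    """Gini index of a list of positive production values (Lorenz-curve based)."""
    xs = sorted(xs)  # ascending, as in the Lorenz-curve construction
    n = len(xs)
    total = sum(xs)
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * total)

N = 1000
shallow = gini([r ** -0.8 for r in range(1, N + 1)])
steep   = gini([r ** -1.5 for r in range(1, N + 1)])
print(shallow < steep)  # -> True: concentration rises with the exponent
```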
  20. Egghe, L.: Sampling and concentration values of incomplete bibliographies (2002)
    Abstract
This article studies concentration aspects of bibliographies. In particular, we study the impact of the incompleteness of such a bibliography on its concentration values (i.e., its degree of inequality of production of its sources). Incompleteness is modeled by sampling in the complete bibliography. The model is general enough to comprise truncation of a bibliography as well as a systematic sample on sources or items. In all cases we prove that the sampled (or incomplete) bibliography has a higher concentration value than the complete one. These models, hence, shed some light on the measurement of production inequality in incomplete bibliographies.
    Source
Journal of the American Society for Information Science and Technology. 53(2002) no.4, S.271-281