Search (57 results, page 1 of 3)

  • author_ss:"Egghe, L."
  1. Egghe, L.: Existence theorem of the quadruple (P, R, F, M) : precision, recall, fallout and miss (2007) 0.01
    
    Abstract
    In an earlier paper [Egghe, L. (2004). A universal method of information retrieval evaluation: the "missing" link M and the universal IR surface. Information Processing and Management, 40, 21-30] we showed that, given an IR system, and if P denotes precision, R recall, F fallout and M miss (re-introduced in the paper mentioned above), we have the following relationship between P, R, F and M: P/(1-P)*(1-R)/R*F/(1-F)*(1-M)/M = 1. In this paper we prove the (more difficult) converse: given any four rational numbers in the interval ]0, 1[ satisfying the above equation, there exists an IR system such that these four numbers (in any order) are the precision, recall, fallout and miss of this IR system. As a consequence we show that any three rational numbers in ]0, 1[ can be the values of any three measures taken from precision, recall, fallout and miss of a certain IR system. We also show that this result holds for two numbers instead of three.
    Source
    Information processing and management. 43(2007) no.1, S.265-272
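The surface equation quoted in the abstract can be checked numerically. A minimal sketch (not from the paper; the function names are illustrative): given precision, recall and fallout in ]0, 1[, the equation determines a unique miss value M in ]0, 1[, matching the existence result.

```python
# Sketch: solve the universal IR surface equation for M, given P, R, F.
# P/(1-P) * (1-R)/R * F/(1-F) * (1-M)/M = 1
def odds(x):
    """Odds ratio x/(1-x) for x in the open interval ]0, 1[."""
    return x / (1.0 - x)

def miss_from_surface(p, r, f):
    """Return the unique M in ]0, 1[ placing (P, R, F, M) on the IR surface."""
    k = odds(p) * (1.0 / odds(r)) * odds(f)   # P/(1-P) * (1-R)/R * F/(1-F)
    return k / (1.0 + k)                      # solves k * (1-M)/M = 1

p, r, f = 0.8, 0.6, 0.3
m = miss_from_surface(p, r, f)
lhs = odds(p) * (1 / odds(r)) * odds(f) * (1 / odds(m))
assert 0.0 < m < 1.0
assert abs(lhs - 1.0) < 1e-12   # the quadruple lies on the surface
```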
  2. Egghe, L.: A new short proof of Naranan's theorem, explaining Lotka's law and Zipf's law (2010) 0.01
    
    Abstract
    Naranan's important theorem, published in Nature in 1970, states that if the number of journals grows exponentially and if the number of articles in each journal grows exponentially (at the same rate for each journal), then the system satisfies Lotka's law, and a formula for Lotka's exponent is given as a function of the growth rates of the journals and the articles. This brief communication re-proves this result by showing that the system satisfies Zipf's law, which is equivalent to Lotka's law. The proof is short and algebraic and does not use infinitesimal arguments.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2581-2583
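Naranan's growth model can be sketched numerically. The parameters below are illustrative, not from the paper: journals appear at exponential rate a1 and each journal's article count grows at exponential rate a2; the size distribution then follows a power law with Lotka exponent 1 + a1/a2.

```python
import math

# Illustrative rates: journals born at rate a1, journal sizes grow at rate a2.
a1, a2, T = 0.6, 0.3, 20.0

ages = [s / 10.0 for s in range(1, int(T * 10))]
sizes  = [math.exp(a2 * s) for s in ages]          # size of a journal of age s
counts = [math.exp(a1 * (T - s)) for s in ages]    # journals of age at least s

# log(count) = a1*T - (a1/a2) * log(size), so the log-log slope is exactly
# -a1/a2; the density (Lotka) exponent is then 1 + a1/a2.
xs = [math.log(n) for n in sizes]
ys = [math.log(c) for c in counts]
slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])
assert abs(slope + a1 / a2) < 1e-9
lotka_exponent = 1.0 + a1 / a2   # = 3.0 for these rates
```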
  3. Egghe, L.; Rousseau, R.; Hooydonk, G. van: Methods for accrediting publications to authors or countries : consequences for evaluation studies (2000) 0.01
    
    Abstract
    One aim of science evaluation studies is to determine quantitatively the contribution of different players (authors, departments, countries) to the whole system. This information is then used to study the evolution of the system, for instance to gauge the results of special national or international programs. Taking articles as our basic data, we want to determine the exact relative contribution of each coauthor or each country. These numbers are brought together to obtain country scores, or department scores, etc. It turns out, as we will show in this article, that different scoring methods can yield totally different rankings. Consequently, a ranking between countries, universities, research groups or authors, based on one particular accrediting method, does not contain an absolute truth about their relative importance.
    Source
    Journal of the American Society for Information Science. 51(2000) no.2, S.145-157
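The rank-reversal effect described in the abstract is easy to reproduce. A toy sketch (country labels and data are invented for illustration) comparing two common accrediting methods, total counting versus fractional counting:

```python
# Toy data: each paper lists the country of every co-author.
papers = [
    ["X"], ["X"],                 # two single-author papers from X
    ["Y", "Z", "Z", "Z"],         # three 4-author papers: one Y author,
    ["Y", "Z", "Z", "Z"],         # three Z authors each
    ["Y", "Z", "Z", "Z"],
]

def total_counts(papers):
    """Each country gets 1 credit per paper it appears on."""
    scores = {}
    for authors in papers:
        for country in set(authors):
            scores[country] = scores.get(country, 0) + 1
    return scores

def fractional_counts(papers):
    """Each author carries 1/n credit on an n-author paper."""
    scores = {}
    for authors in papers:
        for country in authors:
            scores[country] = scores.get(country, 0.0) + 1.0 / len(authors)
    return scores

tot, frac = total_counts(papers), fractional_counts(papers)
# The two methods rank X and Y in opposite order:
assert tot["Y"] > tot["X"]      # total counting: Y (3) ahead of X (2)
assert frac["X"] > frac["Y"]    # fractional counting: X (2.0) ahead of Y (0.75)
```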
  4. Egghe, L.; Rousseau, R.: Topological aspects of information retrieval (1998) 0.01
    
    Abstract
    Let (DS, QS, sim) be a retrieval system consisting of a document space DS, a query space QS, and a function sim expressing the similarity between a document and a query. Following D.M. Everett and S.C. Cater (1992), we introduce topologies on the document space. These topologies are generated by the similarity function sim and the query space QS. Three topologies will be studied: the retrieval topology, the similarity topology and the (pseudo-)metric one. It is shown that the retrieval topology is the coarsest of the three, while the (pseudo-)metric is the strongest. These three topologies are generally different, reflecting distinct topological aspects of information retrieval. We present necessary and sufficient conditions for these topological aspects to be equal.
    Source
    Journal of the American Society for Information Science. 49(1998) no.13, S.1144-1160
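The generating sets of such a retrieval topology can be pictured as threshold sets of the similarity function. A minimal sketch (toy cosine similarity over invented vectors; names are illustrative): lowering the threshold can only enlarge the retrieved set, so the retrievals for one query form a nested family.

```python
import math

def cosine(u, v):
    """Cosine similarity of two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

docs = {"d1": (1, 0), "d2": (1, 1), "d3": (0, 1)}
query = (1, 0.2)

def retrieval(query, theta):
    """Documents whose similarity to the query exceeds the threshold theta."""
    return {d for d, vec in docs.items() if cosine(vec, query) > theta}

# Nested threshold sets: candidates for a generating family of a topology on docs.
assert retrieval(query, 0.9) <= retrieval(query, 0.5) <= retrieval(query, 0.0)
```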
  5. Egghe, L.: Properties of the n-overlap vector and n-overlap similarity theory (2006) 0.01
    
    Abstract
    In the first part of this article the author defines the n-overlap vector whose coordinates consist of the fraction of the objects (e.g., books, N-grams, etc.) that belong to 1, 2, ..., n sets (more generally: families) (e.g., libraries, databases, etc.). With the aid of the Lorenz concentration theory, a theory of n-overlap similarity is conceived together with corresponding measures, such as the generalized Jaccard index (generalizing the well-known Jaccard index in the case n = 2). Next, the distributional form of the n-overlap vector is determined assuming certain distributions of the objects' and of the set (family) sizes. In this section the decreasing power law and decreasing exponential distribution is explained for the n-overlap vector. Both item (token) n-overlap and source (type) n-overlap are studied. The n-overlap properties of objects indexed by a hierarchical system (e.g., books indexed by numbers from a UDC or Dewey system or by N-grams) are presented in the final section. The author shows how the results given in the previous section can be applied as well as how the Lorenz order of the n-overlap vector is respected by an increase or a decrease of the level of refinement in the hierarchical system (e.g., the value N in N-grams).
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.9, S.1165-1177
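The n-overlap vector itself is simple to compute. A toy sketch, assuming coordinate k holds the fraction of objects belonging to exactly k of the n sets (the sets below are invented for illustration); for n = 2 the classical Jaccard index comes from the same counts:

```python
# Three toy "libraries" holding overlapping objects.
sets = [
    {"a", "b", "c", "d"},
    {"b", "c", "d", "e"},
    {"c", "d", "e", "f"},
]

universe = set().union(*sets)
membership = {obj: sum(obj in s for s in sets) for obj in universe}
n = len(sets)
# Coordinate k: fraction of objects belonging to exactly k sets.
overlap_vector = [
    sum(1 for k in membership.values() if k == i) / len(universe)
    for i in range(1, n + 1)
]
assert abs(sum(overlap_vector) - 1.0) < 1e-12   # fractions partition the universe

# For n = 2, the classical Jaccard index from the first two sets:
A, B = sets[0], sets[1]
jaccard = len(A & B) / len(A | B)
assert abs(jaccard - 0.6) < 1e-12               # {b, c, d} over {a, b, c, d, e}
```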
  6. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.01
    
    Date
    14. 2.2012 12:53:22
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.2, S.429
  7. Egghe, L.: The power of power laws and an interpretation of Lotkaian informetric systems as self-similar fractals (2005) 0.01
    
    Abstract
    Power laws as defined in 1926 by A. Lotka are increasing in importance because they have been found valid in varied social networks including the Internet. In this article some unique properties of power laws are proven. They are shown to characterize functions with the scale-free property (also called the self-similarity property) as well as functions with the product property. Power laws have other desirable properties that are not shared by exponential laws, as we indicate in this paper. Specifically, Naranan (1970) proves the validity of Lotka's law based on the exponential growth of articles in journals and of the number of journals. His argument is reproduced here and a discrete-time argument is also given, yielding the same law as that of Lotka. This argument makes it possible to interpret the information production process as a self-similar fractal and to show the relation between Lotka's exponent and the (self-similar) fractal dimension of the system. Lotkaian informetric systems are self-similar fractals, a fact revealed by Mandelbrot (1977) in relation to nature, but it is also true for random texts, which exemplify a very special type of informetric system.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.7, S.669-675
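The scale-free property the abstract refers to can be demonstrated in a few lines (the constants below are illustrative): for a power law f(x) = C·x^(-alpha), rescaling the argument by c rescales the value by the constant factor c^(-alpha), independent of x, whereas an exponential law fails this test.

```python
# Scale-free check for a power law versus an exponential law (toy parameters).
C, alpha = 5.0, 2.0
f = lambda x: C * x ** (-alpha)    # power law
g = lambda x: C * 2.0 ** (-x)      # exponential law, for contrast

for x in (1.0, 3.0, 10.0):
    # Ratio f(4x)/f(x) is 4**(-alpha) regardless of x: self-similarity.
    assert abs(f(4.0 * x) / f(x) - 4.0 ** (-alpha)) < 1e-12

ratios = {round(g(4.0 * x) / g(x), 6) for x in (1.0, 3.0, 10.0)}
assert len(ratios) > 1             # the exponential ratio depends on x
```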
  8. Egghe, L.: A universal method of information retrieval evaluation : the "missing" link M and the universal IR surface (2004) 0.00
    
    Abstract
    The paper shows that the present evaluation methods in information retrieval (basically recall R and precision P, and in some cases fallout F) lack universal comparability in the sense that their values depend on the generality of the IR problem. A solution is given by using all "parts" of the database, including the non-relevant documents and also the not-retrieved documents. It turns out that the solution is given by introducing the measure M, being the fraction of the not-retrieved documents that are relevant (hence the "miss" measure). We prove that - independent of the IR problem or of the IR action - the quadruple (P,R,F,M) belongs to a universal IR surface, being the same for all IR activities. This universality is then exploited by defining a new measure for evaluation in IR allowing for unbiased comparisons of all IR results. We also show that using only one, two or even three measures from the set {P,R,F,M} necessarily leads to evaluation measures that are non-universal and hence not capable of comparing different IR situations.
    Date
    14. 8.2004 19:17:22
    Source
    Information processing and management. 40(2004) no.1, S.21-30
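The four measures come from the four cells of a retrieval contingency table, and the surface identity then holds for any counts. A minimal sketch (variable names and counts are illustrative):

```python
# a = relevant & retrieved, b = non-relevant & retrieved,
# c = relevant & not retrieved, d = non-relevant & not retrieved.
def prfm(a, b, c, d):
    """Precision, recall, fallout and miss from the 2x2 retrieval table."""
    precision = a / (a + b)
    recall    = a / (a + c)
    fallout   = b / (b + d)
    miss      = c / (c + d)
    return precision, recall, fallout, miss

p, r, f, m = prfm(a=40, b=10, c=20, d=930)
# P/(1-P) = a/b, (1-R)/R = c/a, F/(1-F) = b/d, (1-M)/M = d/c, so the
# product telescopes to 1 for every table: the universal IR surface.
surface = (p / (1 - p)) * ((1 - r) / r) * (f / (1 - f)) * ((1 - m) / m)
assert abs(surface - 1.0) < 1e-9
```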
  9. Egghe, L.; Ravichandra Rao, I.K.: The influence of the broadness of a query of a topic on its h-index : models and examples of the h-index of n-grams (2008) 0.00
    
    Abstract
    The article studies the influence of the query formulation of a topic on its h-index. In order to generate pure random sets of documents, we used N-grams (N variable) to measure this influence: strings of zeros, truncated at the end. The databases used are WoS and Scopus. The formula h = T^(1/alpha), proved in Egghe and Rousseau (2006), where T is the number of retrieved documents and alpha is Lotka's exponent, is confirmed to be a concavely increasing function of T. We also give a formula for the relation between h and the length N of the N-gram: h = D * 10^(-N/alpha), where D is a constant; this is a convexly decreasing function, which is found in our experiments. Nonlinear regression on h = T^(1/alpha) gives an estimation of alpha, which can then be used to estimate the h-index of the entire database (Web of Science [WoS] and Scopus): h = S^(1/alpha), where S is the total number of documents in the database.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.10, S.1688-1693
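For reference, the empirical h-index is computed directly from a ranked citation list, while the abstract's Lotkaian model gives the closed form h = T^(1/alpha). A small sketch (the citation list and alpha are toy values, not from the paper):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

assert h_index([10, 8, 5, 4, 3]) == 4   # four papers with >= 4 citations
assert h_index([1, 1, 1]) == 1

# Model value for T documents drawn from a Lotkaian system with exponent alpha:
T, alpha = 1000, 2.0
model_h = T ** (1.0 / alpha)            # sqrt(1000), about 31.6
```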
  10. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.00
    
    Source
    Journal of information science. 22(1996) no.3, S.165-170
  11. Egghe, L.; Rousseau, R.: Introduction to informetrics : quantitative methods in library, documentation and information science (1990) 0.00
    
    COMPASS
    Information science / Statistical mathematics
    LCSH
    Information science / Statistical methods
    Subject
    Information science / Statistical mathematics
    Information science / Statistical methods
  12. Egghe, L.; Rousseau, R.: A theoretical study of recall and precision using a topological approach to information retrieval (1998) 0.00
    
    Abstract
    Topologies for information retrieval systems are generated by certain subsets, called retrievals. Shows how recall and precision can be expressed using only retrievals. Investigates different types of retrieval systems: both threshold systems and close-match systems, and both optimal and non-optimal retrieval. Highlights the relation with the hypergeometric and some non-standard distributions.
    Source
    Information processing and management. 34(1998) nos.2/3, S.191-218
  13. Egghe, L.: Expansion of the field of informetrics : the second special issue (2006) 0.00
    
    Source
    Information processing and management. 42(2006) no.6, S.1405-1407
  14. Egghe, L.: Expansion of the field of informetrics : origins and consequences (2005) 0.00
    
    Source
    Information processing and management. 41(2005) no.6, S.1311-1316
  15. Egghe, L.: Special features of the author - publication relationship and a new explanation of Lotka's law based on convolution theory (1994) 0.00
    
    Source
    Journal of the American Society for Information Science. 45(1994) no.6, S.422-427
  16. Egghe, L.: Note on a possible decomposition of the h-Index (2013) 0.00
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.871
  17. Egghe, L.: The Hirsch index and related impact measures (2010) 0.00
    
    Source
    Annual review of information science and technology. 44(2010) no.1, S.65-114
  18. Egghe, L.; Rousseau, R.: Duality in information retrieval and the hypergeometric distribution (1997) 0.00
    
    Abstract
    Asserts that duality is an important topic in informetrics, especially in connection with the classical informetric laws. Yet this concept is less studied in information retrieval. It deals with the unification or symmetry between queries and documents, search formulation versus indexing, and relevant versus retrieved documents. Elaborates these ideas and highlights the connection with the hypergeometric distribution
  19. Egghe, L.: A good normalized impact and concentration measure (2014) 0.00
    
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.10, S.2052-2054
  20. Egghe, L.: Vector retrieval, fuzzy retrieval and the universal fuzzy IR surface for IR evaluation (2004) 0.00
    
    Abstract
    It is shown that vector information retrieval (IR) and general fuzzy IR use two types of fuzzy set operations: the original "Zadeh min-max operations" and the so-called "probabilistic sum and algebraic product operations". The universal IR surface, valid for classical 0-1 IR (i.e. where ordinary sets are used) and used in IR evaluation, is extended to and re-proved for vector IR, using the probabilistic sum and algebraic product model. We also show (by counterexample) that using the "Zadeh min-max" fuzzy model yields a breakdown of this IR surface.
    Source
    Information processing and management. 40(2004) no.4, S.603-618
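The two fuzzy-operation pairs named in the abstract are easy to compare on membership degrees in [0, 1] (the values below are illustrative); both reduce to the ordinary set operations on crisp 0/1 memberships:

```python
# Zadeh:          intersection = min(a, b),  union = max(a, b)
# probabilistic:  intersection = a * b,      union = a + b - a * b
a, b = 0.7, 0.4

zadeh_and, zadeh_or = min(a, b), max(a, b)
prob_and,  prob_or  = a * b, a + b - a * b

assert (zadeh_and, zadeh_or) == (0.4, 0.7)
assert abs(prob_and - 0.28) < 1e-12
assert abs(prob_or - 0.82) < 1e-12

# On crisp memberships {0, 1} the two models coincide with ordinary sets:
for x in (0.0, 1.0):
    for y in (0.0, 1.0):
        assert min(x, y) == x * y and max(x, y) == x + y - x * y
```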