Search (10 results, page 1 of 1)

  • author_ss:"Egghe, L."
  1. Egghe, L.: ¬A universal method of information retrieval evaluation : the "missing" link M and the universal IR surface (2004) 0.04
    0.03632836 = product of:
      0.05449254 = sum of:
        0.036447987 = weight(_text_:m in 2558) [ClassicSimilarity], result of:
          0.036447987 = score(doc=2558,freq=8.0), product of:
            0.11047362 = queryWeight, product of:
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.044394575 = queryNorm
            0.3299248 = fieldWeight in 2558, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.046875 = fieldNorm(doc=2558)
        0.018044556 = product of:
          0.03608911 = sum of:
            0.03608911 = weight(_text_:22 in 2558) [ClassicSimilarity], result of:
              0.03608911 = score(doc=2558,freq=2.0), product of:
                0.15546224 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044394575 = queryNorm
                0.23214069 = fieldWeight in 2558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2558)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
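    The explanation tree above can be reproduced from its own numbers. Below is a minimal Python sketch, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq); idf = 1 + ln(maxDocs/(docFreq+1)); queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm; per-term score = queryWeight * fieldWeight, scaled by the coord factors); every input value is taken from the tree itself.
    import math

    def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # ClassicSimilarity per-term score = queryWeight * fieldWeight
        tf = math.sqrt(freq)                              # 2.828427 for freq=8
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~2.48845 for docFreq=9980
        query_weight = idf * query_norm                   # ~0.1104736
        field_weight = tf * idf * field_norm              # ~0.3299248
        return query_weight * field_weight

    w_m  = classic_term_score(8.0, 9980, 44218, 0.044394575, 0.046875)        # _text_:m
    w_22 = classic_term_score(2.0, 3622, 44218, 0.044394575, 0.046875) * 0.5  # _text_:22, coord(1/2)
    print(round((w_m + w_22) * 2 / 3, 8))                 # coord(2/3) -> ~0.03632836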
    
    Abstract
    The paper shows that the present evaluation methods in information retrieval (basically recall R and precision P, and in some cases fallout F) lack universal comparability in the sense that their values depend on the generality of the IR problem. A solution is given by using all "parts" of the database, including the non-relevant documents and also the not-retrieved documents. It turns out that the solution is given by introducing the measure M, being the fraction of the not-retrieved documents that are relevant (hence the "miss" measure). We prove that - independent of the IR problem or of the IR action - the quadruple (P,R,F,M) belongs to a universal IR surface, being the same for all IR activities. This universality is then exploited by defining a new measure for evaluation in IR, allowing for unbiased comparisons of all IR results. We also show that using only one, two or even three measures from the set {P,R,F,M} necessarily leads to evaluation measures that are non-universal and hence not capable of comparing different IR situations.
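    To make the four measures concrete, here is a minimal Python sketch (with hypothetical counts, not taken from the paper) that computes P, R, F and M from the four cells of a retrieval contingency table and evaluates the product that defines the universal IR surface, as spelled out in the 2007 existence-theorem abstract further down this result list.
    def ir_measures(rel_ret, nonrel_ret, rel_notret, nonrel_notret):
        # Precision, recall, fallout and miss from the retrieval contingency table;
        # M is the fraction of the not-retrieved documents that are relevant.
        P = rel_ret / (rel_ret + nonrel_ret)
        R = rel_ret / (rel_ret + rel_notret)
        F = nonrel_ret / (nonrel_ret + nonrel_notret)
        M = rel_notret / (rel_notret + nonrel_notret)
        return P, R, F, M

    # Hypothetical outcome: 40 relevant retrieved, 10 non-relevant retrieved,
    # 20 relevant missed, 930 non-relevant not retrieved.
    P, R, F, M = ir_measures(40, 10, 20, 930)
    print(P/(1-P) * (1-R)/R * F/(1-F) * (1-M)/M)  # universal IR surface: equals 1 (up to rounding)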
    Date
    14. 8.2004 19:17:22
  2. Egghe, L.: On the relation between the association strength and other similarity measures (2010) 0.02
    0.020338131 = product of:
      0.06101439 = sum of:
        0.06101439 = product of:
          0.12202878 = sum of:
            0.12202878 = weight(_text_:van in 3598) [ClassicSimilarity], result of:
              0.12202878 = score(doc=3598,freq=2.0), product of:
                0.24757032 = queryWeight, product of:
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.044394575 = queryNorm
                0.49290553 = fieldWeight in 3598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3598)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A graph in van Eck and Waltman [JASIST, 60(8), 2009, p. 1644], representing the relation between the association strength and the cosine, is partially explained as a sheaf of parabolas, each parabola being the functional relation between these similarity measures on the trajectories x*y = a, a constant. Based on relations obtained earlier between the cosine and other similarity measures (e.g., the Jaccard index), we can prove new relations between the association strength and these other measures.
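    As a point of reference, the Python sketch below computes the two measures compared in this abstract for a few hypothetical co-occurrence pairs, assuming the usual bibliometric definitions (cosine = c/sqrt(x*y); association strength proportional to c/(x*y)); the exact functional relation between the two is what the paper itself establishes.
    import math

    def cosine(c, x, y):
        # Salton's cosine for co-occurrence frequency c of two items with totals x and y
        return c / math.sqrt(x * y)

    def association_strength(c, x, y):
        # Association strength (probabilistic affinity), up to a constant factor
        return c / (x * y)

    # Hypothetical co-occurrence data: (total x, total y, co-occurrences c)
    for x, y, c in [(120, 80, 30), (200, 50, 10), (60, 60, 25)]:
        print(f"x={x:3d} y={y:3d} c={c:2d}  cosine={cosine(c, x, y):.4f}  "
              f"association={association_strength(c, x, y):.6f}")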
  3. Egghe, L.: Empirical and combinatorial study of country occurrences in multi-authored papers (2006) 0.02
    0.018558405 = product of:
      0.055675216 = sum of:
        0.055675216 = weight(_text_:m in 81) [ClassicSimilarity], result of:
          0.055675216 = score(doc=81,freq=42.0), product of:
            0.11047362 = queryWeight, product of:
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.044394575 = queryNorm
            0.5039684 = fieldWeight in 81, product of:
              6.4807405 = tf(freq=42.0), with freq of:
                42.0 = termFreq=42.0
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.03125 = fieldNorm(doc=81)
      0.33333334 = coord(1/3)
    
    Abstract
    Papers written by several authors can be classified according to the countries of the author affiliations. The empirical part of this paper consists of two datasets. One dataset consists of 1,035 papers retrieved via the search "pedagog*" in the years 2004 and 2005 (up to October) in Academic Search Elite; this is a case where phi(m), the number of papers with m = 1, 2, 3, ... authors, is decreasing, hence most of the papers have a low number of authors. Here we find that #j,m, the number of times a country occurs j times in an m-authored paper, is decreasing in j = 1, ..., m-1, and that #m,m is much higher than all the other #j,m values. The other dataset consists of 3,271 papers retrieved via the search "enzyme" in the year 2005 (up to October) in the same database; this is a case of a non-decreasing phi(m): most papers have 3 or 4 authors, and we even find many papers with a much higher number of authors. In this case we show again that #m,m is much higher than the other #j,m values, but #j,m is no longer decreasing in j = 1, ..., m-1, although #1,m is (apart from #m,m) the largest among the #j,m. The combinatorial part gives a proof of the fact that #j,m decreases for j = 1, ..., m-1, supposing that all cases are equally possible. This shows that the first dataset conforms more closely to this model than the second. Explanations for these findings are given. From the data we also find the (we think: new) distribution of the number of papers with n = 1, 2, 3, ... countries (i.e. where n different countries are involved amongst the m (>= n) authors of a paper): a fast decreasing function, e.g. a power law with a very large Lotka exponent.
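    One natural reading of the combinatorial model ("supposing that all cases are equally possible") is that each of the m authors is independently assigned a country uniformly at random; the Python sketch below simulates that reading and tabulates #j,m. The parameters (number of papers, authors per paper, number of countries) are hypothetical, not taken from the paper.
    import random
    from collections import Counter

    def simulate_country_counts(num_papers=10000, m=6, num_countries=20, seed=1):
        # Each of the m authors of a paper gets a country uniformly at random;
        # count #j,m = number of times a country occurs exactly j times in an m-authored paper.
        rng = random.Random(seed)
        counts = Counter()
        for _ in range(num_papers):
            paper = [rng.randrange(num_countries) for _ in range(m)]
            for j in Counter(paper).values():
                counts[j] += 1
        return counts

    counts = simulate_country_counts()
    for j in range(1, 7):
        print(f"#{j},6 = {counts[j]}")   # decreasing in j = 1, ..., m-1 under this model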
  4. Egghe, L.: On the law of Zipf-Mandelbrot for multi-word phrases (1999) 0.02
    0.016199104 = product of:
      0.048597313 = sum of:
        0.048597313 = weight(_text_:m in 3058) [ClassicSimilarity], result of:
          0.048597313 = score(doc=3058,freq=8.0), product of:
            0.11047362 = queryWeight, product of:
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.044394575 = queryNorm
            0.4398997 = fieldWeight in 3058, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.0625 = fieldNorm(doc=3058)
      0.33333334 = coord(1/3)
    
    Abstract
    This article studies the probabilities of the occurrence of multi-word (m-word) phrases (m = 2, 3, ...) in relation to the probabilities of occurrence of the single words. It is well known that, in the latter case, the law of Zipf is valid (i.e., a power law). We prove that in the case of m-word phrases (m >= 2), this is not the case. We present two independent proofs of this.
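    A quick empirical check of the kind of comparison the abstract discusses: estimate the rank-frequency slope on log-log axes (roughly -1 under Zipf's law) for single words and for 2-word phrases. The sketch below is in Python; the corpus file name is hypothetical, and any plain-text corpus will do.
    import math, re
    from collections import Counter

    def loglog_slope(tokens):
        # Least-squares slope of log(frequency) vs log(rank); about -1 if Zipf's law holds
        freqs = sorted(Counter(tokens).values(), reverse=True)
        xs = [math.log(r) for r in range(1, len(freqs) + 1)]
        ys = [math.log(f) for f in freqs]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

    words = re.findall(r"[a-z]+", open("corpus.txt", encoding="utf-8").read().lower())  # hypothetical file
    bigrams = [" ".join(p) for p in zip(words, words[1:])]   # 2-word phrases (m = 2)
    print("single words  :", loglog_slope(words))
    print("2-word phrases:", loglog_slope(bigrams))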
  5. Egghe, L.; Rousseau, R.; Hooydonk, G. van: Methods for accrediting publications to authors or countries : consequences for evaluation studies (2000) 0.02
    0.015253598 = product of:
      0.04576079 = sum of:
        0.04576079 = product of:
          0.09152158 = sum of:
            0.09152158 = weight(_text_:van in 4384) [ClassicSimilarity], result of:
              0.09152158 = score(doc=4384,freq=2.0), product of:
                0.24757032 = queryWeight, product of:
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.044394575 = queryNorm
                0.36967915 = fieldWeight in 4384, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5765896 = idf(docFreq=454, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4384)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  6. Egghe, L.: Existence theorem of the quadruple (P, R, F, M) : precision, recall, fallout and miss (2007) 0.01
    0.014879826 = product of:
      0.04463948 = sum of:
        0.04463948 = weight(_text_:m in 2011) [ClassicSimilarity], result of:
          0.04463948 = score(doc=2011,freq=12.0), product of:
            0.11047362 = queryWeight, product of:
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.044394575 = queryNorm
            0.40407366 = fieldWeight in 2011, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.046875 = fieldNorm(doc=2011)
      0.33333334 = coord(1/3)
    
    Abstract
    In an earlier paper [Egghe, L. (2004). A universal method of information retrieval evaluation: the "missing" link M and the universal IR surface. Information Processing and Management, 40, 21-30] we showed that, given an IR system, and if P denotes precision, R recall, F fallout and M miss (re-introduced in the paper mentioned above), we have the following relationship between P, R, F and M: P/(1-P)*(1-R)/R*F/(1-F)*(1-M)/M = 1. In this paper we prove the (more difficult) converse: given any four rational numbers in the interval ]0, 1[ satisfying the above equation, there exists an IR system such that these four numbers (in any order) are the precision, recall, fallout and miss of this IR system. As a consequence we show that any three rational numbers in ]0, 1[ represent any three measures taken from precision, recall, fallout and miss of a certain IR system. We show that this result also holds for two numbers instead of three.
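    The converse direction can be made tangible with a short Python sketch: starting from a rational quadruple on the surface, solve for the four cells of a contingency table whose measures are exactly (P, R, F, M). The quadruple below is hypothetical (it is the same one used in the verification sketch under the first result above); the construction itself is only an illustration of the existence argument, not the paper's proof.
    from fractions import Fraction

    def table_from_measures(P, R, F, M, scale=1):
        # Build (relevant retrieved, non-relevant retrieved, relevant not retrieved,
        # non-relevant not retrieved) for rational P, R, F, M in ]0, 1[ lying on the surface.
        a = Fraction(scale)            # free positive scale; a large enough value gives integer cells
        b = a * (1 - P) / P
        c = a * (1 - R) / R
        d = b * (1 - F) / F
        assert d == c * (1 - M) / M    # consistent exactly because the surface equation holds
        return a, b, c, d

    P, R, F, M = Fraction(4, 5), Fraction(2, 3), Fraction(1, 94), Fraction(2, 95)
    cells = table_from_measures(P, R, F, M, scale=40)
    print([int(v) for v in cells])     # [40, 10, 20, 930]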
  7. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.01
    0.010024754 = product of:
      0.030074261 = sum of:
        0.030074261 = product of:
          0.060148522 = sum of:
            0.060148522 = weight(_text_:22 in 4992) [ClassicSimilarity], result of:
              0.060148522 = score(doc=4992,freq=2.0), product of:
                0.15546224 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044394575 = queryNorm
                0.38690117 = fieldWeight in 4992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4992)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    14. 2.2012 12:53:22
  8. Egghe, L.; Rousseau, R.: Introduction to informetrics : quantitative methods in library, documentation and information science (1990) 0.01
    0.0070871087 = product of:
      0.021261325 = sum of:
        0.021261325 = weight(_text_:m in 1515) [ClassicSimilarity], result of:
          0.021261325 = score(doc=1515,freq=2.0), product of:
            0.11047362 = queryWeight, product of:
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.044394575 = queryNorm
            0.19245613 = fieldWeight in 1515, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1515)
      0.33333334 = coord(1/3)
    
    Type
    m
  9. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.01
    0.006014852 = product of:
      0.018044556 = sum of:
        0.018044556 = product of:
          0.03608911 = sum of:
            0.03608911 = weight(_text_:22 in 7659) [ClassicSimilarity], result of:
              0.03608911 = score(doc=7659,freq=2.0), product of:
                0.15546224 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044394575 = queryNorm
                0.23214069 = fieldWeight in 7659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7659)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Journal of information science. 22(1996) no.3, S.165-170
  10. Rousseau, R.; Egghe, L.; Guns, R.: Becoming metric-wise : a bibliometric guide for researchers (2018) 0.01
    0.0050622206 = product of:
      0.015186661 = sum of:
        0.015186661 = weight(_text_:m in 5226) [ClassicSimilarity], result of:
          0.015186661 = score(doc=5226,freq=2.0), product of:
            0.11047362 = queryWeight, product of:
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.044394575 = queryNorm
            0.13746867 = fieldWeight in 5226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.4884486 = idf(docFreq=9980, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5226)
      0.33333334 = coord(1/3)
    
    Type
    m