Search (60 results, page 1 of 3)

  • Filter: author_ss:"Egghe, L."
  1. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.02
    Date
    14. 2.2012 12:53:22
    Footnote
    This article corrects: Thoughts on uncitedness: Nobel laureates and Fields medalists as case studies in: JASIST 62(2011) no.8, S.1637-1644.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.2, S.429
    Type
    a
  2. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.02
    Abstract
    It is possible, using ISI's Journal Citation Reports (JCR), to calculate average impact factors (AIF) for JCR's subject categories, but it can be more useful to know the global impact factor (GIF) of a subject category and to compare the two values. Reports results of a study comparing the relationships between AIFs and GIFs of subjects, based on the particular case of the average impact factor of a subfield versus the impact factor of this subfield as a whole, the difference being that between an average of quotients, denoted AQ, and a global average obtained as a quotient of averages, denoted GQ. In the case of impact factors, AQ becomes the average impact factor of a field, and GQ becomes its global impact factor. Discusses a number of applications of this technique in the context of informetrics and scientometrics
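    The AQ/GQ distinction in this abstract is easy to see numerically. A minimal sketch with invented citation and publication counts (not real JCR data):

```python
# Hedged sketch of the abstract's AQ/GQ distinction, with made-up numbers.
# AQ = average of per-journal impact quotients; GQ = quotient of the totals.

citations = [30, 10, 2]      # hypothetical citations per journal
publications = [5, 10, 20]   # hypothetical citable items per journal

aq = sum(c / p for c, p in zip(citations, publications)) / len(citations)
gq = sum(citations) / sum(publications)

print(f"AQ (average of quotients): {aq:.3f}")   # 2.367
print(f"GQ (quotient of averages): {gq:.3f}")   # 1.200
```

    The two averages diverge as soon as the denominators (publication counts) differ across journals, which is exactly the case the abstract studies.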
    Source
    Journal of information science. 22(1996) no.3, S.165-170
    Type
    a
  3. Egghe, L.: ¬A universal method of information retrieval evaluation : the "missing" link M and the universal IR surface (2004) 0.01
    Abstract
    The paper shows that the present evaluation methods in information retrieval (basically recall R and precision P, and in some cases fallout F) lack universal comparability in the sense that their values depend on the generality of the IR problem. A solution is given by using all "parts" of the database, including the non-relevant documents and also the not-retrieved documents. It turns out that the solution is given by introducing the measure M, being the fraction of the not-retrieved documents that are relevant (hence the "miss" measure). We prove that - independent of the IR problem or of the IR action - the quadruple (P,R,F,M) belongs to a universal IR surface, being the same for all IR activities. This universality is then exploited by defining a new measure for evaluation in IR, allowing for unbiased comparisons of all IR results. We also show that using only one, two or even three measures from the set {P,R,F,M} necessarily leads to evaluation measures that are non-universal and hence not capable of comparing different IR situations.
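    The four measures named in the abstract, computed in their standard forms from an invented partition of a database (a minimal sketch, not the paper's derivation):

```python
# Toy partition of a database of 1000 documents (numbers invented):
a, b = 40, 10    # retrieved: relevant / non-relevant
c, d = 20, 930   # not retrieved: relevant / non-relevant

P = a / (a + b)          # precision
R = a / (a + c)          # recall
F = b / (b + d)          # fallout
M = c / (c + d)          # miss: fraction of not-retrieved docs that are relevant

print(round(P, 3), round(R, 3), round(F, 4), round(M, 4))  # 0.8 0.667 0.0106 0.0211
```

    Note how F and M use the non-relevant and not-retrieved "parts" of the database that P and R alone ignore.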
    Date
    14. 8.2004 19:17:22
    Source
    Information processing and management. 40(2004) no.1, S.21-30
    Type
    a
  4. Egghe, L.: ¬A good normalized impact and concentration measure (2014) 0.01
    Abstract
    It is shown that a normalized version of the g-index is a good normalized impact and concentration measure. A proposal for such a measure by Bartolucci is improved.
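    The abstract presupposes the g-index. A minimal sketch of the standard (unnormalized) g-index; the normalized variant the paper proposes is not reproduced here:

```python
def g_index(citations):
    """Largest g such that the g most-cited papers together have >= g**2 citations.
    (Restricted form: g cannot exceed the number of papers.)"""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

print(g_index([10, 8, 5, 4, 3]))  # 5
```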
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.10, S.2052-2054
    Type
    a
  5. Egghe, L.: ¬A rationale for the Hirsch-index rank-order distribution and a comparison with the impact factor rank-order distribution (2009) 0.01
    Abstract
    We present a rationale for the Hirsch-index rank-order distribution and prove that it is a power law (hence a straight line in the log-log scale). This is confirmed by experimental data of Pyykkö and by data produced in this article on 206 mathematics journals. This distribution is of a completely different nature than the impact factor (IF) rank-order distribution which (as proved in a previous article) is S-shaped. This is also confirmed by our example. Only in the log-log scale of the h-index distribution do we notice a concave deviation of the straight line for higher ranks. This phenomenon is discussed.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.10, S.2142-2144
    Type
    a
  6. Egghe, L.: Special features of the author - publication relationship and a new explanation of Lotka's law based on convolution theory (1994) 0.01
    Source
    Journal of the American Society for Information Science. 45(1994) no.6, S.422-427
    Type
    a
  7. Egghe, L.: Note on a possible decomposition of the h-Index (2013) 0.01
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.871
    Type
    a
  8. Egghe, L.: Dynamic h-index : the Hirsch index in function of time (2007) 0.01
    Abstract
    Given a group of articles and a fixed present time, we can determine the unique number h such that h articles received h or more citations while the other articles each received no more than h citations. In this article, the time dependence of the h-index is determined. This is important for describing the expected career evolution of a scientist's work or of a journal's production in a fixed year.
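    The defining condition above translates directly into code. A minimal sketch with invented citation counts at two points in time, illustrating the time dependence the paper studies:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
    return h

# Hypothetical oeuvre at two moments:
print(h_index([5, 3, 1, 0]))     # earlier snapshot: 2
print(h_index([9, 7, 6, 2, 1]))  # later snapshot:   3
```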
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.3, S.452-454
    Type
    a
  9. Egghe, L.: Little science, big science and beyond (1994) 0.01
    Source
    Scientometrics. 30(1994) nos.2/3, S.389-392
    Type
    a
  10. Egghe, L.: Untangling Herdan's law and Heaps' law : mathematical and informetric arguments (2007) 0.01
    Abstract
    Herdan's law in linguistics and Heaps' law in information retrieval are different formulations of the same phenomenon. Stated briefly, and in linguistic terms, they say that vocabulary size is a concavely increasing power law of text size. This study investigates these laws from a purely mathematical and informetric point of view. A general informetric argument shows that the problem of proving these laws is, in fact, ill-posed. Using the more general terminology of sources and items, the author shows, by presenting exact formulas from Lotkaian informetrics, that the total number T of sources is not only a function of the total number A of items but also a function of several parameters (e.g., the parameters occurring in Lotka's law). Consequently, it is shown that a fixed T (or A) value can lead to different possible A (respectively, T) values. Limiting the T(A)-variability to increasing samples (e.g., in a text, as done in linguistics), the author then shows, in a purely mathematical way, that for large sample sizes T ~ A^phi, where phi is a constant, phi < 1 but close to 1; hence, roughly, Heaps' or Herdan's law can be proved without using any linguistic or informetric argument. The author also shows that for smaller samples phi is not a constant but essentially decreases, as confirmed by practical examples. Finally, an exact informetric argument on random sampling in the items shows that, in most cases, T = T(A) is a concavely increasing function, in accordance with practical examples.
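    A quick numerical illustration (a toy simulation, not the paper's argument): drawing items from a Zipf-like source distribution makes the number of distinct sources T grow roughly like a power of the number of items A, with an exponent below 1:

```python
import math
import random

# Toy Heaps'-law simulation: sample "items" from 10,000 sources with
# Zipf-like probabilities and estimate phi from the growth of T with A.
random.seed(1)
weights = [1 / r for r in range(1, 10001)]
draws = random.choices(range(10000), weights=weights, k=50000)

seen, checkpoints = set(), []
for A, src in enumerate(draws, start=1):
    seen.add(src)
    if A in (1000, 50000):
        checkpoints.append((A, len(seen)))

(A1, T1), (A2, T2) = checkpoints
phi = (math.log(T2) - math.log(T1)) / (math.log(A2) - math.log(A1))
print(f"estimated phi = {phi:.2f}")   # sub-linear growth: 0 < phi < 1
```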
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.5, S.702-709
    Type
    a
  11. Egghe, L.; Liang, L.; Rousseau, R.: ¬A relation between h-index and impact factor in the power-law model (2009) 0.01
    Abstract
    Using a power-law model, the two best-known topics in citation analysis, namely the impact factor and the Hirsch index, are unified into one relation (not a function). The validity of our model is, at least in a qualitative way, confirmed by real data.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.11, S.2362-2365
    Type
    a
  12. Egghe, L.: On the relation between the association strength and other similarity measures (2010) 0.01
    Abstract
    A graph in van Eck and Waltman [JASIST, 60(8), 2009, p. 1644], representing the relation between the association strength and the cosine, is partially explained as a sheaf of parabolas, each parabola being the functional relation between these similarity measures on the trajectories x*y=a, a constant. Based on earlier obtained relations between cosine and other similarity measures (e.g., Jaccard index), we can prove new relations between the association strength and these other measures.
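    A minimal sketch of the two measures in their usual co-occurrence forms (an assumption: c = number of co-occurrences of terms i and j, si and sj = their occurrence counts; values invented). The exact algebraic identity cos**2 = c * AS is what ties the two measures together:

```python
import math

def association_strength(c, si, sj):
    # usual (probabilistic) form: c / (si * sj)
    return c / (si * sj)

def cosine(c, si, sj):
    # Salton's cosine in occurrence form: c / sqrt(si * sj)
    return c / math.sqrt(si * sj)

c, si, sj = 12, 40, 60          # invented counts
AS = association_strength(c, si, sj)
cos = cosine(c, si, sj)
print(round(AS, 4), round(cos, 4), math.isclose(cos**2, c * AS))  # 0.005 0.2449 True
```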
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.7, S.1502-1504
    Type
    a
  13. Egghe, L.; Rousseau, R.: Topological aspects of information retrieval (1998) 0.01
    Abstract
    Let (DS, QS, sim) be a retrieval system consisting of a document space DS, a query space QS, and a function sim expressing the similarity between a document and a query. Following D.M. Everett and S.C. Cater (1992), we introduce topologies on the document space. These topologies are generated by the similarity function sim and the query space QS. 3 topologies will be studied: the retrieval topology, the similarity topology and the (pseudo-)metric one. It is shown that the retrieval topology is the coarsest of the three, while the (pseudo-)metric is the strongest. These 3 topologies are generally different, reflecting distinct topological aspects of information retrieval. We present necessary and sufficient conditions for these topological aspects to be equal
    Source
    Journal of the American Society for Information Science. 49(1998) no.13, S.1144-1160
    Type
    a
  14. Egghe, L.: Theory of the topical coverage of multiple databases (2013) 0.00
    Abstract
    We present a model that describes which fraction of the literature on a certain topic we will find when we use n (n = 1, 2, ...) databases. It is a generalization of the theory of discovering usability problems. We prove that, in all practical cases, this fraction is a concave function of n, the number of databases used, thereby explaining some graphs that exist in the literature. We also study limiting features of this fraction for n very high, and we characterize the case in which we find all literature on a certain topic for n high enough.
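    For intuition only: the simplest homogeneous version of the usability-discovery theory the abstract generalizes, with an assumed per-database coverage probability p. The fraction found is concave and increasing in n:

```python
# Toy version of the "discovering usability problems" model: with n
# independent databases each covering a fraction p of the literature,
# the fraction found is 1 - (1 - p)**n (p = 0.4 is an invented value).
p = 0.4
fractions = [1 - (1 - p) ** n for n in range(1, 6)]
print([round(f, 3) for f in fractions])  # [0.4, 0.64, 0.784, 0.87, 0.922]

# Concavity: each extra database adds less than the previous one.
gaps = [b - a for a, b in zip(fractions, fractions[1:])]
print(all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:])))  # True
```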
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.1, S.126-131
    Type
    a
  15. Egghe, L.: New relations between similarity measures for vectors based on vector norms (2009) 0.00
    Abstract
    The well-known similarity measures Jaccard, Salton's cosine, Dice, and several related overlap measures for vectors are compared. While general relations are not possible to prove, we study these measures on the trajectories of the form [X]=a[Y], where a > 0 is a constant and [·] denotes the Euclidean norm of a vector. In this case, direct functional relations between these measures are proved. For Jaccard, we prove that it is a convexly increasing function of Salton's cosine measure, but always smaller than or equal to the latter, hereby explaining a curve, experimentally found by Leydesdorff. All the other measures have a linear relation with Salton's cosine, reducing even to equality, in case a = 1. Hence, for equally normed vectors (e.g., for normalized vectors) we, essentially, only have Jaccard's measure and Salton's cosine measure since all the other measures are equal to the latter.
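    A numeric spot-check of the a = 1 case (equally normed vectors), using the vector (Tanimoto) form of Jaccard; the vectors are invented:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

X = [3.0, 4.0]   # norm 5
Y = [5.0, 0.0]   # norm 5, so a = ||X|| / ||Y|| = 1

cos = dot(X, Y) / (norm(X) * norm(Y))
dice = 2 * dot(X, Y) / (dot(X, X) + dot(Y, Y))
jacc = dot(X, Y) / (dot(X, X) + dot(Y, Y) - dot(X, Y))  # Tanimoto/Jaccard for vectors

print(round(cos, 3), round(dice, 3), round(jacc, 3))  # 0.6 0.6 0.429
```

    As the abstract states, for a = 1 the Dice measure collapses onto Salton's cosine, while Jaccard stays below it.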
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.2, S.232-239
    Type
    a
  16. Egghe, L.; Rousseau, R.: ¬The influence of publication delays on the observed aging distribution of scientific literature (2000) 0.00
    Abstract
    Observed aging curves are influenced by publication delays. In this article, we show how the 'undisturbed' aging function and the publication delay combine to give the observed aging function. This combination is performed by a mathematical operation known as convolution. Examples are given, such as the convolution of 2 Poisson distributions, 2 exponential distributions, and 2 lognormal distributions. A paradox between theory and real data is observed.
    Source
    Journal of the American Society for Information Science. 51(2000) no.2, S.158-165
    Type
    a
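    The Poisson example from the abstract is easy to verify: the convolution of two Poisson distributions is again Poisson, with the means added. A minimal sketch (the specific means 3.0 and 1.5 are illustrative, not taken from the paper):

    ```python
    import math

    def poisson_pmf(lam, n):
        """First n terms of the Poisson(lam) probability mass function."""
        return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(n)]

    def convolve(p, q):
        """Discrete convolution: the distribution of the sum of two independent delays."""
        out = [0.0] * (len(p) + len(q) - 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                out[i + j] += pi * qj
        return out

    aging = poisson_pmf(3.0, 40)       # 'undisturbed' aging distribution (illustrative)
    delay = poisson_pmf(1.5, 40)       # publication-delay distribution (illustrative)
    observed = convolve(aging, delay)  # observed aging = convolution of the two

    # The convolution of Poisson(3.0) and Poisson(1.5) is Poisson(4.5):
    expected = poisson_pmf(4.5, 40)
    print(max(abs(a - b) for a, b in zip(observed, expected)) < 1e-9)  # True
    ```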
  17. Egghe, L.; Rousseau, R.: ¬An h-index weighted by citation impact (2008)
    
    Abstract
    An h-type index is proposed that depends on the citations obtained by the articles belonging to the h-core. This weighted h-index, denoted hw, is presented in both a continuous and a discrete setting. It is shown that in the continuous setting the new index enjoys many good properties; in the discrete setting some small deviations from the ideal may occur.
    Source
    Information processing and management. 44(2008) no.2, S.770-780
    Type
    a
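    The weighted index hw builds on the h-core: the h highest-cited papers under the classical h-index. The exact weighting scheme is defined in the paper; as a baseline, a minimal sketch of the classical h-index and its core (the example citation counts are illustrative):

    ```python
    def h_index(citations):
        """Classical h-index: the largest h such that h papers each have >= h citations."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(ranked, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    cites = [10, 8, 5, 4, 3]
    h = h_index(cites)
    print(h)                                   # 4: four papers with at least 4 citations each
    h_core = sorted(cites, reverse=True)[:h]   # the papers whose citations hw would weight
    print(h_core)                              # [10, 8, 5, 4]
    ```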
  18. Egghe, L.; Rousseau, R.; Rousseau, S.: TOP-curves (2007)
    
    Abstract
    Several characteristics of classical Lorenz curves make them unsuitable for the study of a group of top performers. TOP-curves, defined as a kind of mirror image of the TIP-curves used in poverty studies, are shown to possess the properties necessary for adequate empirical ranking of various data arrays, based on the properties of the highest performers (i.e., the core). TOP-curves and essential TOP-curves, also introduced in this article, simultaneously represent the incidence, intensity, and inequality among the top. It is shown that the TOP-dominance partial order, introduced in this article, is stronger than the Lorenz dominance order. In this way, this article contributes to the study of cores, a central issue in applied informetrics.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.777-785
    Type
    a
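    The idea of looking at a distribution "from the top" can be illustrated with a cumulative-share curve over the highest performers. This is only a sketch of the general idea, not the paper's formal TOP-curve definition (which mirrors TIP-curves and uses incidence, intensity, and inequality components):

    ```python
    def top_share_curve(values):
        """Cumulative share of the total held by the top r performers, r = 1..N.
        Illustrative 'view from the top'; not the formal TOP-curve of the paper."""
        ranked = sorted(values, reverse=True)   # rank sources by decreasing production
        total = sum(ranked)
        shares, running = [], 0.0
        for v in ranked:
            running += v
            shares.append(running / total)
        return shares

    # Hypothetical citation counts for six sources:
    curve = top_share_curve([50, 20, 10, 10, 5, 5])
    print([round(s, 2) for s in curve])  # [0.5, 0.7, 0.8, 0.9, 0.95, 1.0]
    ```

    The more concentrated the data among the top performers, the faster such a curve rises; a dominance comparison between two data arrays compares these curves pointwise.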
  19. Egghe, L.: Expansion of the field of informetrics : the second special issue (2006)
    
    Source
    Information processing and management. 42(2006) no.6, S.1405-1407
    Type
    a
  20. Egghe, L.: Expansion of the field of informetrics : origins and consequences (2005)
    
    Source
    Information processing and management. 41(2005) no.6, S.1311-1316
    Type
    a