Search (5 results, page 1 of 1)

  • × author_ss:"Egghe, L."
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
  1. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.00
    Score 6.94157E-4 = weight(_text_:22 in 4992) [ClassicSimilarity] (queryWeight 0.0825 × fieldWeight 0.3869; tf 1.41 at freq 2, idf 3.50, fieldNorm 0.078) × coord(1/2) × coord(1/23)
    
    Date
    14. 2.2012 12:53:22
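    For readers puzzling over the score line in entry 1 above, the following minimal Python sketch recombines the factors that Lucene's ClassicSimilarity reports (tf = sqrt of the term frequency; queryWeight = idf × queryNorm; fieldWeight = tf × idf × fieldNorm; the product is then scaled by the two coord factors). The numbers are copied from the entry above; the function and its name are our own illustration, not part of the Lucene API.

    import math

    def classic_similarity_score(freq, idf, query_norm, field_norm, coords):
        """Recombine the factors reported in a ClassicSimilarity score breakdown."""
        tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
        query_weight = idf * query_norm       # 3.5018296 * 0.023567878 = 0.08253069
        field_weight = tf * idf * field_norm  # 1.4142135 * 3.5018296 * 0.078125 = 0.38690117
        score = query_weight * field_weight   # 0.03193122
        for coord in coords:                  # coord(1/2) and coord(1/23)
            score *= coord
        return score

    # Entry 1: weight(_text_:22 in 4992)
    print(classic_similarity_score(freq=2.0, idf=3.5018296,
                                   query_norm=0.023567878, field_norm=0.078125,
                                   coords=(1/2, 1/23)))    # ~6.94157e-4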
  2. Egghe, L.: Influence of adding or deleting items and sources on the h-index (2010) 0.00
    Score 4.2027488E-4 = weight(_text_:29 in 3336) [ClassicSimilarity] (queryWeight 0.0829 × fieldWeight 0.2332; tf 1.41 at freq 2, idf 3.52, fieldNorm 0.047) × coord(1/2) × coord(1/23)
    
    Date
    31. 5.2010 15:02:29
  3. Egghe, L.: The Hirsch index and related impact measures (2010) 0.00
    Score 4.0990516E-4 = weight(_text_:1 in 1597) [ClassicSimilarity] (queryWeight 0.0579 × fieldWeight 0.3257; tf 1.41 at freq 2, idf 2.46, fieldNorm 0.094) × coord(1/2) × coord(1/23)
    
    Source
    Annual review of information science and technology. 44(2010) no.1, pp. 65-114
  4. Egghe, L.: Theory of the topical coverage of multiple databases (2013) 0.00
    Score 3.381545E-4 = weight(_text_:1 in 526) [ClassicSimilarity] (queryWeight 0.0579 × fieldWeight 0.2687; tf 2.0 at freq 4, idf 2.46, fieldNorm 0.055) × coord(1/2) × coord(1/23)
    
    Abstract
    We present a model that describes which fraction of the literature on a certain topic we will find when we use n (n = 1, 2, ...) databases. It is a generalization of the theory of discovering usability problems. We prove that, in all practical cases, this fraction is a concave function of n, the number of databases used, thereby explaining some graphs that exist in the literature. We also study the limiting behaviour of this fraction for very large n, and we characterize the case in which we find all the literature on a certain topic once n is high enough.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.1, pp. 126-131
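    The abstract above builds on the classical model for discovering usability problems, in which each database (like each evaluator) independently finds a given item with probability p, so n databases together cover a fraction 1 - (1 - p)^n of the literature. The short Python sketch below illustrates only this base case, not Egghe's generalization, and the value p = 0.35 is an arbitrary illustration; it shows the increasing, concave coverage curve that the paper proves in the general setting.

    def coverage_fraction(n, p=0.35):
        """Fraction of the topical literature found by n databases in the classical
        'discovering usability problems' model (the base case the paper generalizes)."""
        return 1.0 - (1.0 - p) ** n

    for n in range(1, 7):
        print(n, round(coverage_fraction(n), 3))
    # 1 0.35, 2 0.578, 3 0.725, 4 0.821, 5 0.884, 6 0.925 -- rising and flattening (concave in n)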
  5. Egghe, L.; Guns, R.: Applications of the generalized law of Benford to informetric data (2012) 0.00
    Score 2.8984674E-4 = weight(_text_:1 in 376) [ClassicSimilarity] (queryWeight 0.0579 × fieldWeight 0.2303; tf 2.0 at freq 4, idf 2.46, fieldNorm 0.047) × coord(1/2) × coord(1/23)
    
    Abstract
    In a previous work (Egghe, 2011), the first author showed that Benford's law (describing the logarithmic distribution of the numbers 1, 2, ... , 9 as first digits of data in decimal form) is related to the classical law of Zipf with exponent 1. The work of Campanario and Coslado (2011), however, shows that Benford's law does not always fit practical data in a statistical sense. In this article, we use a generalization of Benford's law related to the general law of Zipf with exponent ? > 0. Using data from Campanario and Coslado, we apply nonlinear least squares to determine the optimal ? and show that this generalized law of Benford fits the data better than the classical law of Benford.
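    The abstract above describes fitting a generalized Benford distribution to observed first-digit frequencies by nonlinear least squares (the exponent symbol did not survive the character encoding and is shown as "?"). The sketch below illustrates that kind of fit with one common parametrization of the generalized Benford law, P(d) = ((d+1)^(1-a) - d^(1-a)) / (10^(1-a) - 1), which tends to the classical law log10(1 + 1/d) as a -> 1; this is not necessarily the exact form used in the paper, and the digit proportions are invented for illustration, not the Campanario and Coslado data.

    import numpy as np
    from scipy.optimize import curve_fit

    def generalized_benford(d, a):
        """One common form of the generalized Benford law for first digits d = 1..9 (a != 1).
        The classical Benford law log10(1 + 1/d) is the limit as a -> 1."""
        d = np.asarray(d, dtype=float)
        return ((d + 1.0) ** (1.0 - a) - d ** (1.0 - a)) / (10.0 ** (1.0 - a) - 1.0)

    digits = np.arange(1, 10)
    # Invented first-digit proportions, for illustration only.
    observed = np.array([0.34, 0.17, 0.12, 0.09, 0.08, 0.06, 0.05, 0.05, 0.04])

    # Nonlinear least squares for the optimal exponent, as described in the abstract.
    (a_opt,), _ = curve_fit(generalized_benford, digits, observed, p0=[1.2])
    print("fitted exponent:", round(a_opt, 3))
    print("fitted proportions:", np.round(generalized_benford(digits, a_opt), 3))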