Search (44 results, page 1 of 3)

  • author_ss:"Egghe, L."
  • language_ss:"e"
  • theme_ss:"Informetrie"
  1. Egghe, L.: A rationale for the Hirsch-index rank-order distribution and a comparison with the impact factor rank-order distribution (2009) 0.04
    0.043737486 = product of:
      0.17494994 = sum of:
        0.017463053 = weight(_text_:of in 3124) [ClassicSimilarity], result of:
          0.017463053 = score(doc=3124,freq=10.0), product of:
            0.06457475 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.041294612 = queryNorm
            0.2704316 = fieldWeight in 3124, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3124)
        0.15748689 = sum of:
          0.01544937 = weight(_text_:on in 3124) [ClassicSimilarity], result of:
            0.01544937 = score(doc=3124,freq=2.0), product of:
              0.090823986 = queryWeight, product of:
                2.199415 = idf(docFreq=13325, maxDocs=44218)
                0.041294612 = queryNorm
              0.17010231 = fieldWeight in 3124, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.199415 = idf(docFreq=13325, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3124)
          0.14203751 = weight(_text_:line in 3124) [ClassicSimilarity], result of:
            0.14203751 = score(doc=3124,freq=4.0), product of:
              0.23157367 = queryWeight, product of:
                5.6078424 = idf(docFreq=440, maxDocs=44218)
                0.041294612 = queryNorm
              0.6133578 = fieldWeight in 3124, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.6078424 = idf(docFreq=440, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3124)
      0.25 = coord(2/8)
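The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown of the 0.04 score. As a sanity check, a minimal Python sketch reproduces the displayed numbers for the term "line" in document 3124 from the constants shown in the tree (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm):

```python
import math

# Constants copied from the explain tree for "line" in doc 3124.
query_norm = 0.041294612
idf = 5.6078424          # idf(docFreq=440, maxDocs=44218)
freq = 4.0               # termFreq within the field
field_norm = 0.0546875

# ClassicSimilarity pieces.
tf = math.sqrt(freq)                  # 2.0
query_weight = idf * query_norm       # ~0.23157367 (queryWeight in the tree)
field_weight = tf * idf * field_norm  # ~0.6133578 (fieldWeight in the tree)
score = query_weight * field_weight   # ~0.14203751 (term weight in the tree)

# The idf itself follows ClassicSimilarity's 1 + ln(maxDocs / (docFreq + 1)).
assert abs(idf - (1 + math.log(44218 / (440 + 1)))) < 1e-5

print(round(query_weight, 6), round(field_weight, 6), round(score, 6))
```

The per-result total then sums such term weights and multiplies by the coord factor (here 2 of 8 query terms matched, hence 0.25).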
    
    Abstract
    We present a rationale for the Hirsch-index rank-order distribution and prove that it is a power law (hence a straight line in the log-log scale). This is confirmed by experimental data of Pyykkö and by data produced in this article on 206 mathematics journals. This distribution is of a completely different nature than the impact factor (IF) rank-order distribution, which (as proved in a previous article) is S-shaped. This is also confirmed by our example. Only in the log-log scale of the h-index distribution do we notice a concave deviation from the straight line at higher ranks. This phenomenon is discussed.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.10, S.2142-2144
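The abstract's central claim, that a power law appears as a straight line in the log-log scale, is easy to check numerically. A minimal sketch with illustrative constants (not values from the paper):

```python
import math

# A power law f(r) = C * r^(-a) becomes log f = log C - a * log r,
# i.e. a straight line with slope -a on log-log axes.
C, a = 100.0, 1.7          # illustrative constants, not from the paper
ranks = [1, 2, 5, 10, 50, 100]
points = [(math.log(r), math.log(C * r ** -a)) for r in ranks]

# The slope between consecutive points is the same everywhere: -a.
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(points, points[1:])]
print([round(s, 6) for s in slopes])  # -> [-1.7, -1.7, -1.7, -1.7, -1.7]
```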
  2. Egghe, L.: Type/Token-Taken informetrics (2003) 0.03
    
    Abstract
    Type/Token-Taken informetrics is a new part of informetrics that studies the use of items rather than the items themselves. Here, items are the objects produced by the sources (e.g., journals producing articles, authors producing papers, etc.). In linguistics a source is also called a type (e.g., a word), and an item a token (e.g., the use of words in texts). In informetrics, types that occur often, for example, in a database will also be requested often, for example, in information retrieval. The relative use of these occurrences will be higher than their relative occurrences themselves; hence the name Type/Token-Taken informetrics. This article studies the frequency distribution of Type/Token-Taken informetrics, starting from that of Type/Token informetrics (i.e., source-item relationships). We also study the average number µ* of item uses in Type/Token-Taken informetrics and compare it with the classical average number µ in Type/Token informetrics. We show that µ* >= µ always, and that µ* is an increasing function of µ. A method is presented to actually calculate µ* from µ and a given α, the exponent in Lotka's frequency distribution of Type/Token informetrics. We leave open the problem of developing non-Lotkaian Type/Token-Taken informetrics.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.7, S.603-610
  3. Egghe, L.: Theory of the topical coverage of multiple databases (2013) 0.02
    
    Abstract
    We present a model that describes which fraction of the literature on a certain topic we will find when we use n (n = 1, 2, ...) databases. It is a generalization of the theory of discovering usability problems. We prove that, in all practical cases, this fraction is a concave function of n, the number of databases used, thereby explaining some graphs that exist in the literature. We also study limiting features of this fraction for very high n and characterize the case in which we find all the literature on a certain topic for n high enough.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.1, S.126-131
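The usability-problem discovery theory that this model generalizes has a well-known single-parameter form, f(n) = 1 - (1 - p)^n, which already exhibits the concave shape the abstract describes. A sketch under that simplifying assumption (the paper's model is more general):

```python
# Fraction of the literature found with n databases, in the simplest
# instance of the discovery model: f(n) = 1 - (1 - p)^n, where p is the
# chance that a single database covers a given document.
p = 0.3  # illustrative value, not from the paper
f = [1 - (1 - p) ** n for n in range(0, 8)]

# Concavity: the gain from each extra database shrinks as n grows.
gains = [b - a for a, b in zip(f, f[1:])]
assert all(g2 < g1 for g1, g2 in zip(gains, gains[1:]))

print([round(x, 3) for x in f])  # -> [0.0, 0.3, 0.51, 0.657, 0.76, 0.832, 0.882, 0.918]
```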
  4. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.02
    
    Abstract
    It is possible, using ISI's Journal Citation Reports (JCR), to calculate average impact factors (AIF) for JCR's subject categories, but it can be more useful to know the global impact factor (GIF) of a subject category and compare the two values. Reports results of a study comparing the relationships between AIFs and GIFs of subjects, based on the particular case of the average impact factor of a subfield versus the impact factor of this subfield as a whole; the difference studied is between an average of quotients, denoted AQ, and a global average, obtained as a quotient of averages and denoted GQ. In the case of impact factors, AQ becomes the average impact factor of a field, and GQ becomes its global impact factor. Discusses a number of applications of this technique in the context of informetrics and scientometrics
    Source
    Journal of information science. 22(1996) no.3, S.165-170
  5. Egghe, L.: Untangling Herdan's law and Heaps' law : mathematical and informetric arguments (2007) 0.02
    
    Abstract
    Herdan's law in linguistics and Heaps' law in information retrieval are different formulations of the same phenomenon. Stated briefly and in linguistic terms, they state that vocabularies' sizes are concave increasing power laws of texts' sizes. This study investigates these laws from a purely mathematical and informetric point of view. A general informetric argument shows that the problem of proving these laws is, in fact, ill-posed. Using the more general terminology of sources and items, the author shows by presenting exact formulas from Lotkaian informetrics that the total number T of sources is not only a function of the total number A of items, but is also a function of several parameters (e.g., the parameters occurring in Lotka's law). Consequently, it is shown that a fixed T (or A) value can lead to different possible A (respectively, T) values. Limiting the T(A)-variability to increasing samples (e.g., in a text as done in linguistics), the author then shows, in a purely mathematical way, that for large sample sizes T ~ A**phi, where phi is a constant, phi < 1 but close to 1; hence, roughly, Heaps' or Herdan's law can be proved without using any linguistic or informetric argument. The author also shows that for smaller samples, phi is not a constant but essentially decreases, as confirmed by practical examples. Finally, an exact informetric argument on random sampling in the items shows that, in most cases, T = T(A) is a concavely increasing function, in accordance with practical examples.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.5, S.702-709
  6. Egghe, L.; Rousseau, R.: Duality in information retrieval and the hypergeometric distribution (1997) 0.01
    
    Abstract
    Asserts that duality is an important topic in informetrics, especially in connection with the classical informetric laws. Yet this concept is less studied in information retrieval. It deals with the unification or symmetry between queries and documents, search formulation versus indexing, and relevant versus retrieved documents. Elaborates these ideas and highlights the connection with the hypergeometric distribution
    Source
    Journal of documentation. 53(1997) no.5, S.499-496
  7. Egghe, L.: A new short proof of Naranan's theorem, explaining Lotka's law and Zipf's law (2010) 0.01
    
    Abstract
    Naranan's important theorem, published in Nature in 1970, states that if the number of journals grows exponentially and if the number of articles in each journal grows exponentially (at the same rate for each journal), then the system satisfies Lotka's law, and a formula for Lotka's exponent is given as a function of the growth rates of the journals and the articles. This brief communication re-proves this result by showing that the system satisfies Zipf's law, which is equivalent to Lotka's law. The proof is short and algebraic and does not use infinitesimal arguments.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.12, S.2581-2583
  8. Egghe, L.; Guns, R.: Applications of the generalized law of Benford to informetric data (2012) 0.01
    
    Abstract
    In a previous work (Egghe, 2011), the first author showed that Benford's law (describing the logarithmic distribution of the numbers 1, 2, ... , 9 as first digits of data in decimal form) is related to the classical law of Zipf with exponent 1. The work of Campanario and Coslado (2011), however, shows that Benford's law does not always fit practical data in a statistical sense. In this article, we use a generalization of Benford's law related to the general law of Zipf with exponent ? > 0. Using data from Campanario and Coslado, we apply nonlinear least squares to determine the optimal ? and show that this generalized law of Benford fits the data better than the classical law of Benford.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.8, S.1662-1665
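Classical Benford's law gives P(d) = log10(1 + 1/d) for first digits d = 1..9. One common generalization tied to a Zipf-type exponent is sketched below as an assumption (the paper's exact formulation may differ); it telescopes to a proper distribution for any exponent and reduces to the classical law as the exponent tends to 1:

```python
import math

def classical_benford(d):
    # Benford's law for the first digit d in 1..9.
    return math.log10(1 + 1 / d)

def generalized_benford(d, alpha):
    # One common generalization linked to a Zipf-type exponent alpha
    # (a sketch; the paper's exact formulation may differ). It reduces
    # to the classical law as alpha -> 1.
    if abs(alpha - 1) < 1e-12:
        return classical_benford(d)
    num = (d + 1) ** (1 - alpha) - d ** (1 - alpha)
    den = 10 ** (1 - alpha) - 1
    return num / den

# The numerators telescope, so probabilities sum to 1 for any exponent.
for alpha in (0.5, 1.0, 2.0):
    total = sum(generalized_benford(d, alpha) for d in range(1, 10))
    assert abs(total - 1) < 1e-9

print(round(generalized_benford(1, 1.0001), 4), round(classical_benford(1), 4))
```

Fitting the exponent to observed first-digit frequencies, as the abstract describes, would then be a one-parameter nonlinear least-squares problem over this family.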
  9. Egghe, L.: Mathematical theories of citation (1998) 0.01
    
    Abstract
    Focuses on possible mathematical theories of citation and on the intrinsic problems related to it. Sheds light on aspects of mathematical complexity as encountered in, for example, fractal theory and Mandelbrot's law. Also discusses dynamical aspects of citation theory as reflected in evolutions of journal rankings, centres of gravity or of the set of source journals. Makes some comments in this connection on growth and obsolescence
    Footnote
    Contribution to a thematic issue devoted to 'Theories of citation?'
  10. Egghe, L.: Sampling and concentration values of incomplete bibliographies (2002) 0.01
    
    Abstract
    This article studies concentration aspects of bibliographies. In particular, we study the impact of the incompleteness of such a bibliography on its concentration values (i.e., its degree of inequality of production of its sources). Incompleteness is modeled by sampling in the complete bibliography. The model is general enough to comprise truncation of a bibliography as well as a systematic sample on sources or items. In all cases we prove that the sampled (or incomplete) bibliography has a higher concentration value than the complete one. These models, hence, shed some light on the measurement of production inequality in incomplete bibliographies.
    Source
    Journal of the American Society for Information Science and technology. 53(2002) no.4, S.271-281
  11. Egghe, L.: Mathematical study of h-index sequences (2009) 0.01
    
    Abstract
    This paper studies mathematical properties of h-index sequences as developed by Liang [Liang, L. (2006). h-Index sequence and h-index matrix: Constructions and applications. Scientometrics, 69(1), 153-159]. For practical reasons, Liang studies such sequences where the time goes backwards, while it is more logical to use the time going forward (real career periods). Both types of h-index sequences are studied here and their interrelations are revealed. We show cases where these sequences are convex, linear and concave. We also show that, when one of the sequences is convex, then the other one is concave, showing that the reverse-time sequence, in general, cannot be used to derive similar properties of the (difficult to obtain) forward-time sequence. We show that both sequences are the same if and only if the author produces the same number of papers per year. If the author produces an increasing number of papers per year, then Liang's h-sequences are above the "normal" ones. All these results are also valid for g- and R-sequences. The results are confirmed by the h-, g- and R-sequences (forward and reverse time) of the author.
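A forward-time h-index sequence, as discussed in the abstract, can be sketched directly from the definition of the h-index (the citation counts below are illustrative, not data from the paper):

```python
def h_index(citations):
    # h = the largest h such that at least h papers have >= h citations each.
    cs = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cs, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Forward-time h-sequence: h-index of the cumulated publication record
# after each career year (illustrative citation counts).
per_year = [[4], [10, 1], [3, 3, 7], [6, 2]]
record = []
h_sequence = []
for papers in per_year:
    record.extend(papers)
    h_sequence.append(h_index(record))

print(h_sequence)  # -> [1, 2, 3, 4]
```

Because the cumulated record only grows, the forward-time sequence is nondecreasing; the reverse-time sequence studied by Liang does not share this property.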
  12. Egghe, L.: Special features of the author - publication relationship and a new explanation of Lotka's law based on convolution theory (1994) 0.01
    
    Source
    Journal of the American Society for Information Science. 45(1994) no.6, S.422-427
  13. Egghe, L.: The influence of transformations on the h-index and the g-index (2008) 0.01
    
    Abstract
    In a previous article, we introduced a general transformation on sources and one on items in an arbitrary information production process (IPP). In this article, we investigate the influence of these transformations on the h-index and on the g-index. General formulae that describe this influence are presented. These are applied to the case that the size-frequency function is Lotkaian (i.e., is a decreasing power function). We further show that the h-index of the transformed IPP belongs to the interval bounded by the two transformations of the h-index of the original IPP, and we also show that this property is not true for the g-index.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.8, S.1304-1312
  14. Egghe, L.: On the law of Zipf-Mandelbrot for multi-word phrases (1999) 0.01
    
    Abstract
This article studies the probabilities of the occurrence of multi-word (m-word) phrases (m=2,3,...) in relation to the probabilities of occurrence of the single words. It is well known that, in the latter case, the law of Zipf is valid (i.e., a power law). We prove that in the case of m-word phrases (m>=2) this is no longer the case. We present two independent proofs of this.
    Source
    Journal of the American Society for Information Science. 50(1999) no.3, S.233-241
  15. Egghe, L.; Ravichandra Rao, I.K.: Study of different h-indices for groups of authors (2008) 0.01
    
    Abstract
In this article, for any group of authors, we define three different h-indices. First, there is the successive h-index h2, based on the ranked list of authors and their h-indices h1, as defined by Schubert (2007). Next, there is the h-index hP, based on the ranked list of authors and their numbers of publications. Finally, there is the h-index hC, based on the ranked list of authors and their numbers of citations. We present formulae for these three indices in Lotkaian informetrics, from which it also follows that h2 < hP < hC. We give a concrete example of a group of 167 authors on the topic of optical flow estimation. Besides these three h-indices, we also calculate the two-by-two Spearman rank correlation coefficients and show that these rankings are significantly related.
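All three group indices described in the abstract are ordinary h-indices applied to different per-author value lists. A small sketch under our own assumptions (the toy author tuples are invented, not the article's data):

```python
# Hedged sketch of the three group h-indices from the abstract: each one is
# an h-index computed over a ranked list of per-author values.

def h_index(values):
    """Largest h such that at least h entries are >= h."""
    ranked = sorted(values, reverse=True)
    h = 0
    for rank, v in enumerate(ranked, start=1):
        if v >= rank:
            h = rank
    return h

# Toy group of authors: (individual h-index h1, publications, citations)
authors = [(12, 40, 900), (9, 25, 400), (7, 30, 350), (4, 10, 60), (3, 8, 30)]

h2 = h_index([a[0] for a in authors])  # successive h-index (Schubert, 2007)
hP = h_index([a[1] for a in authors])  # based on publication counts
hC = h_index([a[2] for a in authors])  # based on citation counts
print(h2, hP, hC)  # 4 5 5 -- consistent with h2 <= hP <= hC
```

On this toy data the ordering h2 <= hP <= hC holds, matching the Lotkaian inequality stated in the abstract.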
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.8, S.1276-1281
  16. Egghe, L.; Ravichandra Rao, I.K.: ¬The influence of the broadness of a query of a topic on its h-index : models and examples of the h-index of n-grams (2008) 0.01
    
    Abstract
The article studies the influence of the query formulation of a topic on its h-index. In order to generate pure random sets of documents, we used N-grams (N variable) to measure this influence: strings of zeros, truncated at the end. The databases used are WoS and Scopus. The formula h=T**(1/alpha), proved in Egghe and Rousseau (2006), where T is the number of retrieved documents and alpha is Lotka's exponent, is confirmed to be a concavely increasing function of T. We also give a formula for the relation between h and the length N of the N-gram: h=D*10**(-N/alpha), where D is a constant; this convexly decreasing function is found in our experiments. Nonlinear regression on h=T**(1/alpha) gives an estimation of alpha, which can then be used to estimate the h-index of the entire database (Web of Science [WoS] and Scopus): h=S**(1/alpha), where S is the total number of documents in the database.
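The two formulas quoted in the abstract are simple power/exponential laws and can be sketched numerically. The code below is an illustration under an assumed Lotka exponent (alpha = 2, a typical value); the function names and the constant D are ours, not the article's.

```python
# Sketch of the two relations quoted in the abstract, with an illustrative
# Lotka exponent alpha = 2 (assumption, not the article's fitted value).

def h_from_T(T, alpha):
    """h = T**(1/alpha): h-index predicted for T retrieved documents."""
    return T ** (1.0 / alpha)

def h_from_N(N, alpha, D):
    """h = D * 10**(-N/alpha): h-index as a function of N-gram length N."""
    return D * 10 ** (-N / alpha)

alpha = 2.0
for T in (100, 1000, 10000):
    print(T, round(h_from_T(T, alpha), 1))  # 10.0, 31.6, 100.0: concave growth
for N in (0, 1, 2):
    print(N, round(h_from_N(N, alpha, D=50.0), 1))  # 50.0, 15.8, 5.0: convex decay
```

Fitting alpha by nonlinear regression on observed (T, h) pairs then lets one extrapolate h = S**(1/alpha) for the whole database of S documents, as the abstract describes.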
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.10, S.1688-1693
  17. Egghe, L.: Expansion of the field of informetrics : the second special issue (2006) 0.01
    
    Footnote
Introduction to a "Special Issue on Informetrics"
  18. Egghe, L.: Expansion of the field of informetrics : origins and consequences (2005) 0.01
    
    Footnote
Introduction to a "Special Issue on Infometrics"
  19. Egghe, L.: Note on a possible decomposition of the h-Index (2013) 0.01
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.871
  20. Egghe, L.; Rousseau, R.; Rousseau, S.: TOP-curves (2007) 0.01
    
    Abstract
Several characteristics of classical Lorenz curves make them unsuitable for the study of a group of top performers. TOP-curves, defined as a kind of mirror image of the TIP-curves used in poverty studies, are shown to possess the properties necessary for an adequate empirical ranking of various data arrays, based on the properties of the highest performers (i.e., the core). TOP-curves and essential TOP-curves, also introduced in this article, simultaneously represent the incidence, intensity, and inequality among the top. It is shown that the TOP-dominance partial order, introduced in this article, is stronger than the Lorenz dominance order. In this way, this article contributes to the study of cores, a central issue in applied informetrics.
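A much-simplified numerical sketch of the idea behind a TOP-curve: for a top fraction p of ranked sources, report the share of total output they produce. This is our own reduction of the concept, not the article's formal definition (which also captures incidence and intensity within the core); the function name and data are invented.

```python
# Hedged sketch (our simplification, not the article's definition): one point
# of a TOP-curve-like statistic -- the output share of the top fraction p
# of ranked sources.

def top_share(values, p):
    """Share of total output produced by the top fraction p of sources."""
    ranked = sorted(values, reverse=True)
    k = max(1, round(p * len(ranked)))   # number of sources in the top group
    return sum(ranked[:k]) / sum(ranked)

data = [50, 20, 10, 10, 5, 3, 1, 1]      # toy per-source output counts
print(top_share(data, 0.25))  # top quarter of sources -> 0.7 of all output
```

Sweeping p from 0 to 1 traces the whole curve; a steeper rise near p = 0 signals a more dominant core, which is the inequality aspect the article's dominance order compares.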
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.777-785