Search (16 results, page 1 of 1)

  • author_ss:"Egghe, L."
  • year_i:[2010 TO 2020}
  1. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.03
    
    Date
    14.2.2012 12:53:22
    Type
    a
  2. Egghe, L.: A good normalized impact and concentration measure (2014) 0.00
    
    Abstract
    It is shown that a normalized version of the g-index is a good normalized impact and concentration measure. A proposal for such a measure by Bartolucci is improved.
    Type
    a
  3. Egghe, L.: On the relation between the association strength and other similarity measures (2010) 0.00
    
    Abstract
    A graph in van Eck and Waltman [JASIST, 60(8), 2009, p. 1644], representing the relation between the association strength and the cosine, is partially explained as a sheaf of parabolas, each parabola being the functional relation between these similarity measures on the trajectories x*y=a, a constant. Based on earlier obtained relations between cosine and other similarity measures (e.g., Jaccard index), we can prove new relations between the association strength and these other measures.
    Type
    a
  4. Egghe, L.: Theory of the topical coverage of multiple databases (2013) 0.00
    
    Abstract
    We present a model that describes which fraction of the literature on a certain topic we will find when we use n (n = 1, 2, ...) databases. It is a generalization of the theory of discovering usability problems. We prove that, in all practical cases, this fraction is a concave function of n, the number of databases used, thereby explaining some graphs that exist in the literature. We also study limiting features of this fraction for very high n, and we characterize the case in which, for n high enough, we find all literature on a certain topic.
    Type
    a
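The "theory of discovering usability problems" that this model generalizes is the classic formula f(n) = 1 - (1 - p)^n; a minimal sketch of that base case (the value of p is an invented, illustrative per-database retrieval probability, not taken from the paper):

```python
def coverage(n: int, p: float) -> float:
    """Fraction of the literature found after searching n databases,
    assuming each database independently finds a fraction p of it."""
    return 1.0 - (1.0 - p) ** n

# The fraction is increasing and concave in n: each added database helps less.
fractions = [coverage(n, 0.4) for n in range(1, 6)]
gains = [b - a for a, b in zip(fractions, fractions[1:])]
assert all(g > 0 for g in gains)                         # increasing
assert all(g2 < g1 for g1, g2 in zip(gains, gains[1:]))  # diminishing gains
```

The concavity asserted here is exactly the qualitative shape the abstract proves for the general model.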
  5. Egghe, L.: Note on a possible decomposition of the h-Index (2013) 0.00
    
    Type
    a
  6. Egghe, L.: Remarks on the paper by A. De Visscher, "What does the g-index really measure?" (2012) 0.00
    
    Abstract
    The author presents a different view on properties of impact measures than given in the paper of De Visscher (2011). He argues that a good impact measure works better when citations are concentrated rather than spread out over articles. The author also presents theoretical evidence that the g-index and the R-index can be close to the square root of the total number of citations, whereas this is not the case for the A-index. Here the author confirms an assertion of De Visscher.
    Type
    a
  7. Egghe, L.: Good properties of similarity measures and their complementarity (2010) 0.00
    
    Abstract
    Similarity measures, such as the ones of Jaccard, Dice, or Cosine, measure the similarity between two vectors. A good property for similarity measures would be that, if we add a constant vector to both vectors, then the similarity must increase. We show that Dice and Jaccard satisfy this property while Cosine and both overlap measures do not. Adding a constant vector is called, in Lorenz concentration theory, "nominal increase" and we show that the stronger "transfer principle" is not a required good property for similarity measures. Another good property is that, when we have two vectors and if we add one of these vectors to both vectors, then the similarity must increase. Now Dice, Jaccard, Cosine, and one of the overlap measures satisfy this property, while the other overlap measure does not. Also a variant of this latter property is studied.
    Type
    a
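The add-a-constant-vector property can be checked numerically; a small sketch (the vector forms of Dice, Jaccard, and Cosine used here follow common informetric usage and are an assumption, not taken from the paper):

```python
import math

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))

def dice(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return 2 * dot / (sum(a * a for a in x) + sum(b * b for b in y))

def jaccard(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (sum(a * a for a in x) + sum(b * b for b in y) - dot)

x, y, c = [1.0, 0.0, 2.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0]
xc = [a + b for a, b in zip(x, c)]
yc = [a + b for a, b in zip(y, c)]

# Dice and Jaccard increase after adding the constant vector c to both vectors,
# illustrating (not proving) the property the abstract establishes for them.
assert dice(xc, yc) > dice(x, y)
assert jaccard(xc, yc) > jaccard(x, y)
```

One numerical example of course cannot show that Cosine fails the property in general; that is the paper's result.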
  8. Egghe, L.: ¬A new short proof of Naranan's theorem, explaining Lotka's law and Zipf's law (2010) 0.00
    
    Abstract
    Naranan's important theorem, published in Nature in 1970, states that if the number of journals grows exponentially and if the number of articles in each journal grows exponentially (at the same rate for each journal), then the system satisfies Lotka's law and a formula for the Lotka's exponent is given in function of the growth rates of the journals and the articles. This brief communication re-proves this result by showing that the system satisfies Zipf's law, which is equivalent with Lotka's law. The proof is short and algebraic and does not use infinitesimal arguments.
    Type
    a
  9. Egghe, L.; Guns, R.; Rousseau, R.: Thoughts on uncitedness : Nobel laureates and Fields medalists as case studies (2011) 0.00
    
    Abstract
    Contrary to what one might expect, Nobel laureates and Fields medalists have a rather large fraction (10% or more) of uncited publications. This is the case for (in total) 75 examined researchers from the fields of mathematics (Fields medalists), physics, chemistry, and physiology or medicine (Nobel laureates). We study several indicators for these researchers, including the h-index, total number of publications, average number of citations per publication, the number (and fraction) of uncited publications, and their interrelations. The most remarkable result is a positive correlation between the h-index and the number of uncited articles. We also present a Lotkaian model, which partially explains the empirically found regularities.
    Type
    a
  10. Egghe, L.; Guns, R.: Applications of the generalized law of Benford to informetric data (2012) 0.00
    
    Abstract
    In a previous work (Egghe, 2011), the first author showed that Benford's law (describing the logarithmic distribution of the numbers 1, 2, ..., 9 as first digits of data in decimal form) is related to the classical law of Zipf with exponent 1. The work of Campanario and Coslado (2011), however, shows that Benford's law does not always fit practical data in a statistical sense. In this article, we use a generalization of Benford's law related to the general law of Zipf with exponent α > 0. Using data from Campanario and Coslado, we apply nonlinear least squares to determine the optimal α and show that this generalized law of Benford fits the data better than the classical law of Benford.
    Type
    a
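A common parametrization of a generalized first-digit law, with a simple least-squares grid search for the exponent, can be sketched as follows (this parametrization and the grid-search fit are assumptions for illustration; the paper's exact formulation and its nonlinear least-squares procedure may differ):

```python
import math

def generalized_benford(d: int, alpha: float) -> float:
    """First-digit probability under a generalized Benford law with exponent alpha.
    alpha -> 1 recovers the classical law log10(1 + 1/d)."""
    if abs(alpha - 1.0) < 1e-9:
        return math.log10(1 + 1 / d)
    return ((d + 1) ** (1 - alpha) - d ** (1 - alpha)) / (10 ** (1 - alpha) - 1)

def fit_alpha(observed, alphas):
    """Pick the exponent on a grid that minimizes the squared error
    against the observed first-digit frequencies (digits 1..9)."""
    def sse(a):
        return sum((generalized_benford(d, a) - f) ** 2
                   for d, f in zip(range(1, 10), observed))
    return min(alphas, key=sse)

# The probabilities telescope and sum to 1 for any exponent.
assert abs(sum(generalized_benford(d, 0.7) for d in range(1, 10)) - 1.0) < 1e-9

# Frequencies generated with alpha = 1.3 are recovered by the grid search.
data = [generalized_benford(d, 1.3) for d in range(1, 10)]
grid = [0.5 + 0.05 * i for i in range(31)]  # 0.5 .. 2.0
assert abs(fit_alpha(data, grid) - 1.3) < 1e-9
```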
  11. Egghe, L.: The Hirsch index and related impact measures (2010) 0.00
    
    Type
    a
  12. Egghe, L.; Rousseau, R.: The Hirsch index of a shifted Lotka function and its relation with the impact factor (2012) 0.00
    
    Type
    a
  13. Egghe, L.; Bornmann, L.: Fallout and miss in journal peer review (2013) 0.00
    
    Abstract
    Purpose - The authors exploit the analogy between journal peer review and information retrieval in order to quantify some imperfections of journal peer review.
    Design/methodology/approach - The authors define fallout rate and missing rate in order to describe quantitatively the weak papers that were accepted and the strong papers that were missed, respectively. To assess the quality of manuscripts the authors use bibliometric measures.
    Findings - Fallout rate and missing rate are put in relation with the hitting rate and success rate. Conclusions are drawn on what fraction of weak papers will be accepted in order to have a certain fraction of strong accepted papers.
    Originality/value - The paper illustrates that these curves are new in peer review research when interpreted in the information retrieval terminology.
    Type
    a
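The two rates can be illustrated with a toy contingency table (the counts are invented, and the exact definitions are assumptions that follow the abstract's wording):

```python
# Hypothetical peer-review outcomes:
#                accepted   rejected
# strong papers        40         10
# weak papers          15         35
strong_accepted, strong_rejected = 40, 10
weak_accepted, weak_rejected = 15, 35

# Fallout rate: fraction of weak papers that were (wrongly) accepted.
fallout = weak_accepted / (weak_accepted + weak_rejected)
# Missing rate: fraction of strong papers that were (wrongly) rejected.
miss = strong_rejected / (strong_accepted + strong_rejected)
# Success (hitting) rate: fraction of strong papers that were accepted.
success = strong_accepted / (strong_accepted + strong_rejected)

assert abs(fallout - 0.3) < 1e-9
assert abs(miss - 0.2) < 1e-9
assert abs(success - 0.8) < 1e-9
```

Note that miss + success = 1 by construction, which is the kind of interrelation the abstract puts these rates into.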
  14. Rousseau, R.; Egghe, L.; Guns, R.: Becoming metric-wise : a bibliometric guide for researchers (2018) 0.00
    
    Abstract
    Aims to inform researchers about metrics so that they become aware of the evaluative techniques being applied to their scientific output. Understanding these concepts will help them during their funding initiatives, and in hiring and tenure. The book not only describes what indicators do (or are designed to do, which is not always the same thing), but also gives precise mathematical formulae so that indicators can be properly understood and evaluated. Metrics have become a critical issue in science, with widespread international discussion taking place on the subject across scientific journals and organizations. As researchers should know the publication-citation context, the mathematical formulae of indicators being used by evaluating committees and their consequences, and how such indicators might be misused, this book provides an ideal tome on the topic. Provides researchers with a detailed understanding of bibliometric indicators and their applications. Empowers researchers looking to understand the indicators relevant to their work and careers. Presents an informed and rounded picture of bibliometrics, including the strengths and shortcomings of particular indicators. Supplies the mathematics behind bibliometric indicators so they can be properly understood. Written by authors with longstanding expertise who are considered global leaders in the field of bibliometrics
  15. Egghe, L.: Influence of adding or deleting items and sources on the h-index (2010) 0.00
    
    Abstract
    Adding or deleting items such as self-citations has an influence on the h-index of an author. This influence will be proved mathematically in this article. We hereby prove the experimental finding in E. Gianoli and M.A. Molina-Montenegro (2009) that the influence of adding or deleting self-citations on the h-index is greater for low values of the h-index. Why this is logical also is shown by a simple theoretical example. Adding or deleting sources such as adding or deleting minor contributions of an author also has an influence on the h-index of this author; this influence is modeled in this article. This model explains some practical examples found in X. Hu, R. Rousseau, and J. Chen (in press).
    Type
    a
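The finding that self-citation deletions matter more at low h can be illustrated with a direct h-index computation (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that at least h publications have >= h citations each."""
    cs = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cs, start=1):
        if c >= rank:
            h = rank
    return h

# Low-h author: deleting a single (self-)citation immediately lowers h.
assert h_index([3, 3, 3]) == 3
assert h_index([2, 3, 3]) == 2   # one citation removed from one paper

# High-h author: the same deletion typically leaves h unchanged.
high = [20, 18, 15, 12, 10, 9, 8, 8, 7, 7]
assert h_index(high) == 8
assert h_index([19, 18, 15, 12, 10, 9, 8, 8, 7, 7]) == 8
```

This is only a numerical illustration of the sensitivity, not the article's mathematical proof.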
  16. Egghe, L.: Informetric explanation of some Leiden Ranking graphs (2014) 0.00
    
    Type
    a