Search (57 results, page 1 of 3)

  • author_ss:"Egghe, L."
  1. Egghe, L.: A universal method of information retrieval evaluation : the "missing" link M and the universal IR surface (2004) 0.04
    0.04020165 = product of:
      0.13400549 = sum of:
        0.010881756 = weight(_text_:information in 2558) [ClassicSimilarity], result of:
          0.010881756 = score(doc=2558,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.20156369 = fieldWeight in 2558, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2558)
        0.026380861 = weight(_text_:retrieval in 2558) [ClassicSimilarity], result of:
          0.026380861 = score(doc=2558,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.2835858 = fieldWeight in 2558, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2558)
        0.096742876 = sum of:
          0.07174301 = weight(_text_:evaluation in 2558) [ClassicSimilarity], result of:
            0.07174301 = score(doc=2558,freq=8.0), product of:
              0.12900078 = queryWeight, product of:
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.030753274 = queryNorm
              0.556144 = fieldWeight in 2558, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.1947007 = idf(docFreq=1811, maxDocs=44218)
                0.046875 = fieldNorm(doc=2558)
          0.024999864 = weight(_text_:22 in 2558) [ClassicSimilarity], result of:
            0.024999864 = score(doc=2558,freq=2.0), product of:
              0.107692726 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.030753274 = queryNorm
              0.23214069 = fieldWeight in 2558, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2558)
      0.3 = coord(3/10)
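
    The trace above is standard Lucene ClassicSimilarity (TF-IDF) explain output: each term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(freq) * idf * fieldNorm, and the sum is scaled by the coordination factor. A minimal sketch reproducing the document score purely from the numbers reported in the trace (the inner grouping of "evaluation" and "22" is flattened here, which does not change the total):

      import math

      # All constants below are copied from the explain trace for doc 2558.
      query_norm = 0.030753274
      field_norm = 0.046875

      def term_score(freq, idf):
          query_weight = idf * query_norm                    # idf * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight

      terms = [                        # (term, freq, idf)
          ("information", 6.0, 1.7554779),
          ("retrieval",   4.0, 3.024915),
          ("evaluation",  8.0, 4.1947007),
          ("22",          2.0, 3.5018296),
      ]

      total = sum(term_score(freq, idf) for _, freq, idf in terms)
      score = total * 0.3              # coord(3/10) from the trace
      print(round(score, 8))           # 0.04020165, matching the reported score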
    
    Abstract
    The paper shows that the present evaluation methods in information retrieval (basically recall R and precision P and in some cases fallout F) lack universal comparability in the sense that their values depend on the generality of the IR problem. A solution is given by using all "parts" of the database, including the non-relevant documents and also the not-retrieved documents. It turns out that the solution is given by introducing the measure M, being the fraction of the not-retrieved documents that are relevant (hence the "miss" measure). We prove that - independent of the IR problem or of the IR action - the quadruple (P,R,F,M) belongs to a universal IR surface, the same for all IR activities. This universality is then exploited by defining a new measure for evaluation in IR allowing for unbiased comparisons of all IR results. We also show that using only one, two or even three measures from the set {P,R,F,M} necessarily leads to evaluation measures that are non-universal and hence not capable of comparing different IR situations.
    Date
    14. 8.2004 19:17:22
    Source
    Information processing and management. 40(2004) no.1, S.21-30
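
    The universal surface is easy to verify numerically: all four measures come from one 2x2 retrieval table, and the product in the theorem collapses to 1 identically. A minimal sketch with invented counts (only the identity itself is from the abstract):

      from fractions import Fraction as F

      # Invented 2x2 table: relevant/non-relevant vs retrieved/not retrieved.
      tp, fp = F(30), F(20)     # retrieved: relevant, non-relevant
      fn, tn = F(10), F(140)    # not retrieved: relevant, non-relevant

      P = tp / (tp + fp)        # precision
      R = tp / (tp + fn)        # recall
      Fo = fp / (fp + tn)       # fallout
      M = fn / (fn + tn)        # miss: relevant fraction of the not-retrieved

      surface = (P / (1 - P)) * ((1 - R) / R) * (Fo / (1 - Fo)) * ((1 - M) / M)
      assert surface == 1       # holds for any counts, not just these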
  2. Egghe, L.; Rousseau, R.; Hooydonk, G. van: Methods for accrediting publications to authors or countries : consequences for evaluation studies (2000) 0.03
    0.02816897 = product of:
      0.09389657 = sum of:
        0.008884916 = weight(_text_:information in 4384) [ClassicSimilarity], result of:
          0.008884916 = score(doc=4384,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.16457605 = fieldWeight in 4384, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4384)
        0.059646662 = weight(_text_:ranking in 4384) [ClassicSimilarity], result of:
          0.059646662 = score(doc=4384,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.35857132 = fieldWeight in 4384, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=4384)
        0.025364986 = product of:
          0.05072997 = sum of:
            0.05072997 = weight(_text_:evaluation in 4384) [ClassicSimilarity], result of:
              0.05072997 = score(doc=4384,freq=4.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.3932532 = fieldWeight in 4384, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4384)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    One aim of science evaluation studies is to determine quantitatively the contribution of different players (authors, departments, countries) to the whole system. This information is then used to study the evolution of the system, for instance to gauge the results of special national or international programs. Taking articles as our basic data, we want to determine the exact relative contribution of each coauthor or each country. These numbers are brought together to obtain country scores, or department scores, etc. It turns out, as we will show in this article, that different scoring methods can yield totally different rankings. Consequently, a ranking between countries, universities, research groups or authors, based on one particular accrediting method, does not contain an absolute truth about their relative importance.
    Source
    Journal of the American Society for Information Science. 51(2000) no.2, S.145-157
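
    The core claim, that the choice of accrediting method can reorder the players, is easy to reproduce. The byline data below are invented, and the three schemes (total, fractional, straight first-author counting) are common ones, not necessarily the exact set studied in the paper:

      from collections import defaultdict

      # Invented byline data: each paper lists contributing countries in author order.
      papers = [
          ["A", "C", "C", "C"],   # A leads two large collaborations with C
          ["A", "C", "C", "C"],
          ["B"],                  # B publishes alone
          ["B"],
          ["C"],
      ]

      total = defaultdict(float)       # total counting: one credit per country per paper
      fractional = defaultdict(float)  # fractional counting: authors share one credit
      first = defaultdict(float)       # straight counting: first author takes all
      for byline in papers:
          for c in set(byline):
              total[c] += 1
          for c in byline:
              fractional[c] += 1 / len(byline)
          first[byline[0]] += 1

      for name, scores in [("total", total), ("fractional", fractional), ("first", first)]:
          ranking = sorted(scores, key=scores.get, reverse=True)
          print(name, dict(scores), ranking)   # three different orderings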
  3. Egghe, L.: Informetric explanation of some Leiden Ranking graphs (2014) 0.02
    0.024169521 = product of:
      0.120847605 = sum of:
        0.00837678 = weight(_text_:information in 1236) [ClassicSimilarity], result of:
          0.00837678 = score(doc=1236,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.1551638 = fieldWeight in 1236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=1236)
        0.11247083 = weight(_text_:ranking in 1236) [ClassicSimilarity], result of:
          0.11247083 = score(doc=1236,freq=4.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.67612857 = fieldWeight in 1236, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0625 = fieldNorm(doc=1236)
      0.2 = coord(2/10)
    
    Abstract
    The S-shaped functional relation between the mean citation score and the proportion of top 10% publications for the 500 Leiden Ranking universities is explained using results of the shifted Lotka function. Also the concave or convex relation between the proportion of top 100θ% publications, for different fractions θ, is explained using the obtained new informetric model.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.4, S.737-741
  4. Egghe, L.: Vector retrieval, fuzzy retrieval and the universal fuzzy IR surface for IR evaluation (2004) 0.02
    0.023295905 = product of:
      0.07765301 = sum of:
        0.010365736 = weight(_text_:information in 2531) [ClassicSimilarity], result of:
          0.010365736 = score(doc=2531,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.1920054 = fieldWeight in 2531, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2531)
        0.037694797 = weight(_text_:retrieval in 2531) [ClassicSimilarity], result of:
          0.037694797 = score(doc=2531,freq=6.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.40520695 = fieldWeight in 2531, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2531)
        0.029592482 = product of:
          0.059184965 = sum of:
            0.059184965 = weight(_text_:evaluation in 2531) [ClassicSimilarity], result of:
              0.059184965 = score(doc=2531,freq=4.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.4587954 = fieldWeight in 2531, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2531)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    It is shown that vector information retrieval (IR) and general fuzzy IR use two types of fuzzy set operations: the original "Zadeh min-max operations" and the so-called "probabilistic sum and algebraic product operations". The universal IR surface, valid for classical 0-1 IR (i.e. where ordinary sets are used) and used in IR evaluation, is extended to and reproved for vector IR, using the probabilistic sum and algebraic product model. We also show, by counterexample, that using the "Zadeh min-max" fuzzy model yields a breakdown of this IR surface.
    Source
    Information processing and management. 40(2004) no.4, S.603-618
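
    The two families of fuzzy set operations named in the abstract are one-liners; a small sketch contrasting them on invented membership degrees:

      # Membership degrees of one document in two fuzzy retrieval sets (invented).
      a, b = 0.7, 0.4

      zadeh_and, zadeh_or = min(a, b), max(a, b)   # Zadeh min-max operations
      prob_and = a * b                             # algebraic product
      prob_or = a + b - a * b                      # probabilistic sum

      print(zadeh_and, zadeh_or)   # 0.4 0.7
      print(prob_and, prob_or)     # 0.28 0.82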
  5. Egghe, L.; Rousseau, R.; Rousseau, S.: TOP-curves (2007) 0.02
    0.015383491 = product of:
      0.076917455 = sum of:
        0.0073296824 = weight(_text_:information in 50) [ClassicSimilarity], result of:
          0.0073296824 = score(doc=50,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.13576832 = fieldWeight in 50, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=50)
        0.069587775 = weight(_text_:ranking in 50) [ClassicSimilarity], result of:
          0.069587775 = score(doc=50,freq=2.0), product of:
            0.16634533 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.030753274 = queryNorm
            0.4183332 = fieldWeight in 50, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0546875 = fieldNorm(doc=50)
      0.2 = coord(2/10)
    
    Abstract
    Several characteristics of classical Lorenz curves make them unsuitable for the study of a group of top performers. TOP-curves, defined as a kind of mirror image of TIP-curves used in poverty studies, are shown to possess the properties necessary for adequate empirical ranking of various data arrays, based on the properties of the highest performers (i.e., the core). TOP-curves and essential TOP-curves, also introduced in this article, simultaneously represent the incidence, intensity, and inequality among the top. It is shown that the TOP-dominance partial order, introduced in this article, is stronger than the Lorenz dominance order. In this way, this article contributes to the study of cores, a central issue in applied informetrics.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.6, S.777-785
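
    As a rough intuition only (a simplification of the paper's construction, which defines TOP-curves via TIP-curves): a TOP-like curve accumulates output shares starting from the strongest performers, the mirror image of the Lorenz construction, which starts from the weakest. A sketch on invented scores:

      # Invented performance scores of a small group of sources.
      scores = sorted([8, 5, 3, 2, 1, 1], reverse=True)   # strongest first
      total = sum(scores)

      # Points (share of top performers, share of total output) of a TOP-like curve.
      cum = 0
      for k, s in enumerate(scores, start=1):
          cum += s
          print(k / len(scores), cum / total)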
  6. Egghe, L.: Existence theorem of the quadruple (P, R, F, M) : precision, recall, fallout and miss (2007) 0.01
    0.014241478 = product of:
      0.047471594 = sum of:
        0.010881756 = weight(_text_:information in 2011) [ClassicSimilarity], result of:
          0.010881756 = score(doc=2011,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.20156369 = fieldWeight in 2011, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2011)
        0.018654086 = weight(_text_:retrieval in 2011) [ClassicSimilarity], result of:
          0.018654086 = score(doc=2011,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.20052543 = fieldWeight in 2011, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2011)
        0.017935753 = product of:
          0.035871506 = sum of:
            0.035871506 = weight(_text_:evaluation in 2011) [ClassicSimilarity], result of:
              0.035871506 = score(doc=2011,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.278072 = fieldWeight in 2011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2011)
          0.5 = coord(1/2)
      0.3 = coord(3/10)
    
    Abstract
    In an earlier paper [Egghe, L. (2004). A universal method of information retrieval evaluation: the "missing" link M and the universal IR surface. Information Processing and Management, 40, 21-30] we showed that, given an IR system, and if P denotes precision, R recall, F fallout and M miss (re-introduced in the paper mentioned above), we have the following relationship between P, R, F and M: P/(1-P) * (1-R)/R * F/(1-F) * (1-M)/M = 1. In this paper we prove the (more difficult) converse: given any four rational numbers in the interval ]0, 1[ satisfying the above equation, there exists an IR system such that these four numbers (in any order) are the precision, recall, fallout and miss of this IR system. As a consequence we show that any three rational numbers in ]0, 1[ can represent any three measures taken from precision, recall, fallout and miss of a certain IR system. We also show that this result holds for two numbers instead of three.
    Source
    Information processing and management. 43(2007) no.1, S.265-272
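
    Given the surface equation, any three of the four measures determine the fourth; a sketch solving for miss M from invented P, R, F values:

      from fractions import Fraction as Fr

      def miss_from(P, R, F):
          """Solve P/(1-P) * (1-R)/R * F/(1-F) * (1-M)/M = 1 for M."""
          k = (P / (1 - P)) * ((1 - R) / R) * (F / (1 - F))
          return k / (1 + k)     # (1-M)/M = 1/k  =>  M = k/(1+k)

      print(miss_from(Fr(3, 5), Fr(3, 4), Fr(1, 8)))   # 1/15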
  7. Egghe, L.; Rousseau, R.: A theoretical study of recall and precision using a topological approach to information retrieval (1998) 0.01
    0.012850649 = product of:
      0.06425324 = sum of:
        0.014509009 = weight(_text_:information in 3267) [ClassicSimilarity], result of:
          0.014509009 = score(doc=3267,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.2687516 = fieldWeight in 3267, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3267)
        0.04974423 = weight(_text_:retrieval in 3267) [ClassicSimilarity], result of:
          0.04974423 = score(doc=3267,freq=8.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.5347345 = fieldWeight in 3267, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3267)
      0.2 = coord(2/10)
    
    Abstract
    Topologies for information retrieval systems are generated by certain subsets, called retrievals. Shows how recall and precision can be expressed using only retrievals. Investigates different types of retrieval systems: both threshold systems and close-match systems, and both optimal and non-optimal retrieval. Highlights the relation with the hypergeometric and some non-standard distributions.
    Source
    Information processing and management. 34(1998) nos.2/3, S.191-218
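
    A threshold system in this sense is simply the family of sets retrieved at each similarity cut-off; a sketch with invented similarity values showing recall and precision computed from such retrievals:

      # Invented similarity scores of four documents to one query.
      sim = {"d1": 0.9, "d2": 0.7, "d3": 0.4, "d4": 0.2}
      relevant = {"d1", "d3"}

      def threshold_retrieval(theta):
          """A threshold system retrieves every document scoring at least theta."""
          return {d for d, s in sim.items() if s >= theta}

      for theta in (0.8, 0.5, 0.3):
          ret = threshold_retrieval(theta)
          print(theta, sorted(ret),
                len(ret & relevant) / len(ret),        # precision
                len(ret & relevant) / len(relevant))   # recall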
  8. Egghe, L.; Rousseau, R.: Topological aspects of information retrieval (1998) 0.01
    0.012271832 = product of:
      0.061359156 = sum of:
        0.012695382 = weight(_text_:information in 2157) [ClassicSimilarity], result of:
          0.012695382 = score(doc=2157,freq=6.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.23515764 = fieldWeight in 2157, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2157)
        0.048663773 = weight(_text_:retrieval in 2157) [ClassicSimilarity], result of:
          0.048663773 = score(doc=2157,freq=10.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.5231199 = fieldWeight in 2157, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2157)
      0.2 = coord(2/10)
    
    Abstract
    Let (DS, QS, sim) be a retrieval system consisting of a document space DS, a query space QS, and a function sim expressing the similarity between a document and a query. Following D.M. Everett and S.C. Cater (1992), we introduce topologies on the document space. These topologies are generated by the similarity function sim and the query space QS. Three topologies will be studied: the retrieval topology, the similarity topology, and the (pseudo-)metric one. It is shown that the retrieval topology is the coarsest of the three, while the (pseudo-)metric is the strongest. These three topologies are generally different, reflecting distinct topological aspects of information retrieval. We present necessary and sufficient conditions for these topological aspects to be equal.
    Source
    Journal of the American Society for Information Science. 49(1998) no.13, S.1144-1160
  9. Egghe, L.: The measures precision, recall, fallout and miss as a function of the number of retrieved documents and their mutual interrelations (2008) 0.01
    0.009840718 = product of:
      0.049203586 = sum of:
        0.005235487 = weight(_text_:information in 2067) [ClassicSimilarity], result of:
          0.005235487 = score(doc=2067,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.09697737 = fieldWeight in 2067, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2067)
        0.0439681 = weight(_text_:retrieval in 2067) [ClassicSimilarity], result of:
          0.0439681 = score(doc=2067,freq=16.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.47264296 = fieldWeight in 2067, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2067)
      0.2 = coord(2/10)
    
    Abstract
    In this paper, for the first time, we present global curves for the measures precision, recall, fallout and miss as a function of the number of retrieved documents. Different curves apply for different retrieval systems, for which we give exact definitions in terms of a retrieval density function: perverse retrieval, perfect retrieval, random retrieval, normal retrieval, hereby extending results of Buckland and Gey and of Egghe in the following sense: mathematically more advanced methods yield a better insight into these curves, more types of retrieval are considered and, very importantly, the theory is developed for the "complete" set of measures: precision, recall, fallout and miss. Next we study the interrelationships between precision, recall, fallout and miss in these different types of retrieval, hereby again extending results of Buckland and Gey (incl. a correction) and of Egghe. In the case of normal retrieval we prove that precision as a function of recall and recall as a function of miss are concavely decreasing relationships, while recall as a function of fallout is a concavely increasing relationship. We also show, by producing examples, that the relationships between fallout and precision, miss and precision, and miss and fallout are not always convex or concave.
    Source
    Information processing and management. 44(2008) no.2, S.856-876
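
    The random-retrieval case is the easiest to simulate: retrieve documents in arbitrary order and track the four measures as the number retrieved grows. A sketch (collection sizes invented):

      import random

      random.seed(0)
      N, rel = 1000, 100                    # collection size, relevant documents
      docs = [1] * rel + [0] * (N - rel)    # 1 marks a relevant document
      random.shuffle(docs)                  # random retrieval: arbitrary ranking

      tp = fp = 0
      for t, d in enumerate(docs, start=1): # the first t documents are retrieved
          tp += d
          fp += 1 - d
          fn, tn = rel - tp, (N - rel) - fp
          if t in (200, 400, 600, 800):
              print(t, tp / t, tp / rel,             # precision, recall
                    fp / (N - rel), fn / (fn + tn))  # fallout, miss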
  10. Egghe, L.; Rousseau, R.: Duality in information retrieval and the hypergeometric distribution (1997) 0.01
    0.009404208 = product of:
      0.04702104 = sum of:
        0.011846555 = weight(_text_:information in 647) [ClassicSimilarity], result of:
          0.011846555 = score(doc=647,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.21943474 = fieldWeight in 647, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=647)
        0.03517448 = weight(_text_:retrieval in 647) [ClassicSimilarity], result of:
          0.03517448 = score(doc=647,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.37811437 = fieldWeight in 647, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=647)
      0.2 = coord(2/10)
    
    Abstract
    Asserts that duality is an important topic in informetrics, especially in connection with the classical informetric laws. Yet this concept is less studied in information retrieval. It deals with the unification or symmetry between queries and documents, search formulation versus indexing, and relevant versus retrieved documents. Elaborates these ideas and highlights the connection with the hypergeometric distribution
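
    The hypergeometric connection can be made concrete: if n documents are drawn at random from a collection of N containing K relevant ones, the number of relevant documents retrieved is hypergeometrically distributed. A sketch with invented sizes:

      from math import comb

      N, K, n = 100, 20, 10   # collection, relevant documents, retrieved (invented)

      def hypergeom_pmf(x):
          """P(exactly x of the n retrieved documents are relevant)."""
          return comb(K, x) * comb(N - K, n - x) / comb(N, n)

      mean = sum(x * hypergeom_pmf(x) for x in range(n + 1))
      print(mean)             # n * K / N = 2.0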
  11. Egghe, L.; Bornmann, L.: Fallout and miss in journal peer review (2013) 0.01
    0.008228681 = product of:
      0.041143406 = sum of:
        0.010365736 = weight(_text_:information in 1759) [ClassicSimilarity], result of:
          0.010365736 = score(doc=1759,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.1920054 = fieldWeight in 1759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1759)
        0.03077767 = weight(_text_:retrieval in 1759) [ClassicSimilarity], result of:
          0.03077767 = score(doc=1759,freq=4.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.33085006 = fieldWeight in 1759, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1759)
      0.2 = coord(2/10)
    
    Abstract
    Purpose - The authors exploit the analogy between journal peer review and information retrieval in order to quantify some imperfections of journal peer review. Design/methodology/approach - The authors define fallout rate and missing rate in order to describe quantitatively the weak papers that were accepted and the strong papers that were missed, respectively. To assess the quality of manuscripts the authors use bibliometric measures. Findings - Fallout rate and missing rate are put in relation with the hitting rate and success rate. Conclusions are drawn on what fraction of weak papers will be accepted in order to have a certain fraction of strong accepted papers. Originality/value - The paper illustrates that these curves are new in peer review research when interpreted in information retrieval terminology.
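
    A possible confusion-matrix reading of the four rates, with invented counts; the paper's exact definitions may differ, so treat this only as the IR analogy sketched in the abstract:

      # Invented submissions, classified by bibliometric quality and decision.
      strong_accepted, strong_rejected = 40, 10
      weak_accepted, weak_rejected = 20, 130

      fallout_rate = weak_accepted / (weak_accepted + weak_rejected)        # weak papers let through
      missing_rate = strong_rejected / (strong_accepted + strong_rejected)  # strong papers missed
      hitting_rate = strong_accepted / (strong_accepted + weak_accepted)    # precision analogue
      success_rate = strong_accepted / (strong_accepted + strong_rejected)  # recall analogue, 1 - missing rate

      print(fallout_rate, missing_rate, hitting_rate, success_rate)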
  12. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.01
    0.0062608393 = product of:
      0.031304196 = sum of:
        0.010470974 = weight(_text_:information in 4992) [ClassicSimilarity], result of:
          0.010470974 = score(doc=4992,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.19395474 = fieldWeight in 4992, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=4992)
        0.02083322 = product of:
          0.04166644 = sum of:
            0.04166644 = weight(_text_:22 in 4992) [ClassicSimilarity], result of:
              0.04166644 = score(doc=4992,freq=2.0), product of:
                0.107692726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030753274 = queryNorm
                0.38690117 = fieldWeight in 4992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4992)
          0.5 = coord(1/2)
      0.2 = coord(2/10)
    
    Date
    14. 2.2012 12:53:22
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.2, S.429
  13. Egghe, L.: Type/Token-Taken informetrics (2003) 0.00
    0.004589834 = product of:
      0.022949168 = sum of:
        0.007404097 = weight(_text_:information in 1608) [ClassicSimilarity], result of:
          0.007404097 = score(doc=1608,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.13714671 = fieldWeight in 1608, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1608)
        0.015545071 = weight(_text_:retrieval in 1608) [ClassicSimilarity], result of:
          0.015545071 = score(doc=1608,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.16710453 = fieldWeight in 1608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1608)
      0.2 = coord(2/10)
    
    Abstract
    Type/Token-Taken informetrics is a new part of informetrics that studies the use of items rather than the items themselves. Here, items are the objects that are produced by the sources (e.g., journals producing articles, authors producing papers, etc.). In linguistics a source is also called a type (e.g., a word), and an item a token (e.g., the use of words in texts). In informetrics, types that occur often, for example, in a database will also be requested often, for example, in information retrieval. The relative use of these occurrences will be higher than their relative occurrences themselves; hence the name Type/Token-Taken informetrics. This article studies the frequency distribution of Type/Token-Taken informetrics, starting from the one of Type/Token informetrics (i.e., source-item relationships). We also study the average number μ* of item uses in Type/Token-Taken informetrics and compare this with the classical average number μ in Type/Token informetrics. We show that μ* >= μ always, and that μ* is an increasing function of μ. A method is presented to actually calculate μ* from μ and a given α, the exponent in Lotka's frequency distribution of Type/Token informetrics. We leave open the problem of developing non-Lotkaian Type/Token-Taken informetrics.
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.7, S.603-610
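
    The inequality μ* >= μ is a size-biasing effect: when items are requested in proportion to how often they occur, the use-weighted average exceeds the plain average. A sketch under a truncated Lotka distribution, assuming (as a simplification of the paper's model) that an item's use is proportional to its occurrence:

      # Truncated Lotka distribution: P(source has k items) proportional to k**(-alpha).
      alpha, kmax = 2.5, 1000          # invented parameters
      weights = [k ** -alpha for k in range(1, kmax + 1)]
      Z = sum(weights)
      p = [w / Z for w in weights]

      mu = sum(k * p[k - 1] for k in range(1, kmax + 1))                # plain mean
      mu_star = sum(k * k * p[k - 1] for k in range(1, kmax + 1)) / mu  # use-weighted mean

      print(mu, mu_star)
      assert mu_star >= mu             # guaranteed in general by Cauchy-Schwarz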
  14. Egghe, L.: Untangling Herdan's law and Heaps' law : mathematical and informetric arguments (2007) 0.00
    0.004589834 = product of:
      0.022949168 = sum of:
        0.007404097 = weight(_text_:information in 271) [ClassicSimilarity], result of:
          0.007404097 = score(doc=271,freq=4.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.13714671 = fieldWeight in 271, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=271)
        0.015545071 = weight(_text_:retrieval in 271) [ClassicSimilarity], result of:
          0.015545071 = score(doc=271,freq=2.0), product of:
            0.093026035 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030753274 = queryNorm
            0.16710453 = fieldWeight in 271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=271)
      0.2 = coord(2/10)
    
    Abstract
    Herdan's law in linguistics and Heaps' law in information retrieval are different formulations of the same phenomenon. Stated briefly and in linguistic terms, they state that vocabularies' sizes are concave increasing power laws of texts' sizes. This study investigates these laws from a purely mathematical and informetric point of view. A general informetric argument shows that the problem of proving these laws is, in fact, ill-posed. Using the more general terminology of sources and items, the author shows by presenting exact formulas from Lotkaian informetrics that the total number T of sources is not only a function of the total number A of items, but is also a function of several parameters (e.g., the parameters occurring in Lotka's law). Consequently, it is shown that a fixed T (or A) value can lead to different possible A (respectively, T) values. Limiting the T(A)-variability to increasing samples (e.g., in a text, as done in linguistics), the author then shows, in a purely mathematical way, that for large sample sizes T ~ A^φ, where φ is a constant with φ < 1 but close to 1; hence, roughly, Heaps' or Herdan's law can be proved without using any linguistic or informetric argument. The author also shows that for smaller samples, φ is not a constant but essentially decreases, as confirmed by practical examples. Finally, an exact informetric argument on random sampling in the items shows that, in most cases, T = T(A) is a concavely increasing function, in accordance with practical examples.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.5, S.702-709
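
    The power-law growth is easy to observe in simulation: draw tokens from a Zipf-like source and compare the growth of the number of types T against the number of tokens A. A sketch (all parameters invented):

      import math
      import random

      random.seed(1)
      V = 50_000                                  # potential vocabulary size
      weights = [1 / r for r in range(1, V + 1)]  # Zipf-like token probabilities

      sample = random.choices(range(V), weights=weights, k=200_000)
      seen, points = set(), []
      for A, token in enumerate(sample, start=1):
          seen.add(token)                         # T = number of distinct types so far
          if A in (10_000, 200_000):
              points.append((A, len(seen)))

      (A1, T1), (A2, T2) = points
      phi = (math.log(T2) - math.log(T1)) / (math.log(A2) - math.log(A1))
      print(phi)    # an exponent below 1, as T ~ A^φ predicts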
  15. Egghe, L.; Rousseau, R.: A measure for the cohesion of weighted networks (2003) 0.00
    0.0040363893 = product of:
      0.020181946 = sum of:
        0.005235487 = weight(_text_:information in 5157) [ClassicSimilarity], result of:
          0.005235487 = score(doc=5157,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.09697737 = fieldWeight in 5157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5157)
        0.01494646 = product of:
          0.02989292 = sum of:
            0.02989292 = weight(_text_:evaluation in 5157) [ClassicSimilarity], result of:
              0.02989292 = score(doc=5157,freq=2.0), product of:
                0.12900078 = queryWeight, product of:
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.030753274 = queryNorm
                0.23172665 = fieldWeight in 5157, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1947007 = idf(docFreq=1811, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5157)
          0.5 = coord(1/2)
      0.2 = coord(2/10)
    
    Abstract
    Measurement of the degree of interconnectedness in graph-like networks of hyperlinks or citations can indicate the existence of research fields and assist in comparative evaluation of research efforts. In this issue we begin with Egghe and Rousseau, who review compactness measures and investigate the compactness of a network as a weighted graph with dissimilarity values characterizing the arcs between nodes. They make use of a generalization of the Botafogo, Rivlin, Shneiderman (BRS) compactness measure, which treats the distance between unreachable nodes not as infinity but rather as the number of nodes in the network. The dissimilarity values are determined by summing the reciprocals of the weights of the arcs in the shortest chain between two nodes, where no weight is smaller than one. The BRS measure is then the maximum value for the sum of the dissimilarity measures, less the actual sum, divided by the difference between the maximum and minimum. The Wiener index, the sum of all elements in the dissimilarity matrix divided by two, is then computed for Small's particle physics co-citation data, as well as the BRS measure, the dissimilarity values and shortest paths. The compactness measure for the weighted network is smaller than for the un-weighted one. When the bibliographic coupling network is utilized, it is shown to be less compact than the co-citation network, which indicates that the new measure produces results that conform to an obvious case.
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.3, S.193-202
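
    A sketch of the generalized BRS compactness on a tiny invented network, following the description above (arc dissimilarity = reciprocal of the arc weight, unreachable pairs set to the number of nodes); the normalization bounds follow the unweighted BRS convention and are an assumption here:

      INF = float("inf")
      n = 4
      # Invented undirected arcs (i, j, weight), all weights >= 1; node 3 stays isolated.
      arcs = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 1.0)]

      d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
      for i, j, w in arcs:
          d[i][j] = d[j][i] = min(d[i][j], 1.0 / w)    # arc dissimilarity

      for k in range(n):                               # Floyd-Warshall shortest chains
          for i in range(n):
              for j in range(n):
                  d[i][j] = min(d[i][j], d[i][k] + d[k][j])

      for i in range(n):                               # BRS: unreachable pairs count as n
          for j in range(n):
              if d[i][j] == INF:
                  d[i][j] = n

      S = sum(d[i][j] for i in range(n) for j in range(n) if i != j)
      wiener = S / 2                                   # Wiener index: half the matrix sum
      s_max = n * (n - 1) * n                          # all pairs unreachable
      s_min = n * (n - 1) * 1.0                        # all pairs directly linked at weight 1
      print(wiener, (s_max - S) / (s_max - s_min))     # Wiener index and compactness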
  16. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.00
    0.0037565033 = product of:
      0.018782517 = sum of:
        0.0062825847 = weight(_text_:information in 7659) [ClassicSimilarity], result of:
          0.0062825847 = score(doc=7659,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.116372846 = fieldWeight in 7659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=7659)
        0.012499932 = product of:
          0.024999864 = sum of:
            0.024999864 = weight(_text_:22 in 7659) [ClassicSimilarity], result of:
              0.024999864 = score(doc=7659,freq=2.0), product of:
                0.107692726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030753274 = queryNorm
                0.23214069 = fieldWeight in 7659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7659)
          0.5 = coord(1/2)
      0.2 = coord(2/10)
    
    Source
    Journal of information science. 22(1996) no.3, S.165-170
  17. Egghe, L.; Rousseau, R.: Introduction to informetrics : quantitative methods in library, documentation and information science (1990) 0.00
    0.0016389668 = product of:
      0.016389668 = sum of:
        0.016389668 = weight(_text_:information in 1515) [ClassicSimilarity], result of:
          0.016389668 = score(doc=1515,freq=10.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.3035872 = fieldWeight in 1515, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1515)
      0.1 = coord(1/10)
    
    COMPASS
    Information science / Statistical mathematics
    LCSH
    Information science / Statistical methods
    Subject
    Information science / Statistical mathematics
    Information science / Statistical methods
  18. Egghe, L.: Expansion of the field of informetrics : the second special issue (2006) 0.00
    0.001256517 = product of:
      0.0125651695 = sum of:
        0.0125651695 = weight(_text_:information in 7119) [ClassicSimilarity], result of:
          0.0125651695 = score(doc=7119,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.23274569 = fieldWeight in 7119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=7119)
      0.1 = coord(1/10)
    
    Source
    Information processing and management. 42(2006) no.6, S.1405-1407
  19. Egghe, L.: Expansion of the field of informetrics : origins and consequences (2005) 0.00
    0.001256517 = product of:
      0.0125651695 = sum of:
        0.0125651695 = weight(_text_:information in 1910) [ClassicSimilarity], result of:
          0.0125651695 = score(doc=1910,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.23274569 = fieldWeight in 1910, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=1910)
      0.1 = coord(1/10)
    
    Source
    Information processing and management. 41(2005) no.6, S.1311-1316
  20. Egghe, L.: Special features of the author - publication relationship and a new explanation of Lotka's law based on convolution theory (1994) 0.00
    0.001256517 = product of:
      0.0125651695 = sum of:
        0.0125651695 = weight(_text_:information in 5068) [ClassicSimilarity], result of:
          0.0125651695 = score(doc=5068,freq=2.0), product of:
            0.05398669 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030753274 = queryNorm
            0.23274569 = fieldWeight in 5068, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=5068)
      0.1 = coord(1/10)
    
    Source
    Journal of the American Society for Information Science. 45(1994) no.6, S.422-427