Search (6 results, page 1 of 1)

  • author_ss:"Egghe, L."
  1. Egghe, L.: Properties of the n-overlap vector and n-overlap similarity theory (2006) 0.01
    0.006668387 = product of:
      0.040010322 = sum of:
        0.040010322 = product of:
          0.080020644 = sum of:
            0.080020644 = weight(_text_:etc in 194) [ClassicSimilarity], result of:
              0.080020644 = score(doc=194,freq=4.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.4231634 = fieldWeight in 194, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=194)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
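    The nested breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation for the term "etc" in document 194. As a minimal sketch of how the reported numbers combine (assuming the standard ClassicSimilarity formulas tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm; variable names are illustrative):

      import math

      # Figures copied from the explanation tree above (doc 194, query term "etc")
      freq = 4.0                  # termFreq of "etc" in the field
      doc_freq, max_docs = 533, 44218
      query_norm = 0.03491209
      field_norm = 0.0390625

      idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # ~5.4164915
      tf = math.sqrt(freq)                               # 2.0
      query_weight = idf * query_norm                    # ~0.18910104
      field_weight = tf * idf * field_norm               # ~0.4231634
      weight = query_weight * field_weight               # ~0.080020644

      # coord factors reported above: 1 of 2 and 1 of 6 clauses matched
      score = weight * (1.0 / 2.0) * (1.0 / 6.0)         # ~0.006668387
      print(score)

    The same arithmetic, with different freq, fieldNorm, and idf values, accounts for the scores of the remaining results below.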
    
    Abstract
    In the first part of this article the author defines the n-overlap vector whose coordinates consist of the fraction of the objects (e.g., books, N-grams, etc.) that belong to 1, 2, ..., n sets (more generally: families) (e.g., libraries, databases, etc.). With the aid of the Lorenz concentration theory, a theory of n-overlap similarity is conceived together with corresponding measures, such as the generalized Jaccard index (generalizing the well-known Jaccard index in the case n = 2). Next, the distributional form of the n-overlap vector is determined assuming certain distributions of the objects' and of the set (family) sizes. In this section the decreasing power law and the decreasing exponential distribution are explained for the n-overlap vector. Both item (token) n-overlap and source (type) n-overlap are studied. The n-overlap properties of objects indexed by a hierarchical system (e.g., books indexed by numbers from a UDC or Dewey system or by N-grams) are presented in the final section. The author shows how the results given in the previous section can be applied, as well as how the Lorenz order of the n-overlap vector is respected by an increase or a decrease of the level of refinement in the hierarchical system (e.g., the value N in N-grams).
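    The abstract gives enough to sketch the n-overlap vector as literally described (the fraction of objects in the union that belong to exactly k of the n sets) together with the classical Jaccard index it generalizes for n = 2; the paper's generalized Jaccard index itself is not spelled out here, so the following is only an illustration of the ingredients:

      from itertools import chain

      def n_overlap_vector(sets):
          """Coordinate k-1: fraction of objects (in the union) belonging to exactly k of the n sets."""
          union = set(chain.from_iterable(sets))
          counts = [sum(obj in s for s in sets) for obj in union]
          n = len(sets)
          return [sum(c == k for c in counts) / len(union) for k in range(1, n + 1)]

      def jaccard(a, b):
          """Classical Jaccard index |A & B| / |A | B|, the n = 2 case."""
          a, b = set(a), set(b)
          return len(a & b) / len(a | b)

      # Three toy "libraries" holding "books"
      libraries = [{"b1", "b2", "b3"}, {"b2", "b3", "b4"}, {"b3", "b5"}]
      print(n_overlap_vector(libraries))          # [0.6, 0.2, 0.2]
      print(jaccard(libraries[0], libraries[1]))  # 0.5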
  2. Egghe, L.; Rousseau, R.; Hooydonk, G. van: Methods for accrediting publications to authors or countries : consequences for evaluation studies (2000) 0.01
    0.0056583136 = product of:
      0.03394988 = sum of:
        0.03394988 = product of:
          0.06789976 = sum of:
            0.06789976 = weight(_text_:etc in 4384) [ClassicSimilarity], result of:
              0.06789976 = score(doc=4384,freq=2.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.35906604 = fieldWeight in 4384, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4384)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    One aim of science evaluation studies is to determine quantitatively the contribution of different players (authors, departments, countries) to the whole system. This information is then used to study the evolution of the system, for instance to gauge the results of special national or international programs. Taking articles as our basic data, we want to determine the exact relative contribution of each coauthor or each country. These numbers are brought together to obtain country scores, department scores, etc. It turns out, as we will show in this article, that different scoring methods can yield totally different rankings. Consequently, a ranking between countries, universities, research groups or authors, based on one particular accrediting method, does not contain an absolute truth about their relative importance.
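    To illustrate the abstract's central claim that different accrediting methods can yield different rankings, here is a small sketch contrasting two standard counting schemes (total counting vs. fractional counting; the paper's own set of methods is broader) on invented toy data:

      from collections import defaultdict

      # Toy data: each entry is the author list of one article (invented for illustration)
      articles = [
          ["A", "B", "C", "D"],
          ["A", "B", "C", "D"],
          ["A", "B", "C", "D"],
          ["E"],
          ["E"],
      ]

      def total_counting(arts):
          """Every coauthor receives a full credit of 1 per article."""
          scores = defaultdict(float)
          for authors in arts:
              for a in authors:
                  scores[a] += 1.0
          return dict(scores)

      def fractional_counting(arts):
          """Every coauthor receives 1/m credit on an article with m authors."""
          scores = defaultdict(float)
          for authors in arts:
              for a in authors:
                  scores[a] += 1.0 / len(authors)
          return dict(scores)

      print(total_counting(articles))        # A-D: 3.0 each, E: 2.0 -> E ranks last
      print(fractional_counting(articles))   # A-D: 0.75 each, E: 2.0 -> E ranks first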
  3. Egghe, L.: Type/Token-Taken informetrics (2003) 0.00
    0.004715261 = product of:
      0.028291566 = sum of:
        0.028291566 = product of:
          0.056583133 = sum of:
            0.056583133 = weight(_text_:etc in 1608) [ClassicSimilarity], result of:
              0.056583133 = score(doc=1608,freq=2.0), product of:
                0.18910104 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.03491209 = queryNorm
                0.2992217 = fieldWeight in 1608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1608)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Type/Token-Taken informetrics is a new part of informetrics that studies the use of items rather than the items themselves. Here, items are the objects that are produced by the sources (e.g., journals producing articles, authors producing papers, etc.). In linguistics a source is also called a type (e.g., a word), and an item a token (e.g., the use of words in texts). In informetrics, types that occur often (e.g., in a database) will also be requested often (e.g., in information retrieval). The relative use of these occurrences will be higher than their relative occurrences themselves; hence the name Type/Token-Taken informetrics. This article studies the frequency distribution of Type/Token-Taken informetrics, starting from that of Type/Token informetrics (i.e., source-item relationships). We also study the average number μ* of item uses in Type/Token-Taken informetrics and compare it with the classical average number μ in Type/Token informetrics. We show that μ* ≥ μ always, and that μ* is an increasing function of μ. A method is presented to actually calculate μ* from μ and a given α, the exponent in Lotka's frequency distribution of Type/Token informetrics. We leave open the problem of developing non-Lotkaian Type/Token-Taken informetrics.
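    One way to make the μ* ≥ μ statement concrete is a numerical sketch under a truncated Lotka law f(j) ∝ j^(-α), reading μ as the ordinary average number of items per source and μ* as the use-weighted (size-biased) average; this reading is an assumption for illustration, not the paper's own derivation:

      def lotka_means(alpha, j_max=1000):
          """Averages under a truncated Lotka law f(j) ~ j**(-alpha), j = 1..j_max."""
          f = [j ** (-alpha) for j in range(1, j_max + 1)]
          s0 = sum(f)                                        # number of sources (up to a constant)
          s1 = sum(j * fj for j, fj in enumerate(f, 1))      # number of items
          s2 = sum(j * j * fj for j, fj in enumerate(f, 1))  # items weighted by their source size
          mu = s1 / s0               # Type/Token average: items per source
          mu_star = s2 / s1          # assumed Type/Token-Taken average: uses per item
          return mu, mu_star

      for alpha in (3.5, 3.0, 2.5):
          mu, mu_star = lotka_means(alpha)
          print(f"alpha={alpha}: mu={mu:.3f}  mu*={mu_star:.3f}")   # mu* >= mu throughout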
  4. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.00
    0.0039417557 = product of:
      0.023650533 = sum of:
        0.023650533 = product of:
          0.047301065 = sum of:
            0.047301065 = weight(_text_:22 in 4992) [ClassicSimilarity], result of:
              0.047301065 = score(doc=4992,freq=2.0), product of:
                0.1222562 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03491209 = queryNorm
                0.38690117 = fieldWeight in 4992, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4992)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    14. 2.2012 12:53:22
  5. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.00
    0.0023650532 = product of:
      0.014190319 = sum of:
        0.014190319 = product of:
          0.028380638 = sum of:
            0.028380638 = weight(_text_:22 in 7659) [ClassicSimilarity], result of:
              0.028380638 = score(doc=7659,freq=2.0), product of:
                0.1222562 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03491209 = queryNorm
                0.23214069 = fieldWeight in 7659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7659)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Journal of information science. 22(1996) no.3, S.165-170
  6. Egghe, L.: A universal method of information retrieval evaluation : the "missing" link M and the universal IR surface (2004) 0.00
    0.0023650532 = product of:
      0.014190319 = sum of:
        0.014190319 = product of:
          0.028380638 = sum of:
            0.028380638 = weight(_text_:22 in 2558) [ClassicSimilarity], result of:
              0.028380638 = score(doc=2558,freq=2.0), product of:
                0.1222562 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03491209 = queryNorm
                0.23214069 = fieldWeight in 2558, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2558)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    14. 8.2004 19:17:22