Search (6 results, page 1 of 1)

  • author_ss:"Egghe, L."
  • language_ss:"e"
  1. Egghe, L.; Rousseau, R.: A measure for the cohesion of weighted networks (2003) 0.04
    
    Abstract
    Measurement of the degree of interconnectedness in graph-like networks of hyperlinks or citations can indicate the existence of research fields and assist in the comparative evaluation of research efforts. In this issue we begin with Egghe and Rousseau, who review compactness measures and investigate the compactness of a network as a weighted graph with dissimilarity values characterizing the arcs between nodes. They make use of a generalization of the Botafogo, Rivlin, Shneiderman (BRS) compaction measure, which treats the distance between unreachable nodes not as infinity but as the number of nodes in the network. The dissimilarity values are determined by summing the reciprocals of the weights of the arcs in the shortest chain between two nodes, where no weight is smaller than one. The BRS measure is then the maximum value for the sum of the dissimilarity measures less the actual sum, divided by the difference between the maximum and the minimum. The Wiener index, the sum of all elements in the dissimilarity matrix divided by two, is then computed for Small's particle physics co-citation data, as well as the BRS measure, the dissimilarity values and the shortest paths. The compactness measure for the weighted network is smaller than for the unweighted one. When the bibliographic coupling network is used, it is shown to be less compact than the co-citation network, which indicates that the new measure produces results that conform to an obvious case.
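The computation the abstract describes can be sketched in a few lines. This is an illustrative reading, not the paper's code: the function name, the dict-of-arcs input format, and the Min = n²−n / Max = (n²−n)·n conventions carried over from the original BRS measure are assumptions.

```python
from itertools import product

def brs_compactness(weights):
    """Compactness of a weighted network in the spirit of the generalized
    BRS (Botafogo-Rivlin-Shneiderman) measure (sketch).

    `weights` maps directed arcs (i, j) to weights w >= 1. An arc contributes
    dissimilarity 1/w; a node pair's dissimilarity is the smallest sum of
    reciprocals over any chain between them, and an unreachable pair counts
    as n (the number of nodes), not infinity.
    """
    nodes = sorted({u for arc in weights for u in arc})
    n = len(nodes)
    INF = float("inf")
    d = {(i, j): (0.0 if i == j else INF) for i in nodes for j in nodes}
    for (i, j), w in weights.items():
        d[i, j] = min(d[i, j], 1.0 / w)
    for k, i, j in product(nodes, repeat=3):   # Floyd-Warshall, k outermost
        if d[i, k] + d[k, j] < d[i, j]:
            d[i, j] = d[i, k] + d[k, j]
    actual = sum(n if d[i, j] == INF else d[i, j]
                 for i in nodes for j in nodes if i != j)
    d_max = (n * n - n) * n   # every pair unreachable
    d_min = n * n - n         # every pair at dissimilarity 1
    return (d_max - actual) / (d_max - d_min)

# A fully connected 2-node network is maximally compact:
print(brs_compactness({(0, 1): 1, (1, 0): 1}))   # 1.0
# Dropping one arc leaves one pair unreachable:
print(brs_compactness({(0, 1): 1}))              # 0.5
```

Under the same dissimilarity matrix, the Wiener index mentioned in the abstract would simply be `actual / 2`.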
  2. Egghe, L.: Properties of the n-overlap vector and n-overlap similarity theory (2006) 0.02
    
    Abstract
    In the first part of this article the author defines the n-overlap vector, whose coordinates consist of the fraction of the objects (e.g., books, N-grams, etc.) that belong to 1, 2, ..., n sets (more generally: families) (e.g., libraries, databases, etc.). With the aid of the Lorenz concentration theory, a theory of n-overlap similarity is conceived together with corresponding measures, such as the generalized Jaccard index (generalizing the well-known Jaccard index in the case n = 2). Next, the distributional form of the n-overlap vector is determined, assuming certain distributions of the object and of the set (family) sizes. In this section the decreasing power law and the decreasing exponential distribution are explained for the n-overlap vector. Both item (token) n-overlap and source (type) n-overlap are studied. The n-overlap properties of objects indexed by a hierarchical system (e.g., books indexed by numbers from a UDC or Dewey system, or by N-grams) are presented in the final section. The author shows how the results given in the previous section can be applied, as well as how the Lorenz order of the n-overlap vector is respected by an increase or a decrease of the level of refinement in the hierarchical system (e.g., the value N in N-grams).
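The n-overlap vector as defined in the first sentence can be sketched directly; the Lorenz-based generalized Jaccard index itself is not reproduced here, only its well-known n = 2 anchor. Function names and the set-of-strings input are illustrative assumptions.

```python
from collections import Counter

def n_overlap_vector(sets):
    """Coordinate k (k = 1..n) is the fraction of objects in the union
    that belong to exactly k of the n sets, per the abstract's definition."""
    n = len(sets)
    counts = Counter(x for s in sets for x in set(s))
    total = len(counts)   # number of distinct objects in the union
    return [sum(1 for c in counts.values() if c == k) / total
            for k in range(1, n + 1)]

def jaccard(a, b):
    """Classical Jaccard index: the n = 2 case the generalized index extends."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

libs = [{"b1", "b2", "b3"}, {"b2", "b3", "b4"}]
print(n_overlap_vector(libs))   # [0.5, 0.5]: half the books are in 1 library, half in 2
print(jaccard(*libs))           # 0.5
```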
  3. Egghe, L.; Guns, R.; Rousseau, R.; Leuven, K.U.: Erratum (2012) 0.02
    
    Date
    14. 2.2012 12:53:22
  4. Egghe, L.: Type/Token-Taken informetrics (2003) 0.02
    
    Abstract
    Type/Token-Taken informetrics is a new part of informetrics that studies the use of items rather than the items themselves. Here, items are the objects that are produced by the sources (e.g., journals producing articles, authors producing papers, etc.). In linguistics a source is also called a type (e.g., a word), and an item a token (e.g., the use of a word in a text). In informetrics, types that occur often, for example in a database, will also be requested often, for example in information retrieval. The relative use of these occurrences will be higher than the relative occurrences themselves; hence the name Type/Token-Taken informetrics. This article studies the frequency distribution of Type/Token-Taken informetrics, starting from that of Type/Token informetrics (i.e., source-item relationships). We also study the average number μ* of item uses in Type/Token-Taken informetrics and compare it with the classical average number μ in Type/Token informetrics. We show that μ* >= μ always, and that μ* is an increasing function of μ. A method is presented to actually calculate μ* from μ and a given α, the exponent in Lotka's frequency distribution of Type/Token informetrics. We leave open the problem of developing non-Lotkaian Type/Token-Taken informetrics.
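The relation μ* >= μ can be illustrated numerically. Caveat: here μ* is taken as the size-biased (use-weighted) mean Σ j²f(j) / Σ jf(j), a plausible reading of "average number of item uses" under a truncated Lotka law; it is not necessarily Egghe's exact formula, and the truncation at `jmax` is an assumption.

```python
def lotka_means(alpha, jmax=1000):
    """Ordinary mean mu and use-weighted mean mu_star for a Lotka law
    f(j) ~ 1/j**alpha truncated at jmax. mu is items per source; mu_star
    weights each production level j by the j uses it attracts, a discrete
    stand-in for the Type/Token-Taken average described above."""
    js = range(1, jmax + 1)
    f = [j ** -alpha for j in js]
    sources = sum(f)
    items = sum(j * fj for j, fj in zip(js, f))
    uses = sum(j * j * fj for j, fj in zip(js, f))
    return items / sources, uses / items

mu, mu_star = lotka_means(2.5)
print(mu < mu_star)   # True: heavy users pull the use-weighted average up
```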
  5. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.01
    
    Source
    Journal of information science. 22(1996) no.3, S.165-170
  6. Egghe, L.: A universal method of information retrieval evaluation : the "missing" link M and the universal IR surface (2004) 0.01
    
    Date
    14. 8.2004 19:17:22