Search (2 results, page 1 of 1)

  • author_ss:"Efthimiadis, E.N."
  1. Efthimiadis, E.N.: End-users' understanding of thesaural knowledge structures in interactive query expansion (1994) 0.03
    0.033196237 = product of:
      0.08299059 = sum of:
        0.05885388 = weight(_text_:study in 5693) [ClassicSimilarity], result of:
          0.05885388 = score(doc=5693,freq=4.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.4064256 = fieldWeight in 5693, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0625 = fieldNorm(doc=5693)
        0.02413671 = product of:
          0.04827342 = sum of:
            0.04827342 = weight(_text_:22 in 5693) [ClassicSimilarity], result of:
              0.04827342 = score(doc=5693,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.30952093 = fieldWeight in 5693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5693)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
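    The explain tree above is standard Lucene ClassicSimilarity (tf-idf) output: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf(freq) * idf * fieldNorm with tf(freq) = sqrt(termFreq), and the coord factors scale the sum by the fraction of query clauses that matched. A minimal sketch that reproduces the arithmetic from the constants shown above (function and variable names are illustrative, not Lucene API identifiers):

    import math

    def classic_term_score(freq, idf, query_norm, field_norm):
        """tf-idf contribution of one query term, ClassicSimilarity style."""
        tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
        query_weight = idf * query_norm       # queryWeight = idf * queryNorm
        field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
        return query_weight * field_weight

    query_norm = 0.044537213
    field_norm = 0.0625

    # weight(_text_:study in 5693): freq=4.0, idf=3.2514048
    study = classic_term_score(4.0, 3.2514048, query_norm, field_norm)
    # weight(_text_:22 in 5693): freq=2.0, idf=3.5018296, scaled by coord(1/2)
    date = classic_term_score(2.0, 3.5018296, query_norm, field_norm) * 0.5
    # outer coord(2/5): two of five query clauses matched
    print((study + date) * 0.4)               # ~0.033196237, displayed above as 0.03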
    
    Abstract
    The process of term selection for query expansion by end-users is discussed within the context of a study of interactive query expansion in a relevance feedback environment. This user study focuses on how users perceive and understand term relationships, such as hierarchical and associative relationships, in their searches.
    Date
    30. 3.2001 13:35:22
  2. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.02
    0.016438173 = product of:
      0.041095432 = sum of:
        0.026009986 = weight(_text_:study in 5697) [ClassicSimilarity], result of:
          0.026009986 = score(doc=5697,freq=2.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.17961644 = fieldWeight in 5697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5697)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
              0.03017089 = score(doc=5697,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 5697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5697)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The performance of 8 ranking algorithms was evaluated with respect to their effectiveness in ranking terms for query expansion. The evaluation was conducted within an investigation of interactive query expansion and relevance feedback in a real operational environment, focusing on the identification of algorithms that most effectively take cognizance of user preferences. User choices (i.e. the terms selected by the searchers for the query expansion search) provided the yardstick for the evaluation of the 8 ranking algorithms. This methodology introduces a user-oriented approach to evaluating ranking algorithms for query expansion, in contrast to the standard, system-oriented approaches. Similarities in the performance of the 8 algorithms and the ways these algorithms rank terms were the main focus of this evaluation. The findings demonstrate that the r-lohi, wpq, emim, and porter algorithms perform similarly in bringing good terms to the top of a ranked list of terms for query expansion. However, further evaluation of the algorithms in different (e.g. full-text) environments is needed before these results can be generalized beyond the context of the present study.
    Date
    22. 2.1996 13:14:10