Search (6 results, page 1 of 1)

  • author_ss:"Efthimiadis, E.N."
  1. Efthimiadis, E.N.: End-users' understanding of thesaural knowledge structures in interactive query expansion (1994) 0.13
    0.12663198 = product of:
      0.18994795 = sum of:
        0.16320182 = weight(_text_:query in 5693) [ClassicSimilarity], result of:
          0.16320182 = score(doc=5693,freq=6.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.71152055 = fieldWeight in 5693, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0625 = fieldNorm(doc=5693)
        0.026746122 = product of:
          0.053492244 = sum of:
            0.053492244 = weight(_text_:22 in 5693) [ClassicSimilarity], result of:
              0.053492244 = score(doc=5693,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.30952093 = fieldWeight in 5693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5693)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The process of term selection for query expansion by end-users is discussed within the context of a study of interactive query expansion in a relevance feedback environment. This user study focuses on how users perceive and understand term relationships, such as hierarchical and associative relationships, in their searches.
    Date
    30. 3.2001 13:35:22
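    The score breakdown attached to each result is a Lucene ClassicSimilarity (TF-IDF) explanation. As a rough guide to how the numbers combine, the sketch below reproduces the arithmetic of the first breakdown in Python, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))); the literal constants are copied from the explanation for doc 5693, and the helper names are illustrative, not part of the retrieval system.

      import math

      def idf(doc_freq: int, max_docs: int) -> float:
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def tf(freq: float) -> float:
          # ClassicSimilarity tf: square root of the term frequency
          return math.sqrt(freq)

      query_norm = 0.049352113            # queryNorm from the explanation above
      field_norm = 0.0625                 # fieldNorm stored for doc 5693
      idf_query = idf(1151, 44218)        # ~4.6476326 for the term "query"

      query_weight = idf_query * query_norm              # ~0.22937049
      field_weight = tf(6.0) * idf_query * field_norm    # ~0.71152055
      clause_query = query_weight * field_weight         # ~0.16320182

      # The "22" clause contributes 0.053492244 * coord(1/2) = 0.026746122,
      # and the final score is scaled by coord(2/3) because 2 of the 3
      # query clauses matched this document.
      score = (clause_query + 0.026746122) * (2.0 / 3.0)  # ~0.12663198
      print(score)

    The same pattern applies to the breakdowns of the other five results; only freq, docFreq, fieldNorm, and the coord factors change.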
  2. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.11
    0.10731181 = product of:
      0.16096771 = sum of:
        0.14425138 = weight(_text_:query in 5697) [ClassicSimilarity], result of:
          0.14425138 = score(doc=5697,freq=12.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.6289012 = fieldWeight in 5697, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5697)
        0.016716326 = product of:
          0.03343265 = sum of:
            0.03343265 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
              0.03343265 = score(doc=5697,freq=2.0), product of:
                0.1728227 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049352113 = queryNorm
                0.19345059 = fieldWeight in 5697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5697)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The performance of 8 ranking algorithms was evaluated with respect to their effectiveness in ranking terms for query expansion. The evaluation was conducted within an investigation of interactive query expansion and relevance feedback in a real operational environment. It focuses on the identification of algorithms that most effectively take cognizance of user preferences. User choices (i.e. the terms selected by the searchers for the query expansion search) provided the yardstick for the evaluation of the 8 ranking algorithms. This methodology introduces a user-oriented approach to evaluating ranking algorithms for query expansion, in contrast to the standard, system-oriented approaches. Similarities in the performance of the 8 algorithms and the ways these algorithms rank terms were the main focus of this evaluation. The findings demonstrate that the r-lohi, wpq, emim, and porter algorithms have similar performance in bringing good terms to the top of a ranked list of terms for query expansion. However, further evaluation of the algorithms in different (e.g. full-text) environments is needed before these results can be generalized beyond the context of the present study. (A minimal sketch of this yardstick approach follows this record.)
    Date
    22. 2.1996 13:14:10
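    As flagged in the abstract above, here is a minimal sketch of the user-choices yardstick, assuming a simple hits-in-top-k comparison; the algorithm names, term lists, and the measure itself are illustrative assumptions, not the paper's data or exact metrics.

      def hits_at_k(ranked_terms, user_choices, k=10):
          # Count how many of the searcher's chosen terms an algorithm ranks in its top k.
          return sum(1 for term in ranked_terms[:k] if term in user_choices)

      # Hypothetical rankings produced by two term-ranking algorithms for one search.
      ranking_a = ["thesauri", "indexing", "relevance feedback", "ranking", "retrieval"]
      ranking_b = ["ranking", "retrieval", "thesauri", "relevance feedback", "indexing"]
      chosen = {"thesauri", "relevance feedback"}   # terms the searcher actually selected

      for name, ranking in [("algorithm A", ranking_a), ("algorithm B", ranking_b)]:
          print(name, hits_at_k(ranking, chosen, k=3))

    Algorithms that consistently place user-selected terms near the top of their lists score higher under such a yardstick, which is the user-oriented contrast to system-oriented evaluation described in the abstract.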
  3. Efthimiadis, E.N.: Query expansion (1996) 0.08
    0.08309829 = product of:
      0.24929488 = sum of:
        0.24929488 = weight(_text_:query in 4847) [ClassicSimilarity], result of:
          0.24929488 = score(doc=4847,freq=14.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            1.0868655 = fieldWeight in 4847, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0625 = fieldNorm(doc=4847)
      0.33333334 = coord(1/3)
    
    Abstract
    State-of-the-art review of query expansion (or term expansion), the process of supplementing the original query with additional terms in order to improve retrieval performance. Research in the subject is presented in a highly structured way, organized according to 3 types of query expansion: manual query expansion, automatic query expansion, and interactive query expansion.
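    A minimal sketch of the three types of query expansion distinguished above; the candidate-term source and the selection callback are assumptions for illustration, not prescribed by the review.

      from typing import Callable, List

      def expand_query(query: List[str],
                       mode: str,
                       suggest: Callable[[List[str]], List[str]],
                       choose: Callable[[List[str]], List[str]]) -> List[str]:
          if mode == "manual":
              # The searcher supplies additional terms without system suggestions.
              extra = choose([])
          elif mode == "automatic":
              # The system adds its top-ranked candidate terms without consulting the searcher.
              extra = suggest(query)[:5]
          elif mode == "interactive":
              # The system suggests candidates and the searcher selects from the list.
              extra = choose(suggest(query))
          else:
              raise ValueError(f"unknown expansion mode: {mode}")
          return query + [t for t in extra if t not in query]

      # Example: interactive expansion with a hypothetical suggester and chooser.
      suggest = lambda q: ["thesauri", "indexing", "online searching"]
      choose = lambda terms: [t for t in terms if t != "indexing"]
      print(expand_query(["query", "expansion"], "interactive", suggest, choose))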
  4. Efthimiadis, E.N.: Interactive query expansion : a user-based evaluation in a relevance feedback environment (2000) 0.07
    0.06662686 = product of:
      0.19988059 = sum of:
        0.19988059 = weight(_text_:query in 5701) [ClassicSimilarity], result of:
          0.19988059 = score(doc=5701,freq=36.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.8714311 = fieldWeight in 5701, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.03125 = fieldNorm(doc=5701)
      0.33333334 = coord(1/3)
    
    Abstract
    A user-centered investigation of interactive query expansion within the context of a relevance feedback system is presented in this article. Data were collected from 25 searches using the INSPEC database. The data collection mechanisms included questionnaires, transaction logs, and relevance evaluations. The results discuss issues that relate to query expansion, retrieval effectiveness, the correspondence of the on-line-to-off-line relevance judgments, and the selection of terms for query expansion by users (interactive query expansion). The main conclusions drawn from the results of the study are that: (1) one-third of the terms presented to users in a list of candidate terms for query expansion were identified by the users as potentially useful for query expansion. (2) These terms were mainly judged as either variant expressions (synonyms) or alternative (related) terms to the initial query terms. However, a substantial portion of the selected terms were identified as representing new ideas. (3) The relationships identified between the five best terms selected by the users for query expansion and the initial query terms were that: (a) 34% of the query expansion terms have no relationship or other type of correspondence with a query term; (b) the remaining 66% of the query expansion terms have a relationship to the query terms. These relationships were: narrower term (46%), broader term (3%), related term (17%). (4) The results provide evidence for the effectiveness of interactive query expansion. The initial search produced on average three highly relevant documents; the query expansion search produced on average nine further highly relevant documents. The conclusions highlight the need for more research on: interactive query expansion, the comparative evaluation of automatic vs. interactive query expansion, the study of weighted Web-based or Web-accessible retrieval systems in operational environments, and user studies in searching ranked retrieval systems in general.
  5. Efthimiadis, E.N.: Approaches to search formulation and query expansion in information systems : DRS, DBMS, ES (1992) 0.06
    0.055522382 = product of:
      0.16656715 = sum of:
        0.16656715 = weight(_text_:query in 3871) [ClassicSimilarity], result of:
          0.16656715 = score(doc=3871,freq=4.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.7261926 = fieldWeight in 3871, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.078125 = fieldNorm(doc=3871)
      0.33333334 = coord(1/3)
    
    Abstract
    Discusses the ways in which systems and/or users formulate and reformulate searches in document retrieval systems (DRS), database management systems (DBMS), and expert systems (ES). Concludes that query formulation and reformulation have been neglected in these fields.
  6. Fidel, R.; Efthimiadis, E.N.: Terminological knowledge structure for intermediary expert systems (1995) 0.02
    0.023556154 = product of:
      0.07066846 = sum of:
        0.07066846 = weight(_text_:query in 5695) [ClassicSimilarity], result of:
          0.07066846 = score(doc=5695,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.30809742 = fieldWeight in 5695, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=5695)
      0.33333334 = coord(1/3)
    
    Abstract
    To provide advice for online searching about term selection and query expansion, an intermediary expert system should indicate a terminological knowledge structure. Terminological attributes could provide the foundation of a knowledge base, and knowledge acquisition could rely on knowledge base techniques coupled with statistical techniques. The strategies of expert searchers would provide one source of knowledge. The knowledge structure would include 3 constructs for each term: frequency data, a hedge, and a position in a classification scheme. Switching vocabularies could provide a meta-scheme and facilitate the interoperability of databases in similar subjects. To develop such a knowledge structure, research should focus on terminological attributes, word and phrase disambiguation, automated text processing, and the role of thesauri and classification schemes in indexing and retrieval. It should develop techniques that combine knowledge base and statistical methods and that consider user preferences.
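    A minimal sketch of the per-term knowledge structure proposed above (frequency data, a hedge, and a position in a classification scheme); the field names, types, and sample values are illustrative assumptions, not taken from the article.

      from dataclasses import dataclass, field
      from typing import Dict, List

      @dataclass
      class TermRecord:
          term: str
          frequency: Dict[str, int] = field(default_factory=dict)  # postings counts per database (frequency data)
          hedge: List[str] = field(default_factory=list)           # stored block of alternative terms to OR with the term
          classification: str = ""                                 # notation locating the term in a classification scheme

      # Hypothetical entry in such a knowledge base.
      knowledge_base: Dict[str, TermRecord] = {
          "thesauri": TermRecord(
              term="thesauri",
              frequency={"INSPEC": 842, "LISA": 305},              # hypothetical counts
              hedge=["thesaurus", "controlled vocabularies", "subject headings"],
              classification="025.49",                             # hypothetical notation
          )
      }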