Search (23 results, page 1 of 2)

  • × theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  • × year_i:[1990 TO 2000}
  1. Poynder, R.: Web research engines? (1996) 0.12
    0.12309804 = product of:
      0.18464705 = sum of:
        0.0986154 = weight(_text_:search in 5698) [ClassicSimilarity], result of:
          0.0986154 = score(doc=5698,freq=12.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.5643796 = fieldWeight in 5698, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5698)
        0.08603165 = product of:
          0.1720633 = sum of:
            0.1720633 = weight(_text_:engines in 5698) [ClassicSimilarity], result of:
              0.1720633 = score(doc=5698,freq=8.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.67362815 = fieldWeight in 5698, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5698)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
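    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(tf) x idf x fieldNorm, and coord() scales for the fraction of query clauses that matched. A minimal Python sketch using only the numbers listed in the tree (helper names are illustrative):

      from math import sqrt

      def term_weight(freq, idf, field_norm, query_norm):
          # ClassicSimilarity: (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
          return (idf * query_norm) * (sqrt(freq) * idf * field_norm)

      QUERY_NORM = 0.05027291

      # weight(_text_:search in 5698): freq=12, idf=3.475677, fieldNorm=0.046875
      w_search = term_weight(12.0, 3.475677, 0.046875, QUERY_NORM)        # ~0.0986154

      # weight(_text_:engines in 5698): freq=8, idf=5.080822, fieldNorm=0.046875,
      # wrapped in an inner coord(1/2)
      w_engines = 0.5 * term_weight(8.0, 5.080822, 0.046875, QUERY_NORM)  # ~0.08603165

      score = (w_search + w_engines) * (2.0 / 3.0)   # outer coord(2/3)
      print(round(score, 8))                         # ~0.12309804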
    
    Abstract
    Describes the shortcomings of search engines for the WWW, comparing their current capabilities to those of the first-generation CD-ROM products. Some allow phrase searching and most are improving their Boolean searching. Few allow truncation, wild cards or nested logic. They are stateless, losing previous search criteria. Unlike the indexing and classification systems for today's CD-ROMs, those for Web pages are random, unstructured and of variable quality. Considers that at best Web search engines can only offer free text searching. Discusses whether automatic data classification systems such as Infoseek Ultra can overcome the haphazard nature of the Web with neural network technology, and whether Boolean search techniques may become redundant when replaced by technology such as the Euroferret search engine. However, artificial intelligence is rarely successful on huge, varied databases. Relevance ranking and automatic query expansion still use the same simple inverted indexes. Most Web search engines do nothing more than word counting. Further complications arise with foreign languages.
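    The 'word counting' over 'simple inverted indexes' that the abstract criticises can be illustrated with a toy sketch (illustrative only, not any particular engine's implementation):

      from collections import defaultdict

      def build_index(docs):
          # Toy inverted index: term -> {doc_id: term frequency}
          index = defaultdict(dict)
          for doc_id, text in docs.items():
              for term in text.lower().split():
                  index[term][doc_id] = index[term].get(doc_id, 0) + 1
          return index

      def rank(index, query):
          # "Word counting": score a document by summing raw term frequencies
          scores = defaultdict(int)
          for term in query.lower().split():
              for doc_id, tf in index.get(term, {}).items():
                  scores[doc_id] += tf
          return sorted(scores.items(), key=lambda item: -item[1])

      docs = {1: "web search engines index the web", 2: "boolean search on cd-rom"}
      print(rank(build_index(docs), "web search"))   # doc 1 ranks first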
  2. Schwartz, C.: Web search engines (1998) 0.11
    0.10629931 = product of:
      0.15944897 = sum of:
        0.0986154 = weight(_text_:search in 5700) [ClassicSimilarity], result of:
          0.0986154 = score(doc=5700,freq=12.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.5643796 = fieldWeight in 5700, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5700)
        0.060833566 = product of:
          0.12166713 = sum of:
            0.12166713 = weight(_text_:engines in 5700) [ClassicSimilarity], result of:
              0.12166713 = score(doc=5700,freq=4.0), product of:
                0.25542772 = queryWeight, product of:
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.05027291 = queryNorm
                0.47632706 = fieldWeight in 5700, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.080822 = idf(docFreq=746, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5700)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This review looks briefly at the history of WWW search engine development, considers the current state of affairs, and reflects on the future. Networked discovery tools have evolved along with Internet resource availability. WWW search engines display some complexity in their variety, content, resource acquisition strategies, and in the array of tools they deploy to assist users. A small but growing body of evaluation literature, much of it not systematic in nature, indicates that performance effectiveness is difficult to assess in this setting. Significant improvements in general-content search engine retrieval and ranking performance may not be possible, and are probably not worth the effort, although search engine providers have introduced some rudimentary attempts at personalization, summarization, and query expansion. The shift to distributed search across multitype database systems could extend general networked discovery and retrieval to include smaller resource collections with rich metadata and navigation tools.
  3. Fieldhouse, M.; Hancock-Beaulieu, M.: ¬The design of a graphical user interface for a highly interactive information retrieval system (1996) 0.06
    0.060176264 = product of:
      0.090264395 = sum of:
        0.0664249 = weight(_text_:search in 6958) [ClassicSimilarity], result of:
          0.0664249 = score(doc=6958,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.38015217 = fieldWeight in 6958, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6958)
        0.0238395 = product of:
          0.047679 = sum of:
            0.047679 = weight(_text_:22 in 6958) [ClassicSimilarity], result of:
              0.047679 = score(doc=6958,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2708308 = fieldWeight in 6958, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6958)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Reports on the design of a GUI for the Okapi 'best match' retrieval system developed at the Centre for Interactive Systems Research, City University, UK, for online library catalogues. The X-Windows interface includes an interactive query expansion (IQE) facility which involves the user in the selection of query terms to reformulate a search. Presents the design rationale, based on a game board metaphor, and describes the features of each of the stages of the search interaction. Reports on the early operational field trial and discusses relevant evaluation issues and objectives.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  4. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.05
    0.047206 = product of:
      0.070809 = sum of:
        0.0469695 = weight(_text_:search in 1319) [ClassicSimilarity], result of:
          0.0469695 = score(doc=1319,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.2688082 = fieldWeight in 1319, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.0238395 = product of:
          0.047679 = sum of:
            0.047679 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
              0.047679 = score(doc=1319,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.2708308 = fieldWeight in 1319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Keyword-based querying has been an immediate and efficient way to specify and retrieve the related information that a user requires. However, conventional document ranking based on an automatic assessment of document relevance to the query may not be the best approach when little information is given. Proposes integrating 2 existing techniques, query expansion and relevance feedback, to achieve a concept-based information search for the Web.
    Date
    1. 8.1996 22:08:06
  5. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.03
    0.03371857 = product of:
      0.050577857 = sum of:
        0.03354964 = weight(_text_:search in 5697) [ClassicSimilarity], result of:
          0.03354964 = score(doc=5697,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.19200584 = fieldWeight in 5697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5697)
        0.017028214 = product of:
          0.03405643 = sum of:
            0.03405643 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
              0.03405643 = score(doc=5697,freq=2.0), product of:
                0.17604718 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05027291 = queryNorm
                0.19345059 = fieldWeight in 5697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5697)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The performance of 8 ranking algorithms was evaluated with respect to their effectiveness in ranking terms for query expansion. The evaluation was conducted within an investigation of interactive query expansion and relevance feedback in a real operational environment. Focuses on the identification of algorithms that most effectively take cognizance of user preferences. User choices (i.e. the terms selected by the searchers for the query expansion search) provided the yardstick for the evaluation of the 8 ranking algorithms. This methodology introduces a user-oriented approach to evaluating ranking algorithms for query expansion, in contrast to the standard, system-oriented approaches. Similarities in the performance of the 8 algorithms and the ways these algorithms rank terms were the main focus of this evaluation. The findings demonstrate that the r-lohi, wpq, emim, and porter algorithms have similar performance in bringing good terms to the top of a ranked list of terms for query expansion. However, further evaluation of the algorithms in different (e.g. full text) environments is needed before these results can be generalized beyond the context of the present study.
    Date
    22. 2.1996 13:14:10
  6. Hancock-Beaulieu, M.: Query expansion : advances in research in online catalogues (1992) 0.03
    0.031630907 = product of:
      0.09489272 = sum of:
        0.09489272 = weight(_text_:search in 2351) [ClassicSimilarity], result of:
          0.09489272 = score(doc=2351,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.54307455 = fieldWeight in 2351, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.078125 = fieldNorm(doc=2351)
      0.33333334 = coord(1/3)
    
    Abstract
    Query expansion is the process of supplementing or replacing the original query terms with additional terms either at the search formulation or search reformulation stages. Different approaches to implementing query expansion are considered in three online catalogs
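    As a minimal illustration of that process (the related-term list and terms below are invented for the example):

      # Hypothetical related-term list, e.g. from a thesaurus or co-occurrence data
      related = {"catalogue": ["opac", "catalog"], "retrieval": ["searching"]}

      def expand(query_terms, related_terms):
          # Supplement the original terms with additional, related terms
          expanded = list(query_terms)
          for term in query_terms:
              expanded.extend(related_terms.get(term, []))
          return expanded

      print(expand(["catalogue", "retrieval"], related))
      # ['catalogue', 'retrieval', 'opac', 'catalog', 'searching']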
  7. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.03
    0.026839714 = product of:
      0.08051914 = sum of:
        0.08051914 = weight(_text_:search in 5202) [ClassicSimilarity], result of:
          0.08051914 = score(doc=5202,freq=8.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.460814 = fieldWeight in 5202, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded on object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
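    A generic sketch of the kind of co-occurrence analysis referred to: association strength between two terms computed from how often they appear in the same document (a Jaccard-style formulation for illustration, not the authors' exact weighting):

      from collections import Counter
      from itertools import combinations

      def cooccurrence(doc_terms):
          # Count documents containing each term and each term pair
          pair_counts, term_counts = Counter(), Counter()
          for terms in doc_terms:
              unique = set(terms)
              term_counts.update(unique)
              pair_counts.update(frozenset(p) for p in combinations(sorted(unique), 2))
          return pair_counts, term_counts

      def association(a, b, pair_counts, term_counts):
          # Jaccard-style association: co-occurrences / documents containing either term
          both = pair_counts[frozenset((a, b))]
          return both / (term_counts[a] + term_counts[b] - both)

      docs = [["parallel", "computing", "indexing"],
              ["automatic", "indexing", "thesaurus"],
              ["parallel", "indexing"]]
      pairs, terms = cooccurrence(docs)
      print(association("parallel", "indexing", pairs, terms))   # 2 / 3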
  8. Oakes, M.P.; Taylor, M.J.: Automated assistance in the formulation of search statements for bibliographic databases (1998) 0.03
    0.026839714 = product of:
      0.08051914 = sum of:
        0.08051914 = weight(_text_:search in 6419) [ClassicSimilarity], result of:
          0.08051914 = score(doc=6419,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.460814 = fieldWeight in 6419, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.09375 = fieldNorm(doc=6419)
      0.33333334 = coord(1/3)
    
  9. Efthimiadis, E.N.: Approaches to search formulation and query expansion in information systems : DRS, DBMS, ES (1992) 0.02
    0.022366427 = product of:
      0.06709928 = sum of:
        0.06709928 = weight(_text_:search in 3871) [ClassicSimilarity], result of:
          0.06709928 = score(doc=3871,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.3840117 = fieldWeight in 3871, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.078125 = fieldNorm(doc=3871)
      0.33333334 = coord(1/3)
    
  10. Hancock-Beaulieu, M.; Fieldhouse, M.; Do, T.: ¬A graphical interface for OKAPI : the design and evaluation of an online catalogue system with direct manipulation interaction for subject access (1994) 0.02
    0.022141634 = product of:
      0.0664249 = sum of:
        0.0664249 = weight(_text_:search in 1318) [ClassicSimilarity], result of:
          0.0664249 = score(doc=1318,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.38015217 = fieldWeight in 1318, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1318)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes a project to design a graphical user interface for the OKAPI online catalogue search system, which uses a basic term-weighting probabilistic search engine. Presents the research context of the project with a discussion of interface and functionality issues relating to the design of OPACs. Describes the design methodology and evaluation methodology. Presents the preliminary results of the field trial evaluation. Considers problems encountered in the field trial and discusses contributory factors to the effectiveness of interactive query expansion. Highlights the tension between usability and functionality in highly interactive retrieval and suggests further areas of research.
  11. Hancock-Beaulieu, M.; Walker, S.: ¬An evaluation of automatic query expansion in an online library catalogue (1992) 0.02
    0.022141634 = product of:
      0.0664249 = sum of:
        0.0664249 = weight(_text_:search in 2731) [ClassicSimilarity], result of:
          0.0664249 = score(doc=2731,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.38015217 = fieldWeight in 2731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2731)
      0.33333334 = coord(1/3)
    
    Abstract
    An automatic query expansion (AQE) facility in an online catalogue was evaluated in an operational library setting. The OKAPI experimental system had other features including: ranked output 'best match' keyword searching, automatic stemming, spelling normalisation and cross referencing, as well as relevance feedback. A combination of transaction log analysis, search replays, questionnaires and interviews was used for data collection. Findings show that, contrary to previous results, AQE was beneficial in a substantial number of searches. Use intentions, the effectiveness of the 'best match' search and user interaction were identified as the main factors affecting the take-up of the query expansion facility.
  12. Robertson, S.E.: OKAPI at TREC-3 (1995) 0.02
    0.022141634 = product of:
      0.0664249 = sum of:
        0.0664249 = weight(_text_:search in 5694) [ClassicSimilarity], result of:
          0.0664249 = score(doc=5694,freq=4.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.38015217 = fieldWeight in 5694, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5694)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports text information retrieval experiments performed as part of the 3rd round of Text Retrieval Conferences (TREC) using the Okapi online catalogue system at City University, UK. The emphasis in TREC-3 was: further refinement of term weighting functions; an investigation of run time passage determination and searching; expansion of ad hoc queries by terms extracted from the top documents retrieved by a trial search; new methods for choosing query expansion terms after relevance feedback, now split into methods of ranking terms prior to selection and subsequent selection procedures; and the development of a user interface procedure within the new TREC interactive search framework.
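    The term-weighting functions refined in these experiments are the Okapi BM family (BM25 dates from this TREC-3 work). A simplified sketch of the commonly cited BM25 term weight (k1 and b below are typical default values, not taken from the paper):

      from math import log

      def bm25_weight(tf, df, N, doc_len, avg_doc_len, k1=1.2, b=0.75):
          # idf scaled by a saturating, length-normalised term frequency
          idf = log((N - df + 0.5) / (df + 0.5))
          tf_norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
          return idf * tf_norm

      # e.g. a term occurring 3 times in an average-length document,
      # present in 100 of 10,000 documents
      print(round(bm25_weight(tf=3, df=100, N=10_000, doc_len=200, avg_doc_len=200), 3))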
  13. Hancock-Beaulieu, M.: Evaluating the impact of an online library catalogue on subject searching behaviour at the catalogue and at the shelves (1990) 0.02
    0.019369897 = product of:
      0.058109686 = sum of:
        0.058109686 = weight(_text_:search in 5691) [ClassicSimilarity], result of:
          0.058109686 = score(doc=5691,freq=6.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.33256388 = fieldWeight in 5691, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5691)
      0.33333334 = coord(1/3)
    
    Abstract
    The second half of a 'before and after' study to evaluate the impact of an online catalogue on subject searching behaviour is reported. A holistic approach is adopted encompassing both catalogue use and browsing at the shelves for catalogue users and non-users. Verbal and non-verbal data were elicited from searchers using a combined methodology including talk-aloud technique, observation and a screen logging facility. An extensive qualitative analysis was carried out correlating expressed topics, search formulation strategies and documents retrieved at the shelves. The online catalogue environment does not appear to have increased the extent of subject searching nor the use of the bibliographic tool. The manual PRECIS index supported a contextual approach for broad and more interactive search formulations whereas the OPAC encouraged a matching approach and narrow formulations with fewer but user generated formulations. The success rate of the online catalogue was slightly better than that of the manual tools but fewer items were retrieved at the shelves. Non-users of the bibliographic tools seemed to be just as successful. To improve retrieval effectiveness it is suggested that online catalogues should cater for both matching and contextual approaches to searching. Recent research indicates that a more interactive process could be promoted by providing query expansion through a combination of searching aids for matching, for search formulation assistance and for structured contextual retrieval
  14. Srinivasan, P.: Query expansion and MEDLINE (1996) 0.02
    0.017893143 = product of:
      0.053679425 = sum of:
        0.053679425 = weight(_text_:search in 8453) [ClassicSimilarity], result of:
          0.053679425 = score(doc=8453,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.30720934 = fieldWeight in 8453, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=8453)
      0.33333334 = coord(1/3)
    
    Abstract
    Evaluates the retrieval effectiveness of query expansion strategies on a test collection of the medical database MEDLINE using Cornell University's SMART retrieval system. Tests 3 expansion strategies for their ability to identify appropriate MeSH terms for user queries. Compares retrieval effectiveness using the original unexpanded and the alternative expanded user queries on a collection of 75 queries and 2,334 MEDLINE citations. Recommends query expansion using retrieval feedback for adding MeSH search terms to a user's initial query.
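    Retrieval feedback of this kind is commonly illustrated with a Rocchio-style reweighting: the original query is supplemented with the highest-weighted terms from documents retrieved (and assumed relevant) in a first pass. The sketch below is a generic illustration, not SMART's exact configuration; the negative-feedback term is omitted and the term weights are invented:

      from collections import Counter

      def rocchio_expand(query, relevant_docs, alpha=1.0, beta=0.75, extra_terms=3):
          # Rocchio-style expansion: original weights plus the centroid of relevant documents
          expanded = Counter({t: alpha * w for t, w in query.items()})
          for doc in relevant_docs:
              for term, weight in doc.items():
                  expanded[term] += beta * weight / len(relevant_docs)
          return dict(expanded.most_common(len(query) + extra_terms))

      query = {"query": 1.0, "expansion": 1.0}
      feedback = [{"expansion": 2, "medline": 1, "mesh": 3}, {"mesh": 2, "retrieval": 1}]
      print(rocchio_expand(query, feedback))   # original terms plus e.g. 'mesh'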
  15. Hancock-Beaulieu, M.; Fieldhouse, M.; Do, T.: ¬An evaluation of interactive query expansion in an online library catalogue with a graphical user interface (1995) 0.02
    0.015656501 = product of:
      0.0469695 = sum of:
        0.0469695 = weight(_text_:search in 1666) [ClassicSimilarity], result of:
          0.0469695 = score(doc=1666,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.2688082 = fieldWeight in 1666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1666)
      0.33333334 = coord(1/3)
    
    Abstract
    An online library catalogue served as a testbed to evaluate an interactive query expansion facility based on relevance feedback for the Okapi probabilistic term weighting retrieval system. The facility was implemented in a graphical user interface (GUI) environment using a game-board metaphor for the search process, and allowed searchers to select candidate terms extracted from relevant retrieved items to reformulate queries. The take-up of the interactive query expansion option was found to be lower, and its retrieval performance less effective, compared to previous tests featuring automatic query expansion. Contributory factors including the number, presentation and source of terms are discussed.
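    In Okapi-style interactive query expansion, the candidate terms extracted from relevant items are usually ranked before being offered to the searcher. A hedged sketch using the Robertson/Sparck Jones relevance weight and the common 'offer weight' of r x rw (standard forms from the literature, not necessarily the exact variant used in this study):

      from math import log

      def relevance_weight(r, n, R, N):
          # Robertson/Sparck Jones relevance weight with 0.5 smoothing:
          # r relevant items contain the term, n documents overall, R relevant, N total
          return log(((r + 0.5) * (N - n - R + r + 0.5)) /
                     ((n - r + 0.5) * (R - r + 0.5)))

      def offer_weight(r, n, R, N):
          # Rank candidate expansion terms by r * relevance weight
          return r * relevance_weight(r, n, R, N)

      # e.g. a term in 4 of 5 relevant items and 50 of 10,000 documents overall
      print(round(offer_weight(r=4, n=50, R=5, N=10_000), 2))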
  16. Beaulieu, M.: Experiments on interfaces to support query expansion (1997) 0.02
    0.015656501 = product of:
      0.0469695 = sum of:
        0.0469695 = weight(_text_:search in 4704) [ClassicSimilarity], result of:
          0.0469695 = score(doc=4704,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.2688082 = fieldWeight in 4704, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4704)
      0.33333334 = coord(1/3)
    
    Abstract
    Focuses on the user and human-computer interaction (HCI) aspects of the research based on the Okapi text retrieval system. Describes 3 experiments using different approaches to query expansion, highlighting the relationship between the functionality of a system and different interface designs. These experiments involve both automatic and interactive query expansion, and both character based and GUI (graphical user interface) environments. The effectiveness of the search interaction for query expansion depends on resolving opposing interface and functional aspects, e.g. automatic vs. interactive query expansion, explicit vs. implicit use of a thesaurus, and document vs. query space
  17. Hemmje, M.: LyberWorld - a 3D graphical user interface for fulltext retrieval (1995) 0.02
    0.015656501 = product of:
      0.0469695 = sum of:
        0.0469695 = weight(_text_:search in 2385) [ClassicSimilarity], result of:
          0.0469695 = score(doc=2385,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.2688082 = fieldWeight in 2385, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2385)
      0.33333334 = coord(1/3)
    
    Abstract
    LyberWorld is a prototype IR user interface. It implements visualizations of an abstract information space: fulltext. The video demonstrates a visual user interface for the probabilistic fulltext retrieval system INQUERY. Visualizations are used to communicate information search and browsing activities in a natural way by applying metaphors of spatial navigation in abstract information spaces. Visualization tools for exploring information spaces and judging relevance of information items are introduced and an example session demonstrates the prototype. The presence of a spatial model in the user's mind is regarded as an essential contribution towards natural interaction and reduction of cognitive costs during retrieval dialogues.
  18. Hemmje, M.; Kunkel, C.; Willett, A.: LyberWorld - a visualization user interface supporting fulltext retrieval (1994) 0.01
    0.013419857 = product of:
      0.04025957 = sum of:
        0.04025957 = weight(_text_:search in 2384) [ClassicSimilarity], result of:
          0.04025957 = score(doc=2384,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.230407 = fieldWeight in 2384, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=2384)
      0.33333334 = coord(1/3)
    
    Abstract
    LyberWorld is a prototype IR user interface. It implements visualizations of an abstract information space: fulltext. The paper derives a model for such visualizations and an exemplar user interface design is implemented for the probabilistic fulltext retrieval system INQUERY. Visualizations are used to communicate information search and browsing activities in a natural way by applying metaphors of spatial navigation in abstract information spaces. Visualization tools for exploring information spaces and judging relevance of information items are introduced and an example session demonstrates the prototype. The presence of a spatial model in the user's mind and interaction with a system's corresponding display methods is regarded as an essential contribution towards natural interaction and reduction of cognitive costs during e.g. query construction, orientation within the database content, relevance judgement and orientation within the retrieval context.
  19. Tseng, Y.-H.: Solving vocabulary problems with interactive query expansion (1998) 0.01
    0.011183213 = product of:
      0.03354964 = sum of:
        0.03354964 = weight(_text_:search in 5159) [ClassicSimilarity], result of:
          0.03354964 = score(doc=5159,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.19200584 = fieldWeight in 5159, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5159)
      0.33333334 = coord(1/3)
    
    Abstract
    One of the major causes of search failures in information retrieval systems is vocabulary mismatch. Presents a solution to the vocabulary problem through 2 strategies known as term suggestion (TS) and term relevance feedback (TRF). In TS, collection-specific terms are extracted from the text collection. These terms and their frequencies constitute the keyword database for suggesting terms in response to users' queries. One effect of this term suggestion is that it functions as a dynamic directory if the query is a general term that contains broad meaning. In term relevance feedback, terms extracted from the top-ranked documents retrieved by the previous query are shown to users for relevance feedback. In the experiment, interactive TS provides very high precision rates while achieving recall rates similar to those of n-gram matching. Local TRF achieves improvement in both precision and recall rate in a full text news database and degrades slightly in recall rate in bibliographic databases due to the very limited source of information for feedback. In terms of Rijsbergen's combined measure of recall and precision, both TS and TRF achieve better performance than n-gram matching, which implies that the greater improvement in precision rate compensates for the slight degradation in recall rate for TS and TRF.
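    The 'combined measure' referred to is van Rijsbergen's E (reported equivalently as F = 1 - E). A small sketch for the balanced case (beta = 1), with invented document sets:

      def precision_recall(retrieved, relevant):
          hits = len(retrieved & relevant)
          return hits / len(retrieved), hits / len(relevant)

      def f_measure(precision, recall, beta=1.0):
          # van Rijsbergen's combined effectiveness measure (F = 1 - E)
          if precision == 0 and recall == 0:
              return 0.0
          return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

      retrieved = {"d1", "d2", "d3", "d4"}
      relevant = {"d1", "d2", "d5"}
      p, r = precision_recall(retrieved, relevant)
      print(p, r, round(f_measure(p, r), 3))   # 0.5, 0.666..., 0.571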
  20. Nagao, M.: Knowledge and inference (1990) 0.01
    0.011183213 = product of:
      0.03354964 = sum of:
        0.03354964 = weight(_text_:search in 3304) [ClassicSimilarity], result of:
          0.03354964 = score(doc=3304,freq=2.0), product of:
            0.1747324 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.05027291 = queryNorm
            0.19200584 = fieldWeight in 3304, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3304)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge and Inference discusses an important problem for software systems: How do we treat knowledge and ideas on a computer and how do we use inference to solve problems on a computer? The book talks about the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. The book begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in artificial intelligence: search and problem solving, methods of making proofs, and the use of knowledge in looking for a proof. There is also a discussion of how to use the knowledge system. The final chapter describes a popular expert system. It describes tools for building expert systems using an example based on Expert Systems - A Practical Introduction by P. Sell (Macmillan, 1985). This type of software is called an "expert system shell". This book was written as a textbook for undergraduate students, covering only the basics but explaining as much detail as possible.