Search (24 results, page 1 of 2)

  • theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  • year_i:[1990 TO 2000}
  1. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.01
    0.014141314 = product of:
      0.056565255 = sum of:
        0.041774936 = weight(_text_:data in 2230) [ClassicSimilarity], result of:
          0.041774936 = score(doc=2230,freq=6.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.3630661 = fieldWeight in 2230, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2230)
        0.014790321 = product of:
          0.029580642 = sum of:
            0.029580642 = weight(_text_:22 in 2230) [ClassicSimilarity], result of:
              0.029580642 = score(doc=2230,freq=2.0), product of:
                0.12742549 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03638826 = queryNorm
                0.23214069 = fieldWeight in 2230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2230)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
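
    The tree above is Lucene's "explain" output for its ClassicSimilarity scorer, and the same pattern repeats for every result below. Assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), the leaf values can be reproduced as a quick check; the function and variable names in this sketch are illustrative, not part of the retrieval system:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Reproduce one weight(_text_:...) leaf of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # 2.4494898 for freq=6
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.1620505 for docFreq=5088
    query_weight = idf * query_norm                  # queryWeight = 0.115061514
    field_weight = tf * idf * field_norm             # fieldWeight = 0.3630661
    return query_weight * field_weight

# The "data" leaf of result 1 (doc 2230); leaf scores are then summed and
# scaled by coord(matching clauses / total clauses), here 2/8 = 0.25.
print(classic_term_score(6.0, 5088, 44218, 0.03638826, 0.046875))  # ~0.041774936
```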
    
    Abstract
    We present a deductive data model for concept-based query expansion. It is based on three abstraction levels: the conceptual, linguistic and occurrence levels. Concepts and the relationships among them are represented at the conceptual level. The linguistic (expression) level represents natural language expressions for concepts. Each expression has one or more matching models at the occurrence level; each model specifies how the expression is matched in database indices built in varying ways. The data model supports a concept-based query expansion and formulation tool, the ExpansionTool, for environments providing heterogeneous IR systems. Expansion is controlled by adjustable matching reliability.
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
  2. Fidel, R.; Efthimiadis, E.N.: Terminological knowledge structure for intermediary expert systems (1995) 0.01
    0.013017729 = product of:
      0.052070916 = sum of:
        0.02411877 = weight(_text_:data in 5695) [ClassicSimilarity], result of:
          0.02411877 = score(doc=5695,freq=2.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.2096163 = fieldWeight in 5695, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5695)
        0.027952146 = product of:
          0.05590429 = sum of:
            0.05590429 = weight(_text_:processing in 5695) [ClassicSimilarity], result of:
              0.05590429 = score(doc=5695,freq=4.0), product of:
                0.14730503 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03638826 = queryNorm
                0.3795138 = fieldWeight in 5695, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5695)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    To provide advice for online searching about term selection and query expansion, an intermediary expert system should incorporate a terminological knowledge structure. Terminological attributes could provide the foundation of a knowledge base, and knowledge acquisition could rely on knowledge base techniques coupled with statistical techniques. The strategies of expert searchers would provide one source of knowledge. The knowledge structure would include three constructs for each term: frequency data, a hedge, and a position in a classification scheme. Switching vocabularies could provide a meta-scheme and facilitate the interoperability of databases in similar subjects. To develop such a knowledge structure, research should focus on terminological attributes, word and phrase disambiguation, automated text processing, and the role of thesauri and classification schemes in indexing and retrieval. It should develop techniques that combine knowledge base and statistical methods and that consider user preferences.
    Source
    Information processing and management. 31(1995) no.1, S.15-27
  3. Robertson, S.E.; Walker, S.; Hancock-Beaulieu, M.M.: Large test collection experiments of an operational, interactive system : OKAPI at TREC (1995) 0.01
    0.012799477 = product of:
      0.05119791 = sum of:
        0.028138565 = weight(_text_:data in 6964) [ClassicSimilarity], result of:
          0.028138565 = score(doc=6964,freq=2.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.24455236 = fieldWeight in 6964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6964)
        0.023059342 = product of:
          0.046118684 = sum of:
            0.046118684 = weight(_text_:processing in 6964) [ClassicSimilarity], result of:
              0.046118684 = score(doc=6964,freq=2.0), product of:
                0.14730503 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03638826 = queryNorm
                0.3130829 = fieldWeight in 6964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6964)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The Okapi system has been used in a series of experiments on the TREC collections, investigating probabilistic methods, relevance feedback and query expansion, and interaction issues. Some new probabilistic models have been developed, resulting in simple weighting functions that take account of document length and within-document and within-query term frequency. All have been shown to be beneficial when based on large quantities of relevance data, as in the routing task. Interaction issues are much more difficult to evaluate in the TREC framework, and no benefits have yet been demonstrated from feedback based on small numbers of 'relevant' items identified by intermediary searchers.
    Source
    Information processing and management. 31(1995) no.3, S.345-360
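
    The 'simple weighting functions' developed in this line of Okapi experiments are the ones that later became known as BM25. The abstract does not give the formula, so the sketch below shows the now-standard form; the k1 and b defaults are the conventional choices, not values from the paper:

```python
import math

def bm25_weight(tf, df, num_docs, doc_len, avg_doc_len, k1=1.2, b=0.75):
    """One term's BM25 contribution: idf downweights common terms, while the
    tf component saturates with within-document frequency and is normalized
    by document length relative to the collection average."""
    idf = math.log(1 + (num_docs - df + 0.5) / (df + 0.5))
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
```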
  4. Berry, M.W.; Dumais, S.T.; O'Brien, G.W.: Using linear algebra for intelligent information retrieval (1995) 0.01
    0.008319592 = product of:
      0.06655674 = sum of:
        0.06655674 = weight(_text_:higher in 2206) [ClassicSimilarity], result of:
          0.06655674 = score(doc=2206,freq=2.0), product of:
            0.19113865 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.03638826 = queryNorm
            0.34821182 = fieldWeight in 2206, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.046875 = fieldNorm(doc=2206)
      0.125 = coord(1/8)
    
    Abstract
    Currently, most approaches to retrieving textual materials from scientific databases depend on a lexical match between words in users' requests and those in or assigned to documents in a database. Because of the tremendous diversity in the words people use to describe the same document, lexical methods are necessarily incomplete and imprecise. Using the singular value decomposition (SVD), one can take advantage of the implicit higher-order structure in the association of terms with documents by determining the SVD of large sparse term-by-document matrices. Terms and documents represented by 200-300 of the largest singular vectors are then matched against user queries. We call this retrieval method Latent Semantic Indexing (LSI) because the subspace represents important associative relationships between terms and documents that are not evident in individual documents. LSI is a completely automatic yet intelligent indexing method, widely applicable, and a promising way to improve users...
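
    As a concrete illustration of the retrieval method described above, here is a minimal LSI sketch; the query folding (projecting onto the singular vectors and dividing by the singular values) follows the usual LSI formulation, and all names are illustrative:

```python
import numpy as np

def lsi_rank(term_doc, query_vec, k=2):
    """Rank documents against a query in a k-dimensional latent space
    (the paper keeps 200-300 singular vectors for real collections)."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    docs_k = Vt[:k].T                      # documents in the latent space
    q_k = (query_vec @ U[:, :k]) / s[:k]   # fold the query into that space
    sims = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1)
                           * np.linalg.norm(q_k) + 1e-12)
    return np.argsort(-sims)               # document indices, best match first
```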
  5. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.01
    0.007199057 = product of:
      0.057592455 = sum of:
        0.057592455 = sum of:
          0.03294192 = weight(_text_:processing in 5697) [ClassicSimilarity], result of:
            0.03294192 = score(doc=5697,freq=2.0), product of:
              0.14730503 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.03638826 = queryNorm
              0.22363065 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
          0.024650536 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
            0.024650536 = score(doc=5697,freq=2.0), product of:
              0.12742549 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03638826 = queryNorm
              0.19345059 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
      0.125 = coord(1/8)
    
    Date
    22. 2.1996 13:14:10
    Source
    Information processing and management. 31(1995) no.4, S.605-620
  6. Järvelin, K.: ¬A deductive data model for thesaurus navigation and query expansion (1996) 0.01
    0.0069624893 = product of:
      0.055699915 = sum of:
        0.055699915 = weight(_text_:data in 5625) [ClassicSimilarity], result of:
          0.055699915 = score(doc=5625,freq=6.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.48408815 = fieldWeight in 5625, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=5625)
      0.125 = coord(1/8)
    
    Abstract
    Describes a deductive data model based on 3 abstraction levels for representing vocabularies for information retrieval: the conceptual level, the expression level, and the occurrence level. The proposed data model can be used for the representation and navigation of indexing and retrieval thesauri, and as a vocabulary source for concept-based query expansion in heterogeneous retrieval environments.
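
    The three levels lend themselves to a straightforward object representation. The sketch below is illustrative rather than the paper's formalism; every class name, field, and the reliability threshold are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class MatchingModel:                # occurrence level
    index_name: str                 # e.g. a stemmed vs. an unstemmed index
    reliability: float              # adjustable matching reliability, 0..1

@dataclass
class Expression:                   # expression (linguistic) level
    text: str
    models: list[MatchingModel] = field(default_factory=list)

@dataclass
class Concept:                      # conceptual level
    name: str
    expressions: list[Expression] = field(default_factory=list)
    narrower: list["Concept"] = field(default_factory=list)

def expand(concept: Concept, min_reliability: float) -> set[str]:
    """Collect expansion terms for a concept and its narrower concepts,
    keeping only expressions with a sufficiently reliable matching model."""
    terms = {e.text for e in concept.expressions
             if any(m.reliability >= min_reliability for m in e.models)}
    for child in concept.narrower:
        terms |= expand(child, min_reliability)
    return terms
```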
  7. Ekmekcioglu, F.C.; Robertson, A.M.; Willett, P.: Effectiveness of query expansion in ranked-output document retrieval systems (1992) 0.01
    0.005684849 = product of:
      0.04547879 = sum of:
        0.04547879 = weight(_text_:data in 5689) [ClassicSimilarity], result of:
          0.04547879 = score(doc=5689,freq=4.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.3952563 = fieldWeight in 5689, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
      0.125 = coord(1/8)
    
    Abstract
    Reports an evaluation of 3 methods for the expansion of natural language queries in ranked-output retrieval systems. The methods are based on term co-occurrence data, on Soundex codes, and on a string similarity measure. Searches for 110 queries in a database of 26,280 titles and abstracts suggest that there is no significant difference in retrieval effectiveness between any of these methods and unexpanded searches.
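
    Of the three expansion methods, Soundex is the easiest to make concrete: words sharing a code become expansion candidates for one another. A minimal sketch of the standard algorithm (it assumes a non-empty alphabetic token):

```python
def soundex(word: str) -> str:
    """Standard 4-character Soundex code."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    word = word.upper()
    out, prev = word[0], codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "HW":   # H and W do not break a run of equally coded letters
            prev = code
    return (out + "000")[:4]

print(soundex("retrieval"), soundex("retreival"))  # R361 R361: misspellings collide
```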
  8. Järvelin, K.; Niemi, T.: Deductive information retrieval based on classifications (1993) 0.01
    0.005221867 = product of:
      0.041774936 = sum of:
        0.041774936 = weight(_text_:data in 2229) [ClassicSimilarity], result of:
          0.041774936 = score(doc=2229,freq=6.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.3630661 = fieldWeight in 2229, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2229)
      0.125 = coord(1/8)
    
    Abstract
    Modern fact databases contain abundant data classified through several classifications. Typically, users must consult these classifications in separate manuals or files, which makes their effective use difficult. Contemporary database systems provide little support for the deductive use of classifications. In this study we show how deductive data management techniques can be applied to the utilization of data value classifications. Computation of transitive class relationships is of primary importance here. We define a representation of classifications which supports transitive computation and present an operation-oriented deductive query language tailored for classification-based deductive information retrieval. The operations of this language are on the same abstraction level as relational algebra operations and can be integrated with them to form a powerful and flexible query language for deductive information retrieval. We define the integration of these operations and demonstrate the usefulness of the language in terms of several sample queries.
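
    The transitive computation the abstract emphasizes reduces, in the simplest case, to reachability over the direct subclass relation, as in this illustrative sketch:

```python
def transitive_subclasses(direct: dict[str, set[str]], cls: str) -> set[str]:
    """All classes reachable from cls, so a query on a broad class can be
    expanded to retrieve data indexed under any of its subclasses."""
    seen, stack = set(), [cls]
    while stack:
        for child in direct.get(stack.pop(), ()):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

taxonomy = {"vehicles": {"cars", "bicycles"}, "cars": {"sedans", "vans"}}
print(transitive_subclasses(taxonomy, "vehicles"))  # all four subclasses
```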
  9. Talja, S.; Keso, H.; Pietilainen, T.: ¬The production of context in information seeking research : a metatheoretical view (1999) 0.00
    0.0049412875 = product of:
      0.0395303 = sum of:
        0.0395303 = product of:
          0.0790606 = sum of:
            0.0790606 = weight(_text_:processing in 6249) [ClassicSimilarity], result of:
              0.0790606 = score(doc=6249,freq=2.0), product of:
                0.14730503 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03638826 = queryNorm
                0.53671354 = fieldWeight in 6249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6249)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Information processing and management. 35(1999) no.6, S.751-763
  10. Oakes, M.P.; Taylor, M.J.: Automated assistance in the formulation of search statements for bibliographic databases (1998) 0.00
    0.0049412875 = product of:
      0.0395303 = sum of:
        0.0395303 = product of:
          0.0790606 = sum of:
            0.0790606 = weight(_text_:processing in 6419) [ClassicSimilarity], result of:
              0.0790606 = score(doc=6419,freq=2.0), product of:
                0.14730503 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03638826 = queryNorm
                0.53671354 = fieldWeight in 6419, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6419)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Information processing and management. 34(1998) no.6, S.645-668
  11. Hancock-Beaulieu, M.; Walker, S.: ¬An evaluation of automatic query expansion in an online library catalogue (1992) 0.00
    0.0035173206 = product of:
      0.028138565 = sum of:
        0.028138565 = weight(_text_:data in 2731) [ClassicSimilarity], result of:
          0.028138565 = score(doc=2731,freq=2.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.24455236 = fieldWeight in 2731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2731)
      0.125 = coord(1/8)
    
    Abstract
    An automatic query expansion (AQE) facility in an online catalogue was evaluated in an operational library setting. The OKAPI experimental system had other features including ranked-output 'best match' keyword searching, automatic stemming, spelling normalisation and cross-referencing, as well as relevance feedback. A combination of transaction log analysis, search replays, questionnaires and interviews was used for data collection. Findings show that, contrary to previous results, AQE was beneficial in a substantial number of searches. Use intentions, the effectiveness of the 'best match' search and user interaction were identified as the main factors affecting the take-up of the query expansion facility.
  12. Srinivasan, P.: Query expansion and MEDLINE (1996) 0.00
    0.0032941918 = product of:
      0.026353534 = sum of:
        0.026353534 = product of:
          0.05270707 = sum of:
            0.05270707 = weight(_text_:processing in 8453) [ClassicSimilarity], result of:
              0.05270707 = score(doc=8453,freq=2.0), product of:
                0.14730503 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03638826 = queryNorm
                0.35780904 = fieldWeight in 8453, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8453)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Information processing and management. 32(1996) no.4, S.431-443
  13. Kwok, K.L.: ¬A network approach to probabilistic information retrieval (1995) 0.00
    0.0030148462 = product of:
      0.02411877 = sum of:
        0.02411877 = weight(_text_:data in 5696) [ClassicSimilarity], result of:
          0.02411877 = score(doc=5696,freq=2.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.2096163 = fieldWeight in 5696, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5696)
      0.125 = coord(1/8)
    
    Abstract
    Shows how probabilistic information retrieval based on document components may be implemented as a feedforward (feedbackward) artificial neural network. The network supports adaptation of connection weights as well as the growing of new edges between queries and terms based on user relevance feedback data for training, and it reflects query modification and expansion in information retrieval. A learning rule is applied that can also be viewed as supporting sequential learning using a harmonic sequence learning rate. Experimental results with 4 standard small collections and a large Wall Street Journal collection show that small query expansion levels of about 30 terms can achieve most of the gains at the low-recall high-precision region, while larger expansion levels continue to provide gains at the high-recall low-precision region of a precision recall curve
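
    The abstract's harmonic-rate learning can be caricatured as follows. This is not Kwok's actual learning rule, only its shape: edges between query nodes and term nodes are strengthened after each round of relevance feedback, with a step size that decays as 1/n:

```python
def reinforce_edges(weights, query_terms, relevant_doc_terms, n):
    """Toy weight update: strengthen query-term edges toward terms seen in
    relevant documents, with a harmonic learning rate of 1/n."""
    rate = 1.0 / n
    for q in query_terms:
        edges = weights.setdefault(q, {})    # new edges are "grown" on demand
        for t in relevant_doc_terms:
            edges[t] = edges.get(t, 0.0) + rate * (1.0 - edges.get(t, 0.0))
    return weights

w = reinforce_edges({}, ["query", "expansion"], ["thesaurus", "feedback"], n=1)
w = reinforce_edges(w, ["query", "expansion"], ["feedback"], n=2)
```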
  14. Poynder, R.: Web research engines? (1996) 0.00
    0.0030148462 = product of:
      0.02411877 = sum of:
        0.02411877 = weight(_text_:data in 5698) [ClassicSimilarity], result of:
          0.02411877 = score(doc=5698,freq=2.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.2096163 = fieldWeight in 5698, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5698)
      0.125 = coord(1/8)
    
    Abstract
    Describes the shortcomings of search engines for the WWW, comparing their current capabilities to those of first-generation CD-ROM products. Some allow phrase searching and most are improving their Boolean searching. Few allow truncation, wild cards or nested logic. They are stateless, losing previous search criteria. Unlike the indexing and classification systems for today's CD-ROMs, those for Web pages are random, unstructured and of variable quality. Considers that, at best, Web search engines can only offer free-text searching. Discusses whether automatic data classification systems such as Infoseek Ultra can overcome the haphazard nature of the Web with neural network technology, and whether Boolean search techniques may become redundant when replaced by technology such as the Euroferret search engine. However, artificial intelligence is rarely successful on huge, varied databases. Relevance ranking and automatic query expansion still use the same simple inverted indexes. Most Web search engines do nothing more than word counting. Further complications arise with foreign languages.
  15. Fowler, R.H.; Wilson, B.A.; Fowler, W.A.L.: Information navigator : an information system using associative networks for display and retrieval (1992) 0.00
    0.0030148462 = product of:
      0.02411877 = sum of:
        0.02411877 = weight(_text_:data in 919) [ClassicSimilarity], result of:
          0.02411877 = score(doc=919,freq=2.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.2096163 = fieldWeight in 919, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=919)
      0.125 = coord(1/8)
    
    Abstract
    Document retrieval is a highly interactive process dealing with large amounts of information. Visual representations can provide both a means for managing the complexity of large information structures and an interface style well suited to interactive manipulation. The system we have designed utilizes visually displayed graphic structures and a direct manipulation interface style to supply an integrated environment for retrieval. A common visually displayed network structure is used for query, document content, and term relations. A query can be modified through direct manipulation of its visual form by incorporating terms from any other information structure the system displays. An associative thesaurus of terms and an inter-document network provide information about a document collection that can complement other retrieval aids. Visualization of these large data structures makes use of fisheye views and overview diagrams to help overcome some of the inherent difficulties of orientation and navigation in large information structures.
  16. Landauer, T.K.; Foltz, P.W.; Laham, D.: ¬An introduction to Latent Semantic Analysis (1998) 0.00
    0.0030148462 = product of:
      0.02411877 = sum of:
        0.02411877 = weight(_text_:data in 1162) [ClassicSimilarity], result of:
          0.02411877 = score(doc=1162,freq=2.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.2096163 = fieldWeight in 1162, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1162)
      0.125 = coord(1/8)
    
    Abstract
    Latent Semantic Analysis (LSA) is a theory and method for extracting and representing the contextual-usage meaning of words by statistical computations applied to a large corpus of text (Landauer and Dumais, 1997). The underlying idea is that the aggregate of all the word contexts in which a given word does and does not appear provides a set of mutual constraints that largely determines the similarity of meaning of words and sets of words to each other. The adequacy of LSA's reflection of human knowledge has been established in a variety of ways. For example, its scores overlap those of humans on standard vocabulary and subject matter tests; it mimics human word sorting and category judgments; it simulates word-word and passage-word lexical priming data; and as reported in 3 following articles in this issue, it accurately estimates passage coherence, learnability of passages by individual students, and the quality and quantity of knowledge contained in an essay.
  17. Hancock-Beaulieu, M.: Evaluating the impact of an online library catalogue on subject searching behaviour at the catalogue and at the shelves (1990) 0.00
    0.0025123719 = product of:
      0.020098975 = sum of:
        0.020098975 = weight(_text_:data in 5691) [ClassicSimilarity], result of:
          0.020098975 = score(doc=5691,freq=2.0), product of:
            0.115061514 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03638826 = queryNorm
            0.17468026 = fieldWeight in 5691, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5691)
      0.125 = coord(1/8)
    
    Abstract
    The second half of a 'before and after' study to evaluate the impact of an online catalogue on subject searching behaviour is reported. A holistic approach is adopted encompassing both catalogue use and browsing at the shelves for catalogue users and non-users. Verbal and non-verbal data were elicited from searchers using a combined methodology including talk-aloud technique, observation and a screen logging facility. An extensive qualitative analysis was carried out correlating expressed topics, search formulation strategies and documents retrieved at the shelves. The online catalogue environment does not appear to have increased the extent of subject searching nor the use of the bibliographic tool. The manual PRECIS index supported a contextual approach for broad and more interactive search formulations whereas the OPAC encouraged a matching approach and narrow formulations with fewer but user generated formulations. The success rate of the online catalogue was slightly better than that of the manual tools but fewer items were retrieved at the shelves. Non-users of the bibliographic tools seemed to be just as successful. To improve retrieval effectiveness it is suggested that online catalogues should cater for both matching and contextual approaches to searching. Recent research indicates that a more interactive process could be promoted by providing query expansion through a combination of searching aids for matching, for search formulation assistance and for structured contextual retrieval
  18. Buckley, C.; Allan, J.; Salton, G.: Automatic routing and retrieval using Smart : TREC-2 (1995) 0.00
    0.0024706437 = product of:
      0.01976515 = sum of:
        0.01976515 = product of:
          0.0395303 = sum of:
            0.0395303 = weight(_text_:processing in 5699) [ClassicSimilarity], result of:
              0.0395303 = score(doc=5699,freq=2.0), product of:
                0.14730503 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03638826 = queryNorm
                0.26835677 = fieldWeight in 5699, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5699)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Source
    Information processing and management. 31(1995) no.3, S.315-326
  19. Chen, H.; Zhang, Y.; Houston, A.L.: Semantic indexing and searching using a Hopfield net (1998) 0.00
    0.0024706437 = product of:
      0.01976515 = sum of:
        0.01976515 = product of:
          0.0395303 = sum of:
            0.0395303 = weight(_text_:processing in 5704) [ClassicSimilarity], result of:
              0.0395303 = score(doc=5704,freq=2.0), product of:
                0.14730503 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03638826 = queryNorm
                0.26835677 = fieldWeight in 5704, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5704)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    Presents a neural network approach to document semantic indexing. Reports results of a study applying a Hopfield net algorithm to simulate human associative memory for concept exploration in the domain of computer science and engineering. The INSPEC database, consisting of 320,000 abstracts from leading periodical articles, was used as the document test bed. Benchmark tests confirmed that 3 parameters (maximum number of activated nodes, maximum allowable error, and maximum number of iterations) were useful in positively influencing network convergence behaviour without negatively impacting central processing unit performance. Another series of benchmark tests was performed to determine the effectiveness of various filtering techniques in reducing the negative impact of noisy input terms. Preliminary user tests confirmed expectations that the Hopfield net is potentially useful as an associative memory technique to improve document recall and precision by solving discrepancies between indexer vocabularies and end-user vocabularies.
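
    The three convergence parameters named in the abstract map directly onto a spreading-activation loop. The sketch below is in the spirit of the Hopfield-net approach described, with the sigmoid transfer and all names chosen for illustration:

```python
import numpy as np

def hopfield_expand(W, query_idx, max_nodes=20, max_err=1e-4, max_iter=50):
    """Clamp query terms on, spread activation through the term-association
    matrix W until the total change falls below max_err or max_iter is hit,
    then return the max_nodes most strongly activated candidate terms."""
    act = np.zeros(W.shape[0])
    act[query_idx] = 1.0
    for _ in range(max_iter):                     # maximum number of iterations
        new = 1.0 / (1.0 + np.exp(-(W @ act)))    # sigmoidal transfer function
        new[query_idx] = 1.0                      # query terms stay clamped
        converged = np.abs(new - act).sum() < max_err  # maximum allowable error
        act = new
        if converged:
            break
    ranked = [i for i in np.argsort(-act) if i not in set(query_idx)]
    return ranked[:max_nodes]                     # maximum number of activated nodes
```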
  20. Efthimiadis, E.N.: End-users' understanding of thesaural knowledge structures in interactive query expansion (1994) 0.00
    0.0024650535 = product of:
      0.019720428 = sum of:
        0.019720428 = product of:
          0.039440855 = sum of:
            0.039440855 = weight(_text_:22 in 5693) [ClassicSimilarity], result of:
              0.039440855 = score(doc=5693,freq=2.0), product of:
                0.12742549 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03638826 = queryNorm
                0.30952093 = fieldWeight in 5693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5693)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    30. 3.2001 13:35:22