Search (62 results, page 1 of 4)

  • × theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  • × year_i:[1990 TO 2000}
  1. Lund, K.; Burgess, C.; Atchley, R.A.: Semantic and associative priming in high-dimensional semantic space (1995) 0.07
    0.06778517 = product of:
      0.10167775 = sum of:
        0.024176367 = weight(_text_:of in 2151) [ClassicSimilarity], result of:
          0.024176367 = score(doc=2151,freq=12.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.29624295 = fieldWeight in 2151, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2151)
        0.07750139 = sum of:
          0.028005775 = weight(_text_:science in 2151) [ClassicSimilarity], result of:
            0.028005775 = score(doc=2151,freq=2.0), product of:
              0.13747036 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.05218836 = queryNorm
              0.20372227 = fieldWeight in 2151, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2151)
          0.049495615 = weight(_text_:22 in 2151) [ClassicSimilarity], result of:
            0.049495615 = score(doc=2151,freq=2.0), product of:
              0.18275474 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05218836 = queryNorm
              0.2708308 = fieldWeight in 2151, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2151)
      0.6666667 = coord(2/3)
    
    Abstract
    We present a model of semantic memory that utilizes a high-dimensional semantic space constructed from a co-occurrence matrix. This matrix was formed by analyzing a 160 million word corpus. Word vectors were then obtained by extracting rows and columns of this matrix. These vectors were subjected to multidimensional scaling. Words were found to cluster semantically, suggesting that interword distance may be interpretable as a measure of semantic similarity. In attempting to replicate with our simulation the semantic and ...
    Source
    Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society: July 22 - 25, 1995, University of Pittsburgh / ed. by Johanna D. Moore and Jill Fain Lehman
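
The score breakdown shown for this result follows Lucene's ClassicSimilarity (tf-idf): tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and a final coord factor for the fraction of query clauses that matched. A minimal sketch that reproduces the "of" term weight and the document score above from the listed inputs:

```python
import math

def classic_tfidf_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's weight in a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                               # 3.4641016 for freq=12
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 1.5637573 for docFreq=25162
    query_weight = idf * query_norm                    # 0.08160993
    field_weight = tf * idf * field_norm               # 0.29624295
    return query_weight * field_weight                 # 0.024176367

# Inputs read off the breakdown of result 1 ("of" in doc 2151):
w_of = classic_tfidf_weight(freq=12.0, doc_freq=25162, max_docs=44218,
                            query_norm=0.05218836, field_norm=0.0546875)

# The document score sums the matching clause weights, then applies
# coord(2/3), the fraction of query clauses that matched:
score = (w_of + 0.07750139) * (2.0 / 3.0)
print(round(w_of, 9), round(score, 8))  # ~0.024176367 ~0.06778517
```

The same arithmetic applies to every breakdown in this listing; only freq, docFreq, idf, and fieldNorm change from entry to entry.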
  2. Efthimiadis, E.N.: End-users' understanding of thesaural knowledge structures in interactive query expansion (1994) 0.04
    0.035670638 = product of:
      0.053505957 = sum of:
        0.025222747 = weight(_text_:of in 5693) [ClassicSimilarity], result of:
          0.025222747 = score(doc=5693,freq=10.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.3090647 = fieldWeight in 5693, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5693)
        0.028283209 = product of:
          0.056566417 = sum of:
            0.056566417 = weight(_text_:22 in 5693) [ClassicSimilarity], result of:
              0.056566417 = score(doc=5693,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.30952093 = fieldWeight in 5693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5693)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The process of term selection for query expansion by end-users is discussed within the context of a study of interactive query expansion in a relevance feedback environment. This user study focuses on how users perceive and understand term relationships, such as hierarchical and associative relationships, in their searches.
    Date
    30. 3.2001 13:35:22
    Source
    Knowledge organization and quality management: Proc. of the 3rd International ISKO Conference, 20-24 June 1994, Copenhagen, Denmark. Ed.: H. Albrechtsen et al
  3. Fieldhouse, M.; Hancock-Beaulieu, M.: ¬The design of a graphical user interface for a highly interactive information retrieval system (1996) 0.04
    0.035109516 = product of:
      0.052664272 = sum of:
        0.027916465 = weight(_text_:of in 6958) [ClassicSimilarity], result of:
          0.027916465 = score(doc=6958,freq=16.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.34207192 = fieldWeight in 6958, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6958)
        0.024747808 = product of:
          0.049495615 = sum of:
            0.049495615 = weight(_text_:22 in 6958) [ClassicSimilarity], result of:
              0.049495615 = score(doc=6958,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.2708308 = fieldWeight in 6958, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6958)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Reports on the design of a GUI for the Okapi 'best match' retrieval system developed at the Centre for Interactive Systems Research, City University, UK, for online library catalogues. The X-Windows interface includes an interactive query expansion (IQE) facility which involves the user in the selection of query terms to reformulate a search. Presents the design rationale, based on a game board metaphor, and describes the features of each of the stages of the search interaction. Reports on the early operational field trial and discusses relevant evaluation issues and objectives.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  4. Ekmekcioglu, F.C.; Robertson, A.M.; Willett, P.: Effectiveness of query expansion in ranked-output document retrieval systems (1992) 0.03
    0.029088955 = product of:
      0.04363343 = sum of:
        0.027630134 = weight(_text_:of in 5689) [ClassicSimilarity], result of:
          0.027630134 = score(doc=5689,freq=12.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.33856338 = fieldWeight in 5689, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5689)
        0.0160033 = product of:
          0.0320066 = sum of:
            0.0320066 = weight(_text_:science in 5689) [ClassicSimilarity], result of:
              0.0320066 = score(doc=5689,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.23282544 = fieldWeight in 5689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5689)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Reports an evaluation of 3 methods for the expansion of natural language queries in ranked output retrieval systems. The methods are based on term co-occurrence data, on Soundex codes, and on a string similarity measure. Searches for 110 queries in a database of 26,280 titles and abstracts suggest that there is no significant difference in retrieval effectiveness between any of these methods and unexpanded searches.
    Source
    Journal of information science. 18(1992) no.2, S.139-147
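
Result 4's second expansion method groups query terms with vocabulary terms that share a Soundex code. A sketch of classic Soundex and its use for expansion (the paper's exact variant may differ):

```python
def soundex(word: str) -> str:
    """Classic 4-character Soundex code (a sketch; the paper's variant may differ)."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    first, tail = word[0], word[1:]
    digits = []
    prev = codes.get(first, "")
    for ch in tail:
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        if ch not in "hw":            # h and w do not break the duplicate rule
            prev = code
    return (first.upper() + "".join(digits) + "000")[:4]

# Query expansion by shared code: misspellings map to the same code R361.
vocab = ["retrieval", "retreival", "retrievel", "ranking"]
query = "retrieval"
print([t for t in vocab if soundex(t) == soundex(query)])
```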
  5. Chen, H.; Zhang, Y.; Houston, A.L.: Semantic indexing and searching using a Hopfield net (1998) 0.03
    0.028235972 = product of:
      0.042353958 = sum of:
        0.025379896 = weight(_text_:of in 5704) [ClassicSimilarity], result of:
          0.025379896 = score(doc=5704,freq=18.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.3109903 = fieldWeight in 5704, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5704)
        0.016974064 = product of:
          0.033948127 = sum of:
            0.033948127 = weight(_text_:science in 5704) [ClassicSimilarity], result of:
              0.033948127 = score(doc=5704,freq=4.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.24694869 = fieldWeight in 5704, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5704)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Presents a neural network approach to document semantic indexing. Reports results of a study to apply a Hopfield net algorithm to simulate human associative memory for concept exploration in the domain of computer science and engineering. The INSPEC database, consisting of 320,000 abstracts from leading periodical articles, was used as the document test bed. Benchmark tests confirmed that 3 parameters (maximum number of activated nodes, maximum allowable error, and maximum number of iterations) were useful in positively influencing network convergence behaviour without negatively impacting central processing unit performance. Another series of benchmark tests was performed to determine the effectiveness of various filtering techniques in reducing the negative impact of noisy input terms. Preliminary user tests confirmed expectations that the Hopfield net is potentially useful as an associative memory technique to improve document recall and precision by solving discrepancies between indexer vocabularies and end user vocabularies.
    Source
    Journal of information science. 24(1998) no.1, S.3-18
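
Result 5's Hopfield net treats terms as nodes in an association network and spreads activation out from the query terms until the network converges; the three parameters named in the abstract bound that process. A minimal, illustrative sketch (the weights, sigmoid transfer function, and thresholding here are assumptions, not the paper's trained network):

```python
import numpy as np

def hopfield_expand(weights, seeds, max_nodes=5, max_error=1e-3, max_iter=50):
    """Spreading activation over a term-association network, Hopfield style.
    weights[i, j] holds the association strength between terms i and j
    (e.g. from co-occurrence analysis). max_nodes, max_error and max_iter
    mirror the three convergence parameters named in the abstract."""
    act = np.zeros(weights.shape[0])
    act[seeds] = 1.0                                  # clamp the query terms
    for _ in range(max_iter):
        new = 1.0 / (1.0 + np.exp(-(weights @ act)))  # sigmoid transfer
        new[seeds] = 1.0                              # keep seeds fully active
        cutoff = np.sort(new)[-max_nodes]             # keep strongest activations
        new = np.where(new >= cutoff, new, 0.0)
        if np.abs(new - act).sum() <= max_error:      # converged within tolerance
            return new
        act = new
    return act

# Toy symmetric association matrix over 6 terms; expand from term 0.
rng = np.random.default_rng(0)
W = rng.random((6, 6)) * 0.3
W = (W + W.T) / 2
activations = hopfield_expand(W, seeds=[0])
print(np.argsort(-activations)[:5])  # the expanded term set, strongest first
```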
  6. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.03
    0.028065886 = product of:
      0.042098828 = sum of:
        0.02442182 = weight(_text_:of in 5697) [ClassicSimilarity], result of:
          0.02442182 = score(doc=5697,freq=24.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2992506 = fieldWeight in 5697, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5697)
        0.017677005 = product of:
          0.03535401 = sum of:
            0.03535401 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
              0.03535401 = score(doc=5697,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.19345059 = fieldWeight in 5697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5697)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The performance of 8 ranking algorithms was evaluated with respect to their effectiveness in ranking terms for query expansion. The evaluation was conducted within an investigation of interactive query expansion and relevance feedback in a real operational environment. Focuses on the identification of algorithms that most effectively take cognizance of user preferences. User choices (i.e. the terms selected by the searchers for the query expansion search) provided the yardstick for the evaluation of the 8 ranking algorithms. This methodology introduces a user-oriented approach to evaluating ranking algorithms for query expansion, in contrast to the standard, system-oriented approaches. Similarities in the performance of the 8 algorithms and the ways these algorithms rank terms were the main focus of this evaluation. The findings demonstrate that the r-lohi, wpq, emim, and porter algorithms have similar performance in bringing good terms to the top of a ranked list of terms for query expansion. However, further evaluation of the algorithms in different (e.g. full text) environments is needed before these results can be generalized beyond the context of the present study.
    Date
    22. 2.1996 13:14:10
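
Among the algorithms result 6 names, wpq is Robertson's probabilistic term-ranking weight. As a hedged sketch of its commonly cited textbook form (consult the paper for the exact variants evaluated):

```python
import math

def wpq(r, R, n, N):
    """Textbook form of Robertson's wpq term-ranking weight.
    r: relevant docs containing the term, R: relevant docs judged,
    n: docs containing the term, N: documents in the collection."""
    # Robertson/Sparck Jones relevance weight, with the usual 0.5 smoothing
    rw = math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                  ((n - r + 0.5) * (R - r + 0.5)))
    # scaled by the difference between the term's occurrence probability
    # in relevant and in non-relevant documents
    return rw * (r / R - (n - r) / (N - R))

# Toy figures: the term occurs in 8 of 10 judged-relevant documents
# and in 200 of 10,000 documents overall.
print(round(wpq(r=8, R=10, n=200, N=10_000), 3))
```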
  7. Nagao, M.: Knowledge and inference (1990) 0.03
    0.0278306 = product of:
      0.0417459 = sum of:
        0.02442182 = weight(_text_:of in 3304) [ClassicSimilarity], result of:
          0.02442182 = score(doc=3304,freq=24.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.2992506 = fieldWeight in 3304, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3304)
        0.01732408 = product of:
          0.03464816 = sum of:
            0.03464816 = weight(_text_:science in 3304) [ClassicSimilarity], result of:
              0.03464816 = score(doc=3304,freq=6.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.25204095 = fieldWeight in 3304, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3304)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Knowledge and Inference discusses an important problem for software systems: How do we treat knowledge and ideas on a computer and how do we use inference to solve problems on a computer? The book talks about the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. The book begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in artificial intelligence: search and problem solving, methods of making proofs, and the use of knowledge in looking for a proof. There is also a discussion of how to use the knowledge system. The final chapter describes a popular expert system. It describes tools for building expert systems using an example based on Expert Systems: A Practical Introduction by P. Sell (Macmillan, 1985). This type of software is called an "expert system shell." This book was written as a textbook for undergraduate students, covering only the basics but explaining as much detail as possible.
    LCSH
    Knowledge, Theory of
    Subject
    Knowledge, Theory of
  8. Efthimiadis, E.N.: Query expansion (1996) 0.03
    0.02748403 = product of:
      0.041226044 = sum of:
        0.025222747 = weight(_text_:of in 4847) [ClassicSimilarity], result of:
          0.025222747 = score(doc=4847,freq=10.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.3090647 = fieldWeight in 4847, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4847)
        0.0160033 = product of:
          0.0320066 = sum of:
            0.0320066 = weight(_text_:science in 4847) [ClassicSimilarity], result of:
              0.0320066 = score(doc=4847,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.23282544 = fieldWeight in 4847, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4847)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    State of the art review of query expansion (or term expansion) as the process of supplementing the original query with additional terms in order to improve retrieval performance. Research in the subject is presented in a highly structured way, organized according to 3 types of query expansion: manual, automatic, and interactive query expansion.
    Source
    Annual review of information science and technology. 31(1996), S.121-187
  9. Hancock-Beaulieu, M.: Query expansion : advances in research in online catalogues (1992) 0.03
    0.026629638 = product of:
      0.039944455 = sum of:
        0.019940332 = weight(_text_:of in 2351) [ClassicSimilarity], result of:
          0.019940332 = score(doc=2351,freq=4.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.24433708 = fieldWeight in 2351, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=2351)
        0.020004123 = product of:
          0.040008247 = sum of:
            0.040008247 = weight(_text_:science in 2351) [ClassicSimilarity], result of:
              0.040008247 = score(doc=2351,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.2910318 = fieldWeight in 2351, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2351)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Query expansion is the process of supplementing or replacing the original query terms with additional terms, either at the search formulation or search reformulation stage. Different approaches to implementing query expansion are considered in three online catalogues.
    Source
    Journal of information science. 18(1992), S.99-103
  10. Järvelin, K.; Niemi, T.: Deductive information retrieval based on classifications (1993) 0.03
    0.025836824 = product of:
      0.038755234 = sum of:
        0.02675276 = weight(_text_:of in 2229) [ClassicSimilarity], result of:
          0.02675276 = score(doc=2229,freq=20.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.32781258 = fieldWeight in 2229, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2229)
        0.012002475 = product of:
          0.02400495 = sum of:
            0.02400495 = weight(_text_:science in 2229) [ClassicSimilarity], result of:
              0.02400495 = score(doc=2229,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.17461908 = fieldWeight in 2229, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2229)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Modern fact databases contain abundant data classified through several classifications. Typically, users must consult these classifications in separate manuals or files, thus making their effective use difficult. Contemporary database systems provide little support for the deductive use of classifications. In this study we show how deductive data management techniques can be applied to the utilization of data value classifications. Computation of transitive class relationships is of primary importance here. We define a representation of classifications which supports transitive computation and present an operation-oriented deductive query language tailored for classification-based deductive information retrieval. The operations of this language are on the same abstraction level as relational algebra operations and can be integrated with these to form a powerful and flexible query language for deductive information retrieval. We define the integration of these operations and demonstrate the usefulness of the language in terms of several sample queries.
    Source
    Journal of the American Society for Information Science. 44(1993) no.10, S.557-578
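
The transitive class relationships central to result 10 amount to a reachability computation over the classification hierarchy. A minimal sketch with a hypothetical parent-to-children mapping (not the paper's query language):

```python
from collections import deque

def transitive_subclasses(hierarchy, cls):
    """All classes reachable downward from cls: the transitive closure
    of a classification's broader/narrower relation."""
    seen, queue = set(), deque([cls])
    while queue:
        current = queue.popleft()
        for child in hierarchy.get(current, ()):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# Toy classification: querying on "science" should also retrieve records
# classified under any transitive subclass.
hierarchy = {"science": ["physics", "biology"],
             "biology": ["genetics", "ecology"]}
closure = transitive_subclasses(hierarchy, "science") | {"science"}

# A deductive retrieval operation can then match documents against the closure:
docs = {"d1": "genetics", "d2": "history"}
print([d for d, c in docs.items() if c in closure])  # ['d1']
```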
  11. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.03
    0.025804028 = product of:
      0.038706042 = sum of:
        0.0139582325 = weight(_text_:of in 1319) [ClassicSimilarity], result of:
          0.0139582325 = score(doc=1319,freq=4.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.17103596 = fieldWeight in 1319, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1319)
        0.024747808 = product of:
          0.049495615 = sum of:
            0.049495615 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
              0.049495615 = score(doc=1319,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.2708308 = fieldWeight in 1319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Keyword-based querying has been an immediate and efficient way to specify and retrieve the information a user is looking for. However, conventional document ranking based on an automatic assessment of document relevance to the query may not be the best approach when little information is given. Proposes integrating 2 existing techniques, query expansion and relevance feedback, to achieve a concept-based information search for the Web.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
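
Result 11 combines query expansion with relevance feedback; the classic vector-space mechanism for folding feedback into a query is Rocchio's formula, sketched below in its standard textbook form (the paper's own integration method may differ):

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: move the query vector toward relevant
    documents and away from non-relevant ones (standard formulation)."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.maximum(q, 0.0)   # negative term weights are usually dropped

# Toy 4-term vocabulary; feedback sharpens the query toward the relevant docs.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
rel = np.array([[0.9, 0.8, 0.0, 0.0], [0.7, 0.6, 0.1, 0.0]])
nonrel = np.array([[0.0, 0.0, 0.9, 0.8]])
print(rocchio(q0, rel, nonrel))  # second term gains weight: implicit expansion
```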
  12. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.02
    0.022117738 = product of:
      0.033176605 = sum of:
        0.011964198 = weight(_text_:of in 2230) [ClassicSimilarity], result of:
          0.011964198 = score(doc=2230,freq=4.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.14660224 = fieldWeight in 2230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2230)
        0.021212406 = product of:
          0.042424813 = sum of:
            0.042424813 = weight(_text_:22 in 2230) [ClassicSimilarity], result of:
              0.042424813 = score(doc=2230,freq=2.0), product of:
                0.18275474 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05218836 = queryNorm
                0.23214069 = fieldWeight in 2230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2230)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present a deductive data model for concept-based query expansion. It is based on three abstraction levels: the conceptual, linguistic and occurrence levels. Concepts and relationships among them are represented at the conceptual level. The linguistic level represents natural language expressions for concepts. Each expression has one or more matching models at the occurrence level. Each model specifies the matching of the expression in database indices built in varying ways. The data model supports a concept-based query expansion and formulation tool, the ExpansionTool, for environments providing heterogeneous IR systems. Expansion is controlled by adjustable matching reliability.
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
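
Result 12's three abstraction levels (concepts, their natural-language expressions, and per-index matching models) can be sketched as a small data structure; all field names and the expand helper below are illustrative assumptions, not the ExpansionTool's actual schema:

```python
# A hypothetical encoding of the three abstraction levels described above.
concept_level = {   # conceptual level: concepts and relationships among them
    "information retrieval": {"narrower": ["query expansion"]},
    "query expansion": {"narrower": []},
}
expression_level = {  # linguistic level: natural-language expressions per concept
    "information retrieval": ["information retrieval", "IR"],
    "query expansion": ["query expansion", "term expansion"],
}
matching_models = {   # occurrence level: per-index matching, with reliability
    "query expansion": [("phrase", 1.0), ("stemmed-and", 0.7)],
}

def expand(concept, follow_narrower=True):
    """Collect expressions for a concept (one level of narrower concepts,
    for brevity), to be matched per the occurrence-level models."""
    concepts = [concept]
    if follow_narrower:
        concepts += concept_level.get(concept, {}).get("narrower", [])
    return [e for c in concepts for e in expression_level.get(c, [])]

print(expand("information retrieval"))
```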
  13. Schwartz, C.: Web search engines (1998) 0.02
    0.021816716 = product of:
      0.032725073 = sum of:
        0.020722598 = weight(_text_:of in 5700) [ClassicSimilarity], result of:
          0.020722598 = score(doc=5700,freq=12.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.25392252 = fieldWeight in 5700, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5700)
        0.012002475 = product of:
          0.02400495 = sum of:
            0.02400495 = weight(_text_:science in 5700) [ClassicSimilarity], result of:
              0.02400495 = score(doc=5700,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.17461908 = fieldWeight in 5700, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5700)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This review looks briefly at the history of WWW search engine development, considers the current state of affairs, and reflects on the future. Networked discovery tools have evolved along with Internet resource availability. WWW search engines display some complexity in their variety, content, resource acquisition strategies, and in the array of tools they deploy to assist users. A small but growing body of evaluation literature, much of it not systematic in nature, indicates that performance effectiveness is difficult to assess in this setting. Significant improvements in general-content search engine retrieval and ranking performance may not be possible, and are probably not worth the effort, although search engine providers have introduced some rudimentary attempts at personalization, summarization, and query expansion. The shift to distributed search across multitype database systems could extend general networked discovery and retrieval to include smaller resource collections with rich metadata and navigation tools.
    Source
    Journal of the American Society for Information Science. 49(1998) no.11, S.973-982
  14. Tseng, Y.-H.: Solving vocabulary problems with interactive query expansion (1998) 0.02
    0.019103024 = product of:
      0.028654534 = sum of:
        0.018652473 = weight(_text_:of in 5159) [ClassicSimilarity], result of:
          0.018652473 = score(doc=5159,freq=14.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.22855641 = fieldWeight in 5159, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5159)
        0.010002062 = product of:
          0.020004123 = sum of:
            0.020004123 = weight(_text_:science in 5159) [ClassicSimilarity], result of:
              0.020004123 = score(doc=5159,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.1455159 = fieldWeight in 5159, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5159)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    One of the major causes of search failures in information retrieval systems is vocabulary mismatch. Presents a solution to the vocabulary problem through 2 strategies known as term suggestion (TS) and term relevance feedback (TRF). In TS, collection-specific terms are extracted from the text collection. These terms and their frequencies constitute the keyword database for suggesting terms in response to users' queries. One effect of this term suggestion is that it functions as a dynamic directory if the query is a general term that carries a broad meaning. In term relevance feedback, terms extracted from the top-ranked documents retrieved by the previous query are shown to users for relevance feedback. In the experiment, interactive TS provides very high precision rates while achieving recall rates similar to those of n-gram matching. Local TRF achieves improvement in both precision and recall rate in a full-text news database and degrades slightly in recall rate in bibliographic databases due to the very limited source of information for feedback. In terms of van Rijsbergen's combined measure of recall and precision, both TS and TRF achieve better performance than n-gram matching, which implies that the greater improvement in precision rate compensates for the slight degradation in recall rate for TS and TRF.
    Source
    Journal of library and information science. 24(1998) no.1, S.1-18
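
Result 14 reports effectiveness in terms of van Rijsbergen's combined measure of recall and precision; in its balanced form this is the familiar F-score (van Rijsbergen's E = 1 - F):

```python
def f_measure(precision, recall, beta=1.0):
    """van Rijsbergen's combined effectiveness measure in its F form
    (beta weights recall relative to precision; E = 1 - F)."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A higher-precision run can beat an even run on the combined score:
print(round(f_measure(0.80, 0.55), 3))  # TS-style: precision up, recall similar
print(round(f_measure(0.60, 0.60), 3))
```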
  15. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.01
    0.0136416275 = product of:
      0.02046244 = sum of:
        0.008459966 = weight(_text_:of in 5202) [ClassicSimilarity], result of:
          0.008459966 = score(doc=5202,freq=2.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.103663445 = fieldWeight in 5202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5202)
        0.012002475 = product of:
          0.02400495 = sum of:
            0.02400495 = weight(_text_:science in 5202) [ClassicSimilarity], result of:
              0.02400495 = score(doc=5202,freq=2.0), product of:
                0.13747036 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.05218836 = queryNorm
                0.17461908 = fieldWeight in 5202, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5202)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Journal of the American Society for Information Science. 49(1998) no.3, S.206-216
  16. Landauer, T.K.; Foltz, P.W.; Laham, D.: ¬An introduction to Latent Semantic Analysis (1998) 0.01
    0.010167614 = product of:
      0.03050284 = sum of:
        0.03050284 = weight(_text_:of in 1162) [ClassicSimilarity], result of:
          0.03050284 = score(doc=1162,freq=26.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.37376386 = fieldWeight in 1162, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1162)
      0.33333334 = coord(1/3)
    
    Abstract
    Latent Semantic Analysis (LSA) is a theory and method for extracting and representing the contextual-usage meaning of words by statistical computations applied to a large corpus of text (Landauer and Dumais, 1997). The underlying idea is that the aggregate of all the word contexts in which a given word does and does not appear provides a set of mutual constraints that largely determines the similarity of meaning of words and sets of words to each other. The adequacy of LSA's reflection of human knowledge has been established in a variety of ways. For example, its scores overlap those of humans on standard vocabulary and subject matter tests; it mimics human word sorting and category judgments; it simulates word-word and passage-word lexical priming data; and as reported in 3 following articles in this issue, it accurately estimates passage coherence, learnability of passages by individual students, and the quality and quantity of knowledge contained in an essay.
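
The LSA procedure result 16 describes (a large word-by-context matrix reduced by singular value decomposition, so that words acquire vectors whose proximity reflects similarity of meaning) can be sketched in a few lines. The toy counts below are illustrative, and real LSA also applies a log-entropy weighting before the SVD:

```python
import numpy as np

# Toy word-by-context co-occurrence counts (rows: words, columns: passages).
words = ["doctor", "nurse", "hospital", "guitar"]
X = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, 0.0],
              [2.0, 2.0, 1.0],
              [0.0, 0.0, 5.0]])

# Truncated SVD gives each word a k-dimensional semantic vector.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

i_doc, i_nurse, i_guitar = (words.index(w) for w in ("doctor", "nurse", "guitar"))
print(round(cos(word_vecs[i_doc], word_vecs[i_nurse]), 3))   # related words: high
print(round(cos(word_vecs[i_doc], word_vecs[i_guitar]), 3))  # unrelated words: low
```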
  17. Beaulieu, M.; Payne, A.; Do, T.; Jones, S.: ENQUIRE Okapi project (1996) 0.01
    0.009352845 = product of:
      0.028058534 = sum of:
        0.028058534 = weight(_text_:of in 3369) [ClassicSimilarity], result of:
          0.028058534 = score(doc=3369,freq=22.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.34381276 = fieldWeight in 3369, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3369)
      0.33333334 = coord(1/3)
    
    Abstract
    The ENQUIRE project forms part of a series of investigations on query expansion in the Okapi experimental text retrieval system. A configurable user interface was implemented as an evaluative tool and tested in two locations on two different databases: the library catalogue of the London Business School and the computing section of INSPEC. The system offered a range of possible strategies based on thesaural terms for reformulating queries. These could be initiated automatically by the system or interactively with the user. The formative phase of the evaluation established the appropriateness and usability of the interface as well as users' perceptions of the underlying functionality. The aim of the large-scale field trial was to determine to what extent users would select thesaural terms suggested by the system to reformulate queries, and to evaluate the effectiveness of a new dynamic form of query expansion implemented for this project.
  18. Spiteri, L.F.: ¬The essential elements of faceted thesauri (1999) 0.01
    0.009352845 = product of:
      0.028058534 = sum of:
        0.028058534 = weight(_text_:of in 5362) [ClassicSimilarity], result of:
          0.028058534 = score(doc=5362,freq=22.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.34381276 = fieldWeight in 5362, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5362)
      0.33333334 = coord(1/3)
    
    Abstract
    The goal of this study is to evaluate, compare, and contrast how facet analysis is used to construct the systematic or faceted displays of a selection of information retrieval thesauri. More specifically, the study seeks to examine which principles of facet analysis are used in the thesauri, and the extent to which different thesauri apply these principles in the same way. A measuring instrument was designed for the purpose of evaluating the structure of faceted thesauri. This instrument was applied to fourteen faceted information retrieval thesauri. The study reveals that the thesauri do not share a common definition of what constitutes a facet. In some cases, the thesauri apply both enumerative-style classification and facet analysis to arrange their indexing terms. A number of the facets used in the thesauri are not homogeneous or mutually exclusive. The principle of synthesis is used in only 50% of the thesauri, and no one citation order is used consistently by the thesauri.
  19. Lobin, H.; Witt, A.: Semantic and thematic navigation in electronic encyclopedias (1999) 0.01
    0.009305488 = product of:
      0.027916465 = sum of:
        0.027916465 = weight(_text_:of in 624) [ClassicSimilarity], result of:
          0.027916465 = score(doc=624,freq=16.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.34207192 = fieldWeight in 624, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=624)
      0.33333334 = coord(1/3)
    
    Abstract
    In the field of electronic publishing, encyclopedias represent a unique sort of text for investigating advanced methods of navigation. The user of an electronic encyclopedia normally expects special methods for accessing the entries in an encyclopedia database. Navigation through printed encyclopedias in the traditional sense focuses on the alphabetic order of the entries. In electronic encyclopedias, however, thematic structuring of lemmas and, of course, extensive (hyper-)linking mechanisms have been added. This paper will focus on showing developments that go beyond these navigational structures. We will concentrate on the semantic space formed by lemmas to build a network of semantic distances and thematic trails through the encyclopedia.
  20. Robertson, A.M.; Willett, P.: Applications of n-grams in textual information systems (1998) 0.01
    0.0092100445 = product of:
      0.027630134 = sum of:
        0.027630134 = weight(_text_:of in 4715) [ClassicSimilarity], result of:
          0.027630134 = score(doc=4715,freq=12.0), product of:
            0.08160993 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.05218836 = queryNorm
            0.33856338 = fieldWeight in 4715, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4715)
      0.33333334 = coord(1/3)
    
    Abstract
    Provides an introduction to the use of n-grams in textual information systems, where an n-gram is a string of n, usually adjacent, characters extracted from a section of continuous text. Applications that can be implemented efficiently and effectively using sets of n-grams include spelling error detection and correction, query expansion, information retrieval with serial, inverted and signature files, dictionary look-up, text compression, and language identification.
    Source
    Journal of documentation. 54(1998) no.1, S.48-69
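
The n-gram sets result 20 surveys, together with a standard set-based string similarity over them (the Dice coefficient, a common choice for the spelling-correction application listed), can be sketched as:

```python
def ngrams(text, n=2):
    """The set of n-character substrings of a string (here: bigrams)."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def dice(a, b, n=2):
    """Dice coefficient over n-gram sets: 2|A & B| / (|A| + |B|)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# Spelling error detection/correction: rank dictionary terms by similarity.
dictionary = ["retrieval", "retrial", "reversal", "interval"]
query = "retreival"   # misspelled
print(sorted(dictionary, key=lambda t: dice(query, t), reverse=True))
# 'retrieval' ranks first: it shares the most bigrams with the misspelling.
```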

Languages

  • e 58
  • chi 1
  • d 1
  • f 1

Types

  • a 54
  • el 5
  • r 4
  • m 3
  • p 1
