Search (362 results, page 2 of 19)

  • × theme_ss:"Retrievalalgorithmen"
  1. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.02
    0.022984518 = product of:
      0.05746129 = sum of:
        0.013485395 = weight(_text_:a in 2134) [ClassicSimilarity], result of:
          0.013485395 = score(doc=2134,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.25222903 = fieldWeight in 2134, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=2134)
        0.043975897 = product of:
          0.087951794 = sum of:
            0.087951794 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.087951794 = score(doc=2134,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    30. 3.2001 13:32:22
    Type
    a
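The score breakdowns in this listing follow Lucene's ClassicSimilarity (TF-IDF) explain format: each matching clause contributes queryWeight × fieldWeight, where tf = √freq, queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and coord factors scale the sum by the fraction of query clauses that matched. A minimal sketch (constants copied from result 1 above) reproduces the arithmetic:

```python
import math

def classic_sim_clause(freq: float, idf: float, field_norm: float, query_norm: float) -> float:
    """Score contribution of one term clause under Lucene ClassicSimilarity:
    queryWeight * fieldWeight = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    tf = math.sqrt(freq)          # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.046368346  # shared queryNorm from the explanations above

# Result 1 (doc 2134): two matching clauses, fieldNorm = 0.109375
w_a  = classic_sim_clause(freq=4.0, idf=1.153047,  field_norm=0.109375, query_norm=QUERY_NORM)
w_22 = classic_sim_clause(freq=2.0, idf=3.5018296, field_norm=0.109375, query_norm=QUERY_NORM)

# The "22" clause sits inside a nested disjunction, so coord(1/2) halves it;
# the outer coord(2/5) scales the sum because 2 of 5 query clauses matched.
score = 0.4 * (w_a + 0.5 * w_22)
print(round(score, 9))  # ~0.022984518, matching the displayed score
```

The same recipe reproduces every explanation tree on this page; only freq, idf, and fieldNorm vary per document and field.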
  2. Furner, J.: ¬A unifying model of document relatedness for hybrid search engines (2003) 0.02
    0.022135837 = product of:
      0.055339593 = sum of:
        0.008173384 = weight(_text_:a in 2717) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2717,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2717, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2717)
        0.04716621 = sum of:
          0.009472587 = weight(_text_:information in 2717) [ClassicSimilarity], result of:
            0.009472587 = score(doc=2717,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.116372846 = fieldWeight in 2717, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=2717)
          0.037693623 = weight(_text_:22 in 2717) [ClassicSimilarity], result of:
            0.037693623 = score(doc=2717,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 2717, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2717)
      0.4 = coord(2/5)
    
    Abstract
Previous work on search-engine design has indicated that information-seekers may benefit from being given the opportunity to exploit multiple sources of evidence of document relatedness. Few existing systems, however, give users more than minimal control over the selections that may be made among methods of exploitation. By applying the methods of "document network analysis" (DNA), a unifying, graph-theoretic model of content-, collaboration-, and context-based systems (CCC) may be developed in which the nature of the similarities between types of document relatedness and document ranking is clarified. The usefulness of the approach to system design suggested by this model may be tested by constructing and evaluating a prototype system (UCXtra) that allows searchers to maintain control over the multiple ways in which document collections may be ranked and re-ranked.
    Date
    11. 9.2004 17:32:22
    Type
    a
  3. Joss, M.W.; Wszola, S.: ¬The engines that can : text search and retrieval software, their strategies, and vendors (1996) 0.02
    0.021178266 = product of:
      0.052945666 = sum of:
        0.005779455 = weight(_text_:a in 5123) [ClassicSimilarity], result of:
          0.005779455 = score(doc=5123,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 5123, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5123)
        0.04716621 = sum of:
          0.009472587 = weight(_text_:information in 5123) [ClassicSimilarity], result of:
            0.009472587 = score(doc=5123,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.116372846 = fieldWeight in 5123, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046875 = fieldNorm(doc=5123)
          0.037693623 = weight(_text_:22 in 5123) [ClassicSimilarity], result of:
            0.037693623 = score(doc=5123,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.23214069 = fieldWeight in 5123, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5123)
      0.4 = coord(2/5)
    
    Abstract
Traces the development of text searching and retrieval software designed to cope with the increasing demands made by the storage and handling of large amounts of data, recorded on high-capacity storage media, from CD-ROM to multi-gigabyte storage media and online information services, with particular reference to the need to cope with graphics as well as conventional ASCII text. Includes details of: Boolean searching; fuzzy searching and matching; relevance ranking; proximity searching; and improved strategies for dealing with text searching in very large databases. Concludes that the best searching tools for CD-ROM publishers are those optimized for searching and retrieval on CD-ROM. CD-ROM drives have markedly higher random seek times than hard discs, so the software most appropriate to the medium is that which can effectively arrange the indexes and text on the CD-ROM to avoid continuous random-access searching. Lists and reviews a selection of software packages designed to achieve the sort of results required for rapid CD-ROM searching.
    Date
    12. 9.1996 13:56:22
    Type
    a
  4. Burgin, R.: ¬The retrieval effectiveness of 5 clustering algorithms as a function of indexing exhaustivity (1995) 0.02
    0.019326193 = product of:
      0.048315484 = sum of:
        0.009010308 = weight(_text_:a in 3365) [ClassicSimilarity], result of:
          0.009010308 = score(doc=3365,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 3365, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3365)
        0.039305177 = sum of:
          0.007893822 = weight(_text_:information in 3365) [ClassicSimilarity], result of:
            0.007893822 = score(doc=3365,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.09697737 = fieldWeight in 3365, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3365)
          0.031411353 = weight(_text_:22 in 3365) [ClassicSimilarity], result of:
            0.031411353 = score(doc=3365,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 3365, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3365)
      0.4 = coord(2/5)
    
    Abstract
The retrieval effectiveness of 5 hierarchical clustering methods (single link, complete link, group average, Ward's method, and weighted average) is examined as a function of indexing exhaustivity with 4 test collections (CR, Cranfield, Medlars, and Time). Evaluations of retrieval effectiveness, based on 3 measures of optimal retrieval performance, confirm earlier findings that the performance of a retrieval system based on single link clustering varies as a function of indexing exhaustivity, but fail to find similar patterns for the other clustering methods. The data also confirm earlier findings regarding the poor performance of single link clustering in a retrieval environment. The poor performance of single link clustering appears to derive from that method's tendency to produce a small number of large, ill-defined document clusters. By contrast, the data examined here found the retrieval performance of the other clustering methods to be generally comparable. The data presented also provide an opportunity to examine the theoretical limits of cluster-based retrieval and to compare these theoretical limits to the effectiveness of operational implementations. Performance standards of the 4 document collections examined were found to vary widely, and the effectiveness of operational implementations was found to be in the range defined as unacceptable. Further improvements in search strategies and document representations warrant investigation.
    Date
    22. 2.1996 11:20:06
    Source
    Journal of the American Society for Information Science. 46(1995) no.8, S.562-572
    Type
    a
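The chaining behaviour Burgin attributes to single-link clustering follows directly from its merge criterion: the distance between two clusters is the distance between their *closest* members, so one borderline document can bridge otherwise distinct groups into a large, ill-defined cluster. A toy pure-Python sketch of single-link agglomerative clustering (illustrative only; not the paper's implementation or collections):

```python
def single_link_clusters(dist, k):
    """Agglomerative single-link clustering down to k clusters.
    dist: symmetric matrix of pairwise distances.
    Returns a list of clusters, each a set of item indices.
    Inter-cluster distance = min over all cross-cluster pairs (single link),
    which is what produces the chaining effect discussed in the abstract."""
    clusters = [{i} for i in range(len(dist))]
    while len(clusters) > k:
        best = None  # (distance, index_a, index_b) of the closest pair of clusters
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]   # merge the closest pair
        del clusters[b]
    return clusters

# Toy distance matrix: a tight group {0,1,2} and a tight group {3,4}
dist = [
    [0, 1, 1, 9, 9],
    [1, 0, 1, 9, 9],
    [1, 1, 0, 9, 9],
    [9, 9, 9, 0, 1],
    [9, 9, 9, 1, 0],
]
clusters = single_link_clusters(dist, 2)  # recovers the two groups
```

With a single "bridge" point at distance 1 from both groups, the same procedure would chain everything into one cluster well before the groups themselves merge.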
  5. Shiri, A.A.; Revie, C.: Query expansion behavior within a thesaurus-enhanced search environment : a user-centered evaluation (2006) 0.02
    0.01905884 = product of:
      0.0476471 = sum of:
        0.008341924 = weight(_text_:a in 56) [ClassicSimilarity], result of:
          0.008341924 = score(doc=56,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15602624 = fieldWeight in 56, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=56)
        0.039305177 = sum of:
          0.007893822 = weight(_text_:information in 56) [ClassicSimilarity], result of:
            0.007893822 = score(doc=56,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.09697737 = fieldWeight in 56, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
          0.031411353 = weight(_text_:22 in 56) [ClassicSimilarity], result of:
            0.031411353 = score(doc=56,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 56, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
      0.4 = coord(2/5)
    
    Abstract
    The study reported here investigated the query expansion behavior of end-users interacting with a thesaurus-enhanced search system on the Web. Two groups, namely academic staff and postgraduate students, were recruited into this study. Data were collected from 90 searches performed by 30 users using the OVID interface to the CAB abstracts database. Data-gathering techniques included questionnaires, screen capturing software, and interviews. The results presented here relate to issues of search-topic and search-term characteristics, number and types of expanded queries, usefulness of thesaurus terms, and behavioral differences between academic staff and postgraduate students in their interaction. The key conclusions drawn were that (a) academic staff chose more narrow and synonymous terms than did postgraduate students, who generally selected broader and related terms; (b) topic complexity affected users' interaction with the thesaurus in that complex topics required more query expansion and search term selection; (c) users' prior topic-search experience appeared to have a significant effect on their selection and evaluation of thesaurus terms; (d) in 50% of the searches where additional terms were suggested from the thesaurus, users stated that they had not been aware of the terms at the beginning of the search; this observation was particularly noticeable in the case of postgraduate students.
    Date
    22. 7.2006 16:32:43
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.4, S.462-478
    Type
    a
  6. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.02
    0.018768111 = product of:
      0.046920277 = sum of:
        0.0076151006 = weight(_text_:a in 5697) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=5697,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 5697, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5697)
        0.039305177 = sum of:
          0.007893822 = weight(_text_:information in 5697) [ClassicSimilarity], result of:
            0.007893822 = score(doc=5697,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.09697737 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
          0.031411353 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
            0.031411353 = score(doc=5697,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
      0.4 = coord(2/5)
    
    Abstract
The performance of 8 ranking algorithms was evaluated with respect to their effectiveness in ranking terms for query expansion. The evaluation was conducted within an investigation of interactive query expansion and relevance feedback in a real operational environment. Focuses on the identification of algorithms that most effectively take cognizance of user preferences. User choices (i.e., the terms selected by the searchers for the query expansion search) provided the yardstick for the evaluation of the 8 ranking algorithms. This methodology introduces a user-oriented approach to evaluating ranking algorithms for query expansion, in contrast to the standard, system-oriented approaches. Similarities in the performance of the 8 algorithms and the ways these algorithms rank terms were the main focus of this evaluation. The findings demonstrate that the r-lohi, wpq, emim, and porter algorithms have similar performance in bringing good terms to the top of a ranked list of terms for query expansion. However, further evaluation of the algorithms in different (e.g. full-text) environments is needed before these results can be generalized beyond the context of the present study.
    Date
    22. 2.1996 13:14:10
    Source
    Information processing and management. 31(1995) no.4, S.605-620
    Type
    a
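Of the term-ranking algorithms named in this abstract, wpq is probably the most widely cited. A sketch of one common formulation of Robertson's wpq (as usually stated in the literature, not code from the paper; the toy counts below are invented for illustration) combines a relevance-weight component with the difference in term coverage between relevant and non-relevant documents:

```python
import math

def wpq(r: int, n: int, R: int, N: int) -> float:
    """One common formulation of Robertson's wpq term-ranking weight
    for query expansion (exact variants differ across papers).
    r: relevant documents containing the term
    n: documents in the collection containing the term
    R: relevant documents identified so far
    N: documents in the collection"""
    # Robertson-Sparck Jones relevance weight with 0.5 smoothing
    rsj = math.log((r + 0.5) * (N - n - R + r + 0.5) /
                   ((n - r + 0.5) * (R - r + 0.5)))
    # Scaled by how much more often the term occurs in relevant documents
    return (r / R - (n - r) / (N - R)) * rsj

# Hypothetical candidate expansion terms as (term, r, n), with R=10, N=10000:
candidates = [("indexing", 6, 40), ("system", 5, 3000), ("the", 10, 9500)]
ranked = sorted(candidates, key=lambda t: wpq(t[1], t[2], 10, 10000), reverse=True)
```

The smoothing constants keep the log defined when a term appears in all or none of the relevant documents; a rare term concentrated in the relevant set ("indexing" above) outranks frequent terms with little discriminating power.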
  7. Baloh, P.; Desouza, K.C.; Hackney, R.: Contextualizing organizational interventions of knowledge management systems : a design science perspective (2012) 0.02
    0.018768111 = product of:
      0.046920277 = sum of:
        0.0076151006 = weight(_text_:a in 241) [ClassicSimilarity], result of:
          0.0076151006 = score(doc=241,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14243183 = fieldWeight in 241, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=241)
        0.039305177 = sum of:
          0.007893822 = weight(_text_:information in 241) [ClassicSimilarity], result of:
            0.007893822 = score(doc=241,freq=2.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.09697737 = fieldWeight in 241, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=241)
          0.031411353 = weight(_text_:22 in 241) [ClassicSimilarity], result of:
            0.031411353 = score(doc=241,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 241, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=241)
      0.4 = coord(2/5)
    
    Abstract
We address how individuals' (workers) knowledge needs influence the design of knowledge management systems (KMS), enabling knowledge creation and utilization. It is evident that KMS technologies and activities are indiscriminately deployed in most organizations with little regard to the actual context of their adoption. Moreover, it is apparent that the extant literature pertaining to knowledge management projects is frequently deficient in identifying the variety of factors indicative of successful KMS. This presents an obvious business practice and research gap that requires a critical analysis of the necessary intervention that will actually improve how workers can leverage and form organization-wide knowledge. This research involved an extensive review of the literature, a grounded theory methodological approach, and rigorous data collection and synthesis through an empirical case analysis (Parsons Brinckerhoff and Samsung). The contribution of this study is the formulation of a model for designing KMS based upon the design science paradigm, which aspires to create artifacts that are interdependent with people and organizations. The essential proposition is that KMS design and implementation must be contextualized in relation to knowledge needs and that these will differ for various organizational settings. The findings present valuable insights and further understanding of the way in which KMS design efforts should be focused.
    Date
    11. 6.2012 14:22:34
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.5, S.948-966
    Type
    a
  8. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.02
    0.018346803 = product of:
      0.045867007 = sum of:
        0.008173384 = weight(_text_:a in 58) [ClassicSimilarity], result of:
          0.008173384 = score(doc=58,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 58, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=58)
        0.037693623 = product of:
          0.07538725 = sum of:
            0.07538725 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.07538725 = score(doc=58,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    14. 6.2015 22:12:44
    Type
    a
  9. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.02
    0.018346803 = product of:
      0.045867007 = sum of:
        0.008173384 = weight(_text_:a in 2051) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2051,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=2051)
        0.037693623 = product of:
          0.07538725 = sum of:
            0.07538725 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.07538725 = score(doc=2051,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    14. 6.2015 22:12:56
    Type
    a
  10. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.01
    0.013134009 = product of:
      0.03283502 = sum of:
        0.00770594 = weight(_text_:a in 5108) [ClassicSimilarity], result of:
          0.00770594 = score(doc=5108,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14413087 = fieldWeight in 5108, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5108)
        0.025129084 = product of:
          0.050258167 = sum of:
            0.050258167 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.050258167 = score(doc=5108,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    20. 1.2007 18:30:22
    Type
    a
  11. Cole, C.: Intelligent information retrieval: diagnosing information need : Part II: uncertainty expansion in a prototype of a diagnostic IR tool (1998) 0.01
    0.012225488 = product of:
      0.03056372 = sum of:
        0.014156716 = weight(_text_:a in 6432) [ClassicSimilarity], result of:
          0.014156716 = score(doc=6432,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.26478532 = fieldWeight in 6432, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=6432)
        0.016407004 = product of:
          0.032814007 = sum of:
            0.032814007 = weight(_text_:information in 6432) [ClassicSimilarity], result of:
              0.032814007 = score(doc=6432,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.40312737 = fieldWeight in 6432, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6432)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Information processing and management. 34(1998) no.6, S.721-731
    Type
    a
  12. Perry, R.; Willett, P.: ¬A review of the use of inverted files for best match searching in information retrieval systems (1983) 0.01
    0.011645746 = product of:
      0.029114366 = sum of:
        0.013485395 = weight(_text_:a in 2701) [ClassicSimilarity], result of:
          0.013485395 = score(doc=2701,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.25222903 = fieldWeight in 2701, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.109375 = fieldNorm(doc=2701)
        0.015628971 = product of:
          0.031257942 = sum of:
            0.031257942 = weight(_text_:information in 2701) [ClassicSimilarity], result of:
              0.031257942 = score(doc=2701,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3840108 = fieldWeight in 2701, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2701)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Journal of information science. 6(1983), S.59-66
    Type
    a
  13. Salton, G.: ¬A simple blueprint for automatic Boolean query processing (1988) 0.01
    0.011216799 = product of:
      0.028041996 = sum of:
        0.01541188 = weight(_text_:a in 6774) [ClassicSimilarity], result of:
          0.01541188 = score(doc=6774,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.28826174 = fieldWeight in 6774, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=6774)
        0.012630116 = product of:
          0.025260232 = sum of:
            0.025260232 = weight(_text_:information in 6774) [ClassicSimilarity], result of:
              0.025260232 = score(doc=6774,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3103276 = fieldWeight in 6774, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.125 = fieldNorm(doc=6774)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Information processing and management. 24(1988) no.3, S.269-280
    Type
    a
  14. Rada, R.; Bicknell, E.: Ranking documents with a thesaurus (1989) 0.01
    0.011216799 = product of:
      0.028041996 = sum of:
        0.01541188 = weight(_text_:a in 6908) [ClassicSimilarity], result of:
          0.01541188 = score(doc=6908,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.28826174 = fieldWeight in 6908, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=6908)
        0.012630116 = product of:
          0.025260232 = sum of:
            0.025260232 = weight(_text_:information in 6908) [ClassicSimilarity], result of:
              0.025260232 = score(doc=6908,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3103276 = fieldWeight in 6908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.125 = fieldNorm(doc=6908)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Journal of the American Society for Information Science. 40(1989) no.5, S.304-310
    Type
    a
  15. Reddaway, S.: High speed text retrieval from large databases on a massively parallel processor (1991) 0.01
    0.011216799 = product of:
      0.028041996 = sum of:
        0.01541188 = weight(_text_:a in 7745) [ClassicSimilarity], result of:
          0.01541188 = score(doc=7745,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.28826174 = fieldWeight in 7745, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=7745)
        0.012630116 = product of:
          0.025260232 = sum of:
            0.025260232 = weight(_text_:information in 7745) [ClassicSimilarity], result of:
              0.025260232 = score(doc=7745,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3103276 = fieldWeight in 7745, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.125 = fieldNorm(doc=7745)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Information processing and management. 27(1991), S.311-316
    Type
    a
  16. Aizawa, A.: ¬An information-theoretic perspective of tf-idf measures (2003) 0.01
    0.011061133 = product of:
      0.027652832 = sum of:
        0.012184162 = weight(_text_:a in 4155) [ClassicSimilarity], result of:
          0.012184162 = score(doc=4155,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.22789092 = fieldWeight in 4155, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=4155)
        0.01546867 = product of:
          0.03093734 = sum of:
            0.03093734 = weight(_text_:information in 4155) [ClassicSimilarity], result of:
              0.03093734 = score(doc=4155,freq=12.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.38007212 = fieldWeight in 4155, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4155)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents a mathematical definition of the "probability-weighted amount of information" (PWI), a measure of specificity of terms in documents that is based on an information-theoretic view of retrieval events. The proposed PWI is expressed as a product of the occurrence probabilities of terms and their amounts of information, and corresponds well with the conventional term frequency - inverse document frequency measures that are commonly used in today's information retrieval systems. The mathematical definition of the PWI is shown, together with some illustrative examples of the calculation.
    Source
    Information processing and management. 39(2003) no.1, S.45-65
    Type
    a
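The term frequency - inverse document frequency weighting that Aizawa's PWI measure corresponds to can be sketched in a few lines. This is a minimal illustration using the common tf · log(N/df) textbook variant, not the paper's probability-weighted amount of information:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute tf-idf weights for a small corpus of tokenized documents.
    tf is the raw term frequency within a document; idf = log(N / df),
    where N is the corpus size and df the number of documents containing
    the term (a common textbook variant of the measure)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

# a term occurring in every document gets idf = log(1) = 0,
# while a rare, repeated term is weighted up
docs = [["term", "weighting", "term"],
        ["retrieval", "weighting"],
        ["retrieval", "models"]]
w = tf_idf(docs)
```

Here "term" appears twice in one document and nowhere else, so its weight is 2·log(3); "weighting" appears in two of three documents and is weighted lower.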
  17. Hofferer, M.: Heuristic search in information retrieval (1994) 0.01
    0.010987191 = product of:
      0.027467977 = sum of:
        0.013347079 = weight(_text_:a in 1070) [ClassicSimilarity], result of:
          0.013347079 = score(doc=1070,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.24964198 = fieldWeight in 1070, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1070)
        0.014120899 = product of:
          0.028241798 = sum of:
            0.028241798 = weight(_text_:information in 1070) [ClassicSimilarity], result of:
              0.028241798 = score(doc=1070,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3469568 = fieldWeight in 1070, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1070)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Describes an adaptive information retrieval system, the Information Retrieval Algorithm System (IRAS), which uses heuristic searching to sample a document space and retrieve documents relevant to users' requests. Also describes a learning module, based on a knowledge representation system and an approximate probabilistic characterization of relevant documents, which reproduces a user's classification of relevant documents and provides rule-controlled ranking
    Source
    Information retrieval: new systems and current research. Proceedings of the 15th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Glasgow 1993. Ed.: Ruben Leon
    Type
    a
  18. Goffman, W.: ¬A searching procedure for information retrieval (1964) 0.01
    0.010539954 = product of:
      0.026349884 = sum of:
        0.01541188 = weight(_text_:a in 5281) [ClassicSimilarity], result of:
          0.01541188 = score(doc=5281,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.28826174 = fieldWeight in 5281, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=5281)
        0.010938003 = product of:
          0.021876005 = sum of:
            0.021876005 = weight(_text_:information in 5281) [ClassicSimilarity], result of:
              0.021876005 = score(doc=5281,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2687516 = fieldWeight in 5281, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5281)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    A search procedure for an information retrieval system is developed whereby the answer to a question is obtained by maximizing an evaluation function of the system's output in terms of the probability of relevance. Necessary and sufficient conditions are given for a set to be an answer to a query. A partition of the file is made in such a way that all documents belonging to the answer are members of the same class. Hence the answer can be generated by one relevant document. In this manner a search of the total file is avoided
    Source
    Information storage and retrieval. 2(1964), S.73-78
    Type
    a
  19. Wolff, J.G.: ¬A scalable technique for best-match retrieval of sequential information using metrics-guided search (1994) 0.01
    0.010451393 = product of:
      0.026128482 = sum of:
        0.015077131 = weight(_text_:a in 5334) [ClassicSimilarity], result of:
          0.015077131 = score(doc=5334,freq=20.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.28200063 = fieldWeight in 5334, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5334)
        0.011051352 = product of:
          0.022102704 = sum of:
            0.022102704 = weight(_text_:information in 5334) [ClassicSimilarity], result of:
              0.022102704 = score(doc=5334,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.27153665 = fieldWeight in 5334, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5334)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Describes a new technique for retrieving information by finding the best match or matches between a textual query and a textual database. The technique uses principles of beam search with a measure of probability to guide the search and prune the search tree. Unlike many methods for comparing strings, the method gives a set of alternative matches, graded by the quality of the matching. The new technique is embodied in a software simulation, SP21, which runs on a conventional computer. Presents examples showing best-match retrieval of information from a textual database. Presents analytic and empirical evidence on the performance of the technique. It lends itself well to parallel processing. Discusses planned developments
    Source
    Journal of information science. 20(1994) no.1, S.16-28
    Type
    a
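The beam-search idea in this abstract can be illustrated with a minimal sketch (not Wolff's SP21 simulation): partial alignments of a query string against a text are expanded step by step, and only a fixed number of the best-scoring partial matches survive each step, pruning the search tree:

```python
def beam_match(query, text, beam_width=8):
    """Approximate best-match score of `query` against `text` via beam search.
    A state is (score, ti, qi): matched characters so far and current
    positions in text and query. Each step either consumes a query character
    (matching or not) or skips a text character; only the top `beam_width`
    states are kept, so the search tree is pruned rather than exhaustive."""
    # allow the match to begin at any offset in the text
    beam = [(0, ti, 0) for ti in range(len(text) + 1)]
    best = 0
    while beam:
        nxt = []
        for score, ti, qi in beam:
            if qi == len(query):          # query fully consumed: a candidate match
                best = max(best, score)
                continue
            if ti < len(text):
                nxt.append((score + (text[ti] == query[qi]), ti + 1, qi + 1))
                nxt.append((score, ti + 1, qi))   # skip a text character
            nxt.append((score, ti, qi + 1))       # skip a query character
        nxt.sort(reverse=True)            # keep only the most promising states
        beam = nxt[:beam_width]
    return best

score = beam_match("abc", "xxabcx")
```

Because ti + qi grows at every expansion the loop always terminates, and the beam width trades retrieval quality against the number of states explored, which is the scalability point the abstract makes.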
  20. Loughran, H.: ¬A review of nearest neighbour information retrieval (1994) 0.01
    0.010187908 = product of:
      0.025469769 = sum of:
        0.011797264 = weight(_text_:a in 616) [ClassicSimilarity], result of:
          0.011797264 = score(doc=616,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.22065444 = fieldWeight in 616, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=616)
        0.013672504 = product of:
          0.027345007 = sum of:
            0.027345007 = weight(_text_:information in 616) [ClassicSimilarity], result of:
              0.027345007 = score(doc=616,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.3359395 = fieldWeight in 616, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.078125 = fieldNorm(doc=616)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Explains the concept of 'nearest neighbour' searching, also known as best match or ranked output, which it is claimed can overcome many of the inadequacies of traditional Boolean methods. Also points to some of the limitations. Identifies a number of commercial information retrieval systems which feature this search technique
    Source
    Information management report. 1994, August, S.11-14
    Type
    a
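The 'nearest neighbour' (best match, ranked output) retrieval that this review surveys can be sketched minimally with cosine similarity over term counts; this is an illustrative choice of distance measure, not code from any of the commercial systems the review identifies:

```python
import math
from collections import Counter

def cosine(q, d):
    """Cosine similarity between two token lists, via term-count vectors."""
    qv, dv = Counter(q), Counter(d)
    num = sum(qv[t] * dv[t] for t in qv)
    den = (math.sqrt(sum(v * v for v in qv.values()))
           * math.sqrt(sum(v * v for v in dv.values())))
    return num / den if den else 0.0

def nearest_neighbours(query, docs, k=3):
    """Return the indices of the k documents closest to the query.
    Unlike a Boolean search, every document gets a graded score and the
    output is ranked, so partial matches are still retrieved."""
    scored = [(cosine(query, d), i) for i, d in enumerate(docs)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:k]]

docs = [["boolean", "retrieval"],
        ["nearest", "neighbour", "retrieval"],
        ["library", "management"]]
hits = nearest_neighbours(["nearest", "neighbour"], docs, k=1)
```

A document sharing no terms with the query simply scores 0.0 and falls to the bottom of the ranking, rather than being excluded outright as in Boolean methods.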

Years

Languages

Types

  • a 337
  • m 12
  • el 8
  • s 5
  • r 3
  • p 2
  • x 2