Search (318 results, page 2 of 16)

  • × language_ss:"e"
  • × theme_ss:"Retrievalalgorithmen"
  1. Shiri, A.A.; Revie, C.: Query expansion behavior within a thesaurus-enhanced search environment : a user-centered evaluation (2006) 0.02
    0.017742215 = product of:
      0.03548443 = sum of:
        0.03548443 = sum of:
          0.0074461387 = weight(_text_:a in 56) [ClassicSimilarity], result of:
            0.0074461387 = score(doc=56,freq=12.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.15602624 = fieldWeight in 56, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
          0.028038291 = weight(_text_:22 in 56) [ClassicSimilarity], result of:
            0.028038291 = score(doc=56,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 56, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
      0.5 = coord(1/2)
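The nested figures above are Lucene ClassicSimilarity "explain" output: each leaf score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)); the entry's total sums its leaves and multiplies by coord. A minimal sketch reproducing the arithmetic of the weight(_text_:22 in 56) leaf, assuming these textbook TFIDFSimilarity formulas:

```python
import math

# ClassicSimilarity building blocks (textbook TFIDFSimilarity formulas,
# assumed here; they match the explain trees on this page):
def classic_tf(freq):
    # tf = sqrt(term frequency within the field)
    return math.sqrt(freq)

def classic_idf(doc_freq, max_docs):
    # idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Values taken from the weight(_text_:22 in 56) leaf above
query_norm = 0.041389145
field_norm = 0.0390625            # ~1/sqrt(field length), 8-bit quantized
idf = classic_idf(3622, 44218)    # ≈ 3.5018296

query_weight = idf * query_norm                     # ≈ 0.14493774
field_weight = classic_tf(2.0) * idf * field_norm   # ≈ 0.19345059
score = query_weight * field_weight                 # ≈ 0.028038291

# The entry's total sums its leaf scores and applies coord(1/2) = 0.5
total = 0.5 * (0.0074461387 + 0.028038291)          # ≈ 0.017742215
```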
    
    Abstract
    The study reported here investigated the query expansion behavior of end-users interacting with a thesaurus-enhanced search system on the Web. Two groups, namely academic staff and postgraduate students, were recruited into this study. Data were collected from 90 searches performed by 30 users using the OVID interface to the CAB abstracts database. Data-gathering techniques included questionnaires, screen capturing software, and interviews. The results presented here relate to issues of search-topic and search-term characteristics, number and types of expanded queries, usefulness of thesaurus terms, and behavioral differences between academic staff and postgraduate students in their interaction. The key conclusions drawn were that (a) academic staff chose more narrow and synonymous terms than did postgraduate students, who generally selected broader and related terms; (b) topic complexity affected users' interaction with the thesaurus in that complex topics required more query expansion and search term selection; (c) users' prior topic-search experience appeared to have a significant effect on their selection and evaluation of thesaurus terms; (d) in 50% of the searches where additional terms were suggested from the thesaurus, users stated that they had not been aware of the terms at the beginning of the search; this observation was particularly noticeable in the case of postgraduate students.
    Date
    22. 7.2006 16:32:43
    Type
    a
  2. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.02
    0.017417828 = product of:
      0.034835655 = sum of:
        0.034835655 = sum of:
          0.0067973635 = weight(_text_:a in 5697) [ClassicSimilarity], result of:
            0.0067973635 = score(doc=5697,freq=10.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.14243183 = fieldWeight in 5697, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
          0.028038291 = weight(_text_:22 in 5697) [ClassicSimilarity], result of:
            0.028038291 = score(doc=5697,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 5697, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5697)
      0.5 = coord(1/2)
    
    Abstract
    The performance of 8 ranking algorithms was evaluated with respect to their effectiveness in ranking terms for query expansion. The evaluation was conducted within an investigation of interactive query expansion and relevance feedback in a real operational environment. Focuses on the identification of algorithms that most effectively take cognizance of user preferences. User choices (i.e. the terms selected by the searchers for the query expansion search) provided the yardstick for the evaluation of the 8 ranking algorithms. This methodology introduces a user-oriented approach to evaluating ranking algorithms for query expansion, in contrast to the standard, system-oriented approaches. Similarities in the performance of the 8 algorithms and the ways these algorithms rank terms were the main focus of this evaluation. The findings demonstrate that the r-lohi, wpq, enim, and porter algorithms have similar performance in bringing good terms to the top of a ranked list of terms for query expansion. However, further evaluation of the algorithms in different (e.g. full-text) environments is needed before these results can be generalized beyond the context of the present study.
    Date
    22. 2.1996 13:14:10
    Type
    a
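The user-choice yardstick above rates a ranking algorithm by the expansion terms searchers actually selected. A hypothetical sketch of such a user-oriented measure; the function name and data are illustrative, not from the paper:

```python
def user_choice_overlap(ranked_terms, user_choices, k=10):
    """Fraction of the user's chosen expansion terms that an
    algorithm places among its top-k ranked suggestions."""
    top_k = set(ranked_terms[:k])
    chosen = set(user_choices)
    if not chosen:
        return 0.0
    return len(top_k & chosen) / len(chosen)

# Illustrative data: one searcher's selected terms vs. two rankings
user_terms = ["opac", "catalogue", "retrieval"]
alg_a = ["opac", "library", "catalogue", "index", "retrieval"]
alg_b = ["index", "library", "web", "opac", "search"]

print(user_choice_overlap(alg_a, user_terms, k=5))  # 1.0
print(user_choice_overlap(alg_b, user_terms, k=5))  # ≈ 0.33
```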
  3. Dominich, S.: Mathematical foundations of information retrieval (2001) 0.02
    0.017417828 = product of:
      0.034835655 = sum of:
        0.034835655 = sum of:
          0.0067973635 = weight(_text_:a in 1753) [ClassicSimilarity], result of:
            0.0067973635 = score(doc=1753,freq=10.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.14243183 = fieldWeight in 1753, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1753)
          0.028038291 = weight(_text_:22 in 1753) [ClassicSimilarity], result of:
            0.028038291 = score(doc=1753,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 1753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1753)
      0.5 = coord(1/2)
    
    Abstract
    This book offers a comprehensive and consistent mathematical approach to information retrieval (IR) without which no implementation is possible, and sheds an entirely new light upon the structure of IR models. It contains the descriptions of all IR models in a unified formal style and language, along with examples for each, thus offering a comprehensive overview of them. The book also creates mathematical foundations and a consistent mathematical theory (including all mathematical results achieved so far) of IR as a stand-alone mathematical discipline, which thus can be read and taught independently. Also, the book contains all necessary mathematical knowledge on which IR relies, to help the reader avoid searching different sources. The book will be of interest to computer or information scientists, librarians, mathematicians, undergraduate students and researchers whose work involves information retrieval.
    Date
    22. 3.2008 12:26:32
  4. Baloh, P.; Desouza, K.C.; Hackney, R.: Contextualizing organizational interventions of knowledge management systems : a design science perspective (2012) 0.02
    0.017417828 = product of:
      0.034835655 = sum of:
        0.034835655 = sum of:
          0.0067973635 = weight(_text_:a in 241) [ClassicSimilarity], result of:
            0.0067973635 = score(doc=241,freq=10.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.14243183 = fieldWeight in 241, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=241)
          0.028038291 = weight(_text_:22 in 241) [ClassicSimilarity], result of:
            0.028038291 = score(doc=241,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 241, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=241)
      0.5 = coord(1/2)
    
    Abstract
    We address how individuals' (workers) knowledge needs influence the design of knowledge management systems (KMS), enabling knowledge creation and utilization. It is evident that KMS technologies and activities are indiscriminately deployed in most organizations with little regard to the actual context of their adoption. Moreover, it is apparent that the extant literature pertaining to knowledge management projects is frequently deficient in identifying the variety of factors indicative of successful KMS. This presents an obvious business practice and research gap that requires a critical analysis of the necessary intervention that will actually improve how workers can leverage and form organization-wide knowledge. This research involved an extensive review of the literature, a grounded theory methodological approach and rigorous data collection and synthesis through an empirical case analysis (Parsons Brinckerhoff and Samsung). The contribution of this study is the formulation of a model for designing KMS based upon the design science paradigm, which aspires to create artifacts that are interdependent with people and organizations. The essential proposition is that KMS design and implementation must be contextualized in relation to knowledge needs and that these will differ for various organizational settings. The findings present valuable insights and further understanding of the way in which KMS design efforts should be focused.
    Date
    11. 6.2012 14:22:34
    Type
    a
  5. Khoo, C.S.G.; Wan, K.-W.: ¬A simple relevancy-ranking strategy for an interface to Boolean OPACs (2004) 0.01
    0.014803795 = product of:
      0.02960759 = sum of:
        0.02960759 = sum of:
          0.009980789 = weight(_text_:a in 2509) [ClassicSimilarity], result of:
            0.009980789 = score(doc=2509,freq=44.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.20913726 = fieldWeight in 2509, product of:
                6.6332498 = tf(freq=44.0), with freq of:
                  44.0 = termFreq=44.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2509)
          0.019626802 = weight(_text_:22 in 2509) [ClassicSimilarity], result of:
            0.019626802 = score(doc=2509,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.1354154 = fieldWeight in 2509, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2509)
      0.5 = coord(1/2)
    
    Abstract
    A relevancy-ranking algorithm for a natural language interface to Boolean online public access catalogs (OPACs) was formulated and compared with that currently used in a knowledge-based search interface called the E-Referencer, being developed by the authors. The algorithm makes use of seven well-known ranking criteria: breadth of match, section weighting, proximity of query words, variant word forms (stemming), document frequency, term frequency and document length. The algorithm converts a natural language query into a series of increasingly broader Boolean search statements. In a small experiment with ten subjects in which the algorithm was simulated by hand, the algorithm obtained good results with a mean overall precision of 0.42 and mean average precision of 0.62, representing a 27 percent improvement in precision and 41 percent improvement in average precision compared to the E-Referencer. The usefulness of each step in the algorithm was analyzed and suggestions are made for improving the algorithm.
    Content
    "Most Web search engines accept natural language queries, perform some kind of fuzzy matching and produce ranked output, displaying first the documents that are most likely to be relevant. On the other hand, most library online public access catalogs (OPACs) on the Web are still Boolean retrieval systems that perform exact matching, and require users to express their search requests precisely in a Boolean search language and to refine their search statements to improve the search results. It is well-documented that users have difficulty searching Boolean OPACs effectively (e.g. Borgman, 1996; Ensor, 1992; Wallace, 1993). One approach to making OPACs easier to use is to develop a natural language search interface that acts as middleware between the user's Web browser and the OPAC system. The search interface can accept a natural language query from the user and reformulate it as a series of Boolean search statements that are then submitted to the OPAC. The records retrieved by the OPAC are ranked by the search interface before forwarding them to the user's Web browser. The user, then, does not need to interact directly with the Boolean OPAC but with the natural language search interface or search intermediary. The search interface interacts with the OPAC system on the user's behalf. The advantage of this approach is that no modification to the OPAC or library system is required. Furthermore, the search interface can access multiple OPACs, acting as a meta search engine, and integrate search results from various OPACs before sending them to the user. The search interface needs to incorporate a method for converting the user's natural language query into a series of Boolean search statements, and for ranking the OPAC records retrieved. The purpose of this study was to develop a relevancy-ranking algorithm for a search interface to Boolean OPAC systems. 
This is part of an on-going effort to develop a knowledge-based search interface to OPACs called the E-Referencer (Khoo et al., 1998, 1999; Poo et al., 2000). E-Referencer v. 2 that has been implemented applies a repertoire of initial search strategies and reformulation strategies to retrieve records from OPACs using the Z39.50 protocol, and also assists users in mapping query keywords to the Library of Congress subject headings."
    Source
    Electronic library. 22(2004) no.2, S.112-120
    Type
    a
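The broadening strategy described above, reformulating a natural language query as a series of increasingly broad Boolean statements, can be sketched as follows; the drop-one-term rule is an illustrative stand-in for the algorithm's actual criteria:

```python
def broadening_boolean_statements(query_terms):
    """Generate a series of increasingly broad Boolean statements:
    start by AND-ing every keyword, then drop one term at a time,
    and finally OR everything. Illustrative only; not the
    E-Referencer's actual reformulation rules."""
    statements = []
    # 1. Strictest: all terms must co-occur
    statements.append(" AND ".join(query_terms))
    # 2. Broader: drop one term at a time
    for i in range(len(query_terms)):
        rest = query_terms[:i] + query_terms[i + 1:]
        if rest:
            statements.append(" AND ".join(rest))
    # 3. Broadest: any single term matches
    statements.append(" OR ".join(query_terms))
    return statements

for s in broadening_boolean_statements(["digital", "library", "ranking"]):
    print(s)
```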
  6. Rijsbergen, C.J. van: ¬A fast hierarchic clustering algorithm (1970) 0.00
    0.003439224 = product of:
      0.006878448 = sum of:
        0.006878448 = product of:
          0.013756896 = sum of:
            0.013756896 = weight(_text_:a in 3300) [ClassicSimilarity], result of:
              0.013756896 = score(doc=3300,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.28826174 = fieldWeight in 3300, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=3300)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  7. Sparck Jones, K.: ¬A statistical interpretation of term specificity and its application in retrieval (1972) 0.00
    0.003439224 = product of:
      0.006878448 = sum of:
        0.006878448 = product of:
          0.013756896 = sum of:
            0.013756896 = weight(_text_:a in 5187) [ClassicSimilarity], result of:
              0.013756896 = score(doc=5187,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.28826174 = fieldWeight in 5187, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=5187)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  8. Salton, G.: ¬A simple blueprint for automatic Boolean query processing (1988) 0.00
    0.003439224 = product of:
      0.006878448 = sum of:
        0.006878448 = product of:
          0.013756896 = sum of:
            0.013756896 = weight(_text_:a in 6774) [ClassicSimilarity], result of:
              0.013756896 = score(doc=6774,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.28826174 = fieldWeight in 6774, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=6774)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  9. Rada, R.; Bicknell, E.: Ranking documents with a thesaurus (1989) 0.00
    0.003439224 = product of:
      0.006878448 = sum of:
        0.006878448 = product of:
          0.013756896 = sum of:
            0.013756896 = weight(_text_:a in 6908) [ClassicSimilarity], result of:
              0.013756896 = score(doc=6908,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.28826174 = fieldWeight in 6908, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=6908)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  10. Reddaway, S.: High speed text retrieval from large databases on a massively parallel processor (1991) 0.00
    0.003439224 = product of:
      0.006878448 = sum of:
        0.006878448 = product of:
          0.013756896 = sum of:
            0.013756896 = weight(_text_:a in 7745) [ClassicSimilarity], result of:
              0.013756896 = score(doc=7745,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.28826174 = fieldWeight in 7745, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=7745)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  11. Goffman, W.: ¬A searching procedure for information retrieval (1964) 0.00
    0.003439224 = product of:
      0.006878448 = sum of:
        0.006878448 = product of:
          0.013756896 = sum of:
            0.013756896 = weight(_text_:a in 5281) [ClassicSimilarity], result of:
              0.013756896 = score(doc=5281,freq=16.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.28826174 = fieldWeight in 5281, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5281)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A search procedure for an information retrieval system is developed whereby the answer to a question is obtained by maximizing an evaluation function of the system's output in terms of the probability of relevance. Necessary and sufficient conditions are given for a set to be an answer to a query. A partition of the file is made in such a way that all documents belonging to the answer are members of the same class. Hence the answer can be generated by one relevant document. In this manner a search of the total file is avoided.
    Type
    a
  12. Wolff, J.G.: ¬A scalable technique for best-match retrieval of sequential information using metrics-guided search (1994) 0.00
    0.0033645234 = product of:
      0.006729047 = sum of:
        0.006729047 = product of:
          0.013458094 = sum of:
            0.013458094 = weight(_text_:a in 5334) [ClassicSimilarity], result of:
              0.013458094 = score(doc=5334,freq=20.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.28200063 = fieldWeight in 5334, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5334)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes a new technique for retrieving information by finding the best match or matches between a textual query and a textual database. The technique uses principles of beam search with a measure of probability to guide the search and prune the search tree. Unlike many methods for comparing strings, the method gives a set of alternative matches, graded by the quality of the matching. The new technique is embodied in a software simulation, SP21, which runs on a conventional computer. Presents examples showing best-match retrieval of information from a textual database. Presents analytic and empirical evidence on the performance of the technique. It lends itself well to parallel processing. Discusses planned developments.
    Type
    a
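The metrics-guided beam search described above can be sketched as a toy best-match aligner: partial alignments of query against text are scored, and only the most promising survive each step. Scoring by matched-character count (rather than SP21's probability measure) is a simplifying assumption:

```python
import heapq

def beam_match(query, text, beam_width=5):
    """Grade how well `query` matches `text` with beam search over
    alignments. A state (matched, qi, ti) records the characters
    matched so far and the positions reached in query/text; each step
    either matches the current characters or skips one on either side,
    and only the `beam_width` most promising states survive. A toy
    sketch of metrics-guided best-match search, not Wolff's SP21."""
    beam = [(0, 0, 0)]
    best = 0
    while beam:
        next_states = set()
        for matched, qi, ti in beam:
            if qi == len(query) or ti == len(text):
                best = max(best, matched)   # alignment finished
                continue
            if query[qi] == text[ti]:
                next_states.add((matched + 1, qi + 1, ti + 1))
            next_states.add((matched, qi + 1, ti))  # skip a query char
            next_states.add((matched, qi, ti + 1))  # skip a text char
        beam = heapq.nlargest(beam_width, next_states)  # prune
    return best

# Graded alternatives: rank database entries by match quality
db = ["information retrieval", "informal review", "text compression"]
ranked = sorted(db, key=lambda t: -beam_match("infrmtn retrievl", t, 20))
```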
  13. Martin-Bautista, M.J.; Vila, M.-A.; Larsen, H.L.: ¬A fuzzy genetic algorithm approach to an adaptive information retrieval agent (1999) 0.00
    0.003159129 = product of:
      0.006318258 = sum of:
        0.006318258 = product of:
          0.012636516 = sum of:
            0.012636516 = weight(_text_:a in 3914) [ClassicSimilarity], result of:
              0.012636516 = score(doc=3914,freq=24.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.26478532 = fieldWeight in 3914, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3914)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We present an approach to a Genetic Information Retrieval Agent Filter (GIRAF) for documents from the Internet using a genetic algorithm (GA) with fuzzy set genes to learn the user's information needs. The population of fixed-length chromosomes represents the user's preferences. Each chromosome is associated with a fitness that may be considered the system's belief in the hypothesis that the chromosome, as a query, represents the user's information needs. In a chromosome, every gene characterizes documents by a keyword and an associated occurrence frequency, represented by a certain type of fuzzy subset of the set of positive integers. Based on the user's evaluation of the documents retrieved by the chromosome, compared to the scores computed by the system, the fitness of the chromosomes is adjusted. A prototype of GIRAF has been developed and tested. The results of the test are discussed, and some directions for further work are pointed out.
    Type
    a
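A minimal sketch of the fuzzy-gene idea described above, assuming triangular membership functions over term frequencies and a made-up fitness based on user/system agreement; GIRAF's actual representations and fitness formula are not specified here:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership over an observed term frequency:
    0 at a and c, peak 1 at b (assumes a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# A chromosome: genes pair a keyword with a fuzzy frequency profile
# (keywords and profiles below are illustrative)
chromosome = [("retrieval", (0, 3, 10)), ("fuzzy", (0, 2, 6))]

def doc_score(term_freqs, chromosome):
    """System's score for a document against the chromosome-as-query:
    average membership of the document's observed keyword frequencies."""
    ms = [tri_membership(term_freqs.get(kw, 0), *abc)
          for kw, abc in chromosome]
    return sum(ms) / len(ms)

def fitness(chromosome, judged_docs):
    """Hypothetical fitness: agreement between the user's relevance
    ratings (0..1) and the system's fuzzy scores for retrieved docs."""
    errs = [abs(rating - doc_score(freqs, chromosome))
            for freqs, rating in judged_docs]
    return 1.0 - sum(errs) / len(errs)
```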
  14. Cole, C.: Intelligent information retrieval: diagnosing information need : Part II: uncertainty expansion in a prototype of a diagnostic IR tool (1998) 0.00
    0.003159129 = product of:
      0.006318258 = sum of:
        0.006318258 = product of:
          0.012636516 = sum of:
            0.012636516 = weight(_text_:a in 6432) [ClassicSimilarity], result of:
              0.012636516 = score(doc=6432,freq=6.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.26478532 = fieldWeight in 6432, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6432)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  15. Perry, R.; Willett, P.: ¬A review of the use of inverted files for best match searching in information retrieval systems (1983) 0.00
    0.0030093212 = product of:
      0.0060186423 = sum of:
        0.0060186423 = product of:
          0.012037285 = sum of:
            0.012037285 = weight(_text_:a in 2701) [ClassicSimilarity], result of:
              0.012037285 = score(doc=2701,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.25222903 = fieldWeight in 2701, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2701)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  16. Schiminovich, S.: Automatic classification and retrieval of documents by means of a bibliographic pattern discovery algorithm (1971) 0.00
    0.0030093212 = product of:
      0.0060186423 = sum of:
        0.0060186423 = product of:
          0.012037285 = sum of:
            0.012037285 = weight(_text_:a in 4846) [ClassicSimilarity], result of:
              0.012037285 = score(doc=4846,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.25222903 = fieldWeight in 4846, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4846)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  17. Ciocca, G.; Schettini, R.: ¬A relevance feedback mechanism for content-based image retrieval (1999) 0.00
    0.0030093212 = product of:
      0.0060186423 = sum of:
        0.0060186423 = product of:
          0.012037285 = sum of:
            0.012037285 = weight(_text_:a in 6498) [ClassicSimilarity], result of:
              0.012037285 = score(doc=6498,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.25222903 = fieldWeight in 6498, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6498)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  18. Bovey, J.D.; Robertson, S.E.: ¬An algorithm for weighted searching on a Boolean system (1984) 0.00
    0.0030093212 = product of:
      0.0060186423 = sum of:
        0.0060186423 = product of:
          0.012037285 = sum of:
            0.012037285 = weight(_text_:a in 788) [ClassicSimilarity], result of:
              0.012037285 = score(doc=788,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.25222903 = fieldWeight in 788, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=788)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  19. Aho, A.; Corasick, M.: Efficient string matching : an aid to bibliographic search (1975) 0.00
    0.0030093212 = product of:
      0.0060186423 = sum of:
        0.0060186423 = product of:
          0.012037285 = sum of:
            0.012037285 = weight(_text_:a in 3506) [ClassicSimilarity], result of:
              0.012037285 = score(doc=3506,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.25222903 = fieldWeight in 3506, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  20. Boyer, R.; Moore, S.: ¬A fast string searching algorithm (1977) 0.00
    0.0030093212 = product of:
      0.0060186423 = sum of:
        0.0060186423 = product of:
          0.012037285 = sum of:
            0.012037285 = weight(_text_:a in 3507) [ClassicSimilarity], result of:
              0.012037285 = score(doc=3507,freq=4.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.25222903 = fieldWeight in 3507, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3507)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a

Types

  • a 303
  • m 7
  • el 6
  • s 3
  • p 2
  • r 1