Search (11 results, page 1 of 1)

  • Filter: theme_ss:"Retrievalstudien"
  1. Borlund, P.: Evaluation of interactive information retrieval systems (2000) 0.10
    0.10280962 = product of:
      0.27415898 = sum of:
        0.12711786 = weight(_text_:storage in 2556) [ClassicSimilarity], result of:
          0.12711786 = score(doc=2556,freq=4.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.68110555 = fieldWeight in 2556, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0625 = fieldNorm(doc=2556)
        0.08310561 = weight(_text_:retrieval in 2556) [ClassicSimilarity], result of:
          0.08310561 = score(doc=2556,freq=18.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.8021017 = fieldWeight in 2556, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2556)
        0.06393551 = weight(_text_:systems in 2556) [ClassicSimilarity], result of:
          0.06393551 = score(doc=2556,freq=10.0), product of:
            0.10526281 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.034252144 = queryNorm
            0.6073894 = fieldWeight in 2556, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=2556)
      0.375 = coord(3/8)
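    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF): per term, tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the term score is queryWeight * fieldWeight; the document score is the sum over matching terms times coord. As a sketch (function and variable names are our own, values taken from the explain output shown):

    ```python
    import math

    def classic_term_score(freq, idf, query_norm, field_norm):
        # tf = sqrt(freq); queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
        tf = math.sqrt(freq)
        return (idf * query_norm) * (tf * idf * field_norm)

    query_norm = 0.034252144
    field_norm = 0.0625
    terms = [           # (freq, idf) for storage, retrieval, systems in doc 2556
        (4.0, 5.4488444),
        (18.0, 3.024915),
        (10.0, 3.0731742),
    ]
    raw = sum(classic_term_score(f, i, query_norm, field_norm) for f, i in terms)
    score = raw * (3 / 8)   # coord(3/8): 3 of 8 query clauses matched
    print(score)            # ~ 0.10280962, matching the explain output
    ```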
    
    LCSH
    Information storage and retrieval systems / Evaluation
    Interactive computer systems / Evaluation
    RSWK
    Information Retrieval / Datenbankverwaltung / Hochschulschrift (GBV)
    Information Retrieval / Dialogsystem (SWB)
    Information Retrieval / Dialogsystem / Leistungsbewertung (BVB)
  2. Keen, E.M.: Some aspects of proximity searching in text retrieval systems (1992) 0.06
    0.06356199 = product of:
      0.16949864 = sum of:
        0.089885905 = weight(_text_:storage in 6190) [ClassicSimilarity], result of:
          0.089885905 = score(doc=6190,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.48161435 = fieldWeight in 6190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0625 = fieldNorm(doc=6190)
        0.039176363 = weight(_text_:retrieval in 6190) [ClassicSimilarity], result of:
          0.039176363 = score(doc=6190,freq=4.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.37811437 = fieldWeight in 6190, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=6190)
        0.04043637 = weight(_text_:systems in 6190) [ClassicSimilarity], result of:
          0.04043637 = score(doc=6190,freq=4.0), product of:
            0.10526281 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.034252144 = queryNorm
            0.38414678 = fieldWeight in 6190, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0625 = fieldNorm(doc=6190)
      0.375 = coord(3/8)
    
    Abstract
    Describes and evaluates the proximity search facilities in external online systems and in-house retrieval software. Discusses and illustrates capabilities, syntax and circumstances of use. Presents measurements of the overheads required by proximity for storage, record input time and search time. The search-strategy narrowing effect of proximity is illustrated by recall and precision test results. Usage and problems lead to a number of design ideas for better implementation: some based on existing Boolean strategies, one on the use of weighted proximity to produce ranked output automatically. A comparison of Boolean, quorum and proximate term pair distance strategies is included
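    The proximity matching the abstract describes is typically implemented as a merge over per-term position lists. A minimal sketch (the function name and the position data are hypothetical, for illustration only):

    ```python
    def within_proximity(pos_a, pos_b, k):
        """Return True if some occurrence of term A lies within k word positions of term B."""
        i = j = 0
        # both position lists are assumed sorted ascending, as in an inverted index
        while i < len(pos_a) and j < len(pos_b):
            if abs(pos_a[i] - pos_b[j]) <= k:
                return True
            if pos_a[i] < pos_b[j]:
                i += 1
            else:
                j += 1
        return False

    # word offsets of two terms in one document: 17 and 20 are 3 apart -> match
    print(within_proximity([3, 17, 42], [20, 55], 5))  # True
    ```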
  3. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.06
    0.05866115 = product of:
      0.15642974 = sum of:
        0.089885905 = weight(_text_:storage in 3087) [ClassicSimilarity], result of:
          0.089885905 = score(doc=3087,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.48161435 = fieldWeight in 3087, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0625 = fieldNorm(doc=3087)
        0.047981054 = weight(_text_:retrieval in 3087) [ClassicSimilarity], result of:
          0.047981054 = score(doc=3087,freq=6.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.46309367 = fieldWeight in 3087, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3087)
        0.018562771 = product of:
          0.037125543 = sum of:
            0.037125543 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
              0.037125543 = score(doc=3087,freq=2.0), product of:
                0.119945176 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034252144 = queryNorm
                0.30952093 = fieldWeight in 3087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3087)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Proceedings of the 5th TREC conference held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
  4. Lesk, M.E.; Salton, G.: Relevance assessments and retrieval system evaluation (1969) 0.05
    0.054619618 = product of:
      0.14565231 = sum of:
        0.07865016 = weight(_text_:storage in 4151) [ClassicSimilarity], result of:
          0.07865016 = score(doc=4151,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.42141256 = fieldWeight in 4151, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4151)
        0.041983422 = weight(_text_:retrieval in 4151) [ClassicSimilarity], result of:
          0.041983422 = score(doc=4151,freq=6.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.40520695 = fieldWeight in 4151, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4151)
        0.025018727 = weight(_text_:systems in 4151) [ClassicSimilarity], result of:
          0.025018727 = score(doc=4151,freq=2.0), product of:
            0.10526281 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.034252144 = queryNorm
            0.23767869 = fieldWeight in 4151, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4151)
      0.375 = coord(3/8)
    
    Abstract
    Two widely used criteria for evaluating the effectiveness of information retrieval systems are, respectively, recall and precision. Since the determination of these measures depends on a distinction between documents which are relevant to a given query and documents which are not, it has sometimes been claimed that an accurate, generally valid evaluation cannot be based on recall and precision measures. A study was made to determine the effect of variations in relevance assessments on these measures; it was found that such variations do not produce significant variations in average recall and precision. It thus appears that properly computed recall and precision data may represent effectiveness indicators which are generally valid for many distinct user classes.
    Source
    Information storage and retrieval. 4(1969), S.343-359
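    The two measures the abstract builds on are straightforward set ratios over relevant and retrieved documents; a minimal sketch with invented document IDs:

    ```python
    def recall_precision(relevant, retrieved):
        """Recall = fraction of relevant docs retrieved; precision = fraction of retrieved docs relevant."""
        relevant, retrieved = set(relevant), set(retrieved)
        hits = len(relevant & retrieved)
        recall = hits / len(relevant) if relevant else 0.0
        precision = hits / len(retrieved) if retrieved else 0.0
        return recall, precision

    # hypothetical query: 4 relevant docs, the system returns 5, 3 of them relevant
    print(recall_precision({1, 2, 3, 4}, {2, 3, 4, 8, 9}))  # (0.75, 0.6)
    ```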
  5. Spink, A.; Goodrum, A.: ¬A study of search intermediary working notes : implications for IR system design (1996) 0.05
    0.051730573 = product of:
      0.1379482 = sum of:
        0.07865016 = weight(_text_:storage in 6981) [ClassicSimilarity], result of:
          0.07865016 = score(doc=6981,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.42141256 = fieldWeight in 6981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6981)
        0.034279317 = weight(_text_:retrieval in 6981) [ClassicSimilarity], result of:
          0.034279317 = score(doc=6981,freq=4.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.33085006 = fieldWeight in 6981, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6981)
        0.025018727 = weight(_text_:systems in 6981) [ClassicSimilarity], result of:
          0.025018727 = score(doc=6981,freq=2.0), product of:
            0.10526281 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.034252144 = queryNorm
            0.23767869 = fieldWeight in 6981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6981)
      0.375 = coord(3/8)
    
    Abstract
    Reports findings from an exploratory study investigating working notes created during encoding and external storage (EES) processes by human search intermediaries using a Boolean information retrieval system. Analysis of 221 sets of working notes created by human search intermediaries revealed extensive use of EES processes and the creation of working notes of textual, numerical and graphical entities. Nearly 70% of recorded working notes were textual/numerical entities, nearly 30% were graphical entities and 0.73% were indiscernible. Segmentation devices were also used in 48% of the working notes. The creation of working notes during the EES processes was a fundamental element within the mediated, interactive information retrieval process. Discusses implications for the design of interfaces to support users' EES processes and further research
  6. Good, I.J.: ¬The decision-theory approach to the evaluation of information-retrieval systems (1967) 0.05
    0.047671493 = product of:
      0.12712398 = sum of:
        0.067414425 = weight(_text_:storage in 4154) [ClassicSimilarity], result of:
          0.067414425 = score(doc=4154,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.36121076 = fieldWeight in 4154, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.046875 = fieldNorm(doc=4154)
        0.029382274 = weight(_text_:retrieval in 4154) [ClassicSimilarity], result of:
          0.029382274 = score(doc=4154,freq=4.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.2835858 = fieldWeight in 4154, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4154)
        0.030327275 = weight(_text_:systems in 4154) [ClassicSimilarity], result of:
          0.030327275 = score(doc=4154,freq=4.0), product of:
            0.10526281 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.034252144 = queryNorm
            0.28811008 = fieldWeight in 4154, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4154)
      0.375 = coord(3/8)
    
    Abstract
    It is argued that the evaluation of information-retrieval systems should ultimately be based on the principle of rationality, the maximization of expected utility. In full generality this would involve an estimation of both the cost and value of a system, but the emphasis in this paper is on the problem of value, in terms of which the efficiency of the system could be defined. One implication of the discussion is that it is not legitimate to superimpose the 2x2 contingency tables that refer to selected/discarded and relevant/irrelevant, corresponding to each request, but it might be all right to superimpose them after applying a monotonic function to the entries. In particular, it is questionable whether a useful statistic is the ratio of the total number of relevant selected documents to the total number of relevant ones, over a sample of requests.
    Source
    Information storage review. 3(1967), S.31-34
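    Good's caution about superimposing the per-request tables can be seen numerically: pooling counts across requests (micro-averaging) and averaging the per-request recall ratios (macro-averaging) generally disagree when requests have different numbers of relevant documents. A sketch with invented counts:

    ```python
    # (relevant_retrieved, relevant_total) per request -- invented illustrative counts
    requests = [(9, 10), (1, 5)]

    # macro: average the per-request recall ratios
    macro = sum(a / n for a, n in requests) / len(requests)

    # micro: superimpose the tables first, then take one pooled ratio
    micro = sum(a for a, _ in requests) / sum(n for _, n in requests)

    print(macro, micro)  # the two averages differ: 0.55 vs. 10/15
    ```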
  7. TREC: experiment and evaluation in information retrieval (2005) 0.04
    0.039905693 = product of:
      0.10641518 = sum of:
        0.03972433 = weight(_text_:storage in 636) [ClassicSimilarity], result of:
          0.03972433 = score(doc=636,freq=4.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.21284549 = fieldWeight in 636, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.051214527 = weight(_text_:retrieval in 636) [ClassicSimilarity], result of:
          0.051214527 = score(doc=636,freq=70.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.49430186 = fieldWeight in 636, product of:
              8.3666 = tf(freq=70.0), with freq of:
                70.0 = termFreq=70.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.015476325 = weight(_text_:systems in 636) [ClassicSimilarity], result of:
          0.015476325 = score(doc=636,freq=6.0), product of:
            0.10526281 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.034252144 = queryNorm
            0.14702557 = fieldWeight in 636, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
      0.375 = coord(3/8)
    
    Abstract
    The Text REtrieval Conference (TREC), a yearly workshop hosted by the US government's National Institute of Standards and Technology, provides the infrastructure necessary for large-scale evaluation of text retrieval methodologies. With the goal of accelerating research in this area, TREC created the first large test collections of full-text documents and standardized retrieval evaluation. The impact has been significant; since TREC's beginning in 1992, retrieval effectiveness has approximately doubled. TREC has built a variety of large test collections, including collections for such specialized retrieval tasks as cross-language retrieval and retrieval of speech. Moreover, TREC has accelerated the transfer of research ideas into commercial systems, as demonstrated in the number of retrieval techniques developed in TREC that are now used in Web search engines. This book provides a comprehensive review of TREC research, summarizing the variety of TREC results, documenting the best practices in experimental information retrieval, and suggesting areas for further research. The first part of the book describes TREC's history, test collections, and retrieval methodology. Next, the book provides "track" reports -- describing the evaluations of specific tasks, including routing and filtering, interactive retrieval, and retrieving noisy text. The final part of the book offers perspectives on TREC from such participants as Microsoft Research, University of Massachusetts, Cornell University, University of Waterloo, City University of New York, and IBM. The book will be of interest to researchers in information retrieval and related technologies, including natural language processing.
    Content
    Enthält die Beiträge: 1. The Text REtrieval Conference - Ellen M. Voorhees and Donna K. Harman 2. The TREC Test Collections - Donna K. Harman 3. Retrieval System Evaluation - Chris Buckley and Ellen M. Voorhees 4. The TREC Ad Hoc Experiments - Donna K. Harman 5. Routing and Filtering - Stephen Robertson and Jamie Callan 6. The TREC Interactive Tracks: Putting the User into Search - Susan T. Dumais and Nicholas J. Belkin 7. Beyond English - Donna K. Harman 8. Retrieving Noisy Text - Ellen M. Voorhees and John S. Garofolo 9.The Very Large Collection and Web Tracks - David Hawking and Nick Craswell 10. Question Answering in TREC - Ellen M. Voorhees 11. The University of Massachusetts and a Dozen TRECs - James Allan, W. Bruce Croft and Jamie Callan 12. How Okapi Came to TREC - Stephen Robertson 13. The SMART Project at TREC - Chris Buckley 14. Ten Years of Ad Hoc Retrieval at TREC Using PIRCS - Kui-Lam Kwok 15. MultiText Experiments for TREC - Gordon V. Cormack, Charles L. A. Clarke, Christopher R. Palmer and Thomas R. Lynam 16. A Language-Modeling Approach to TREC - Djoerd Hiemstra and Wessel Kraaij 17. BM Research Activities at TREC - Eric W. Brown, David Carmel, Martin Franz, Abraham Ittycheriah, Tapas Kanungo, Yoelle Maarek, J. Scott McCarley, Robert L. Mack, John M. Prager, John R. Smith, Aya Soffer, Jason Y. Zien and Alan D. Marwick Epilogue: Metareflections on TREC - Karen Sparck Jones
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC impact has been very important and its success has been mainly supported by its continuous adaptation to the emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation makes it difficult sometimes to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Sparck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
    LCSH
    Information storage and retrieval systems / Congresses
    Text REtrieval Conference
    RSWK
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Kongress / Information Retrieval / Kongress (GBV)
  8. Kelledy, F.; Smeaton, A.F.: Thresholding the postings lists in information retrieval : experiments on TREC data (1995) 0.03
    0.03450592 = product of:
      0.13802367 = sum of:
        0.07865016 = weight(_text_:storage in 5804) [ClassicSimilarity], result of:
          0.07865016 = score(doc=5804,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.42141256 = fieldWeight in 5804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5804)
        0.05937352 = weight(_text_:retrieval in 5804) [ClassicSimilarity], result of:
          0.05937352 = score(doc=5804,freq=12.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.5730491 = fieldWeight in 5804, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5804)
      0.25 = coord(2/8)
    
    Abstract
    A variety of methods for speeding up the response time of information retrieval processes have been put forward, one of which is the idea of thresholding. Thresholding relies on the data in information retrieval storage structures being organised to allow cut-off points to be used during processing. These cut-off points or thresholds are designed and used to reduce the amount of information processed and to maintain the quality, or minimise the degradation, of the response to a user's query. TREC is an annual series of benchmarking exercises to compare indexing and retrieval techniques. Reports experiments with a portion of the TREC data where features are introduced into the retrieval process to improve response time. These features improve response time while maintaining the same level of retrieval effectiveness
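    The thresholding idea the abstract describes amounts to cutting off each postings list after a fixed number of entries during score accumulation. A minimal sketch, assuming postings sorted best-first by within-document weight (data structure, names and weights here are hypothetical, not the paper's actual design):

    ```python
    def thresholded_scores(postings, term_weights, cutoff):
        """Accumulate document scores, processing at most `cutoff` postings per term."""
        scores = {}
        for term, doc_ids in postings.items():
            # the threshold: ignore everything past the cut-off point in each list
            for doc_id in doc_ids[:cutoff]:
                scores[doc_id] = scores.get(doc_id, 0.0) + term_weights[term]
        return scores

    postings = {"storage": [1, 2, 3], "retrieval": [2, 4]}
    weights = {"storage": 1.0, "retrieval": 2.0}
    print(thresholded_scores(postings, weights, cutoff=2))  # doc 3 is cut off
    ```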
  9. ¬The Fourth Text Retrieval Conference (TREC-4) (1996) 0.03
    0.03446674 = product of:
      0.13786696 = sum of:
        0.089885905 = weight(_text_:storage in 7521) [ClassicSimilarity], result of:
          0.089885905 = score(doc=7521,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.48161435 = fieldWeight in 7521, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0625 = fieldNorm(doc=7521)
        0.047981054 = weight(_text_:retrieval in 7521) [ClassicSimilarity], result of:
          0.047981054 = score(doc=7521,freq=6.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.46309367 = fieldWeight in 7521, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=7521)
      0.25 = coord(2/8)
    
    Abstract
    Proceedings of the 4th TREC conference held in Gaithersburg, MD, Nov 1-3, 1995. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automatic thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
  10. ¬The Sixth Text Retrieval Conference (TREC-6) (1998) 0.03
    0.03446674 = product of:
      0.13786696 = sum of:
        0.089885905 = weight(_text_:storage in 4476) [ClassicSimilarity], result of:
          0.089885905 = score(doc=4476,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.48161435 = fieldWeight in 4476, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0625 = fieldNorm(doc=4476)
        0.047981054 = weight(_text_:retrieval in 4476) [ClassicSimilarity], result of:
          0.047981054 = score(doc=4476,freq=6.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.46309367 = fieldWeight in 4476, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4476)
      0.25 = coord(2/8)
    
    Abstract
    Proceedings of the 6th TREC conference held in Gaithersburg, Maryland, Nov 19-21, 1997. The aim of the conference was to discuss retrieval techniques for large test collections. 51 research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
  11. Salton, G.: Thoughts about modern retrieval technologies (1988) 0.03
    0.025722325 = product of:
      0.1028893 = sum of:
        0.07865016 = weight(_text_:storage in 1522) [ClassicSimilarity], result of:
          0.07865016 = score(doc=1522,freq=2.0), product of:
            0.1866346 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.034252144 = queryNorm
            0.42141256 = fieldWeight in 1522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1522)
        0.024239138 = weight(_text_:retrieval in 1522) [ClassicSimilarity], result of:
          0.024239138 = score(doc=1522,freq=2.0), product of:
            0.10360982 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.034252144 = queryNorm
            0.23394634 = fieldWeight in 1522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1522)
      0.25 = coord(2/8)
    
    Abstract
    Paper presented at the 30th Annual Conference of the National Federation of Abstracting and Information Services, Philadelphia, 28 Feb-2 Mar 88. In recent years, as the amount and variety of available machine-readable data have grown, new technologies have been introduced, such as high-density storage devices and fancy graphic displays useful for information transformation and access. New approaches have also been considered for processing the stored data, based on the construction of knowledge bases representing the contents and structure of the information, and on the use of expert system techniques to control the user-system interactions. Provides a brief evaluation of the new information processing technologies, and of the software methods proposed for information manipulation.