Search (6 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • type_ss:"s"
  1. Cross-language information retrieval (1998) 0.02
    0.021839045 = sum of:
      0.0074793627 = product of:
        0.02991745 = sum of:
          0.02991745 = weight(_text_:authors in 6299) [ClassicSimilarity], result of:
            0.02991745 = score(doc=6299,freq=2.0), product of:
              0.23758973 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052116565 = queryNorm
              0.12592064 = fieldWeight in 6299, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=6299)
        0.25 = coord(1/4)
      0.014359683 = product of:
        0.028719366 = sum of:
          0.028719366 = weight(_text_:g in 6299) [ClassicSimilarity], result of:
            0.028719366 = score(doc=6299,freq=4.0), product of:
              0.19574708 = queryWeight, product of:
                3.7559474 = idf(docFreq=2809, maxDocs=44218)
                0.052116565 = queryNorm
              0.1467167 = fieldWeight in 6299, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7559474 = idf(docFreq=2809, maxDocs=44218)
                0.01953125 = fieldNorm(doc=6299)
        0.5 = coord(1/2)
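     The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: for each matching clause, tf = sqrt(termFreq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, the clause score is queryWeight × fieldWeight, and the result is scaled by a coord factor for the fraction of query clauses that matched. A minimal Python sketch reproduces the first summand (0.0074793627); only the ClassicSimilarity formula structure is assumed, all constants are copied verbatim from the explain tree above:

     import math

     # Constants copied from the explain output for the "_text_:authors"
     # clause of doc 6299; only the ClassicSimilarity formula is assumed.
     freq = 2.0
     idf = 4.558814            # idf(docFreq=1258, maxDocs=44218)
     query_norm = 0.052116565
     field_norm = 0.01953125
     coord = 0.25              # coord(1/4): one of four query clauses matched

     tf = math.sqrt(freq)                        # 1.4142135
     query_weight = idf * query_norm             # 0.23758973 = queryWeight
     field_weight = tf * idf * field_norm        # 0.12592064 = fieldWeight
     clause_score = query_weight * field_weight  # 0.02991745
     contribution = clause_score * coord         # 0.0074793627, first summand

     print(contribution)

     The second summand (the "_text_:g" clause) follows the same arithmetic with freq=4.0, idf=3.7559474 and coord(1/2), and the two contributions sum to the document score of 0.021839045.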
    
    Content
     Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
    Editor
    Grefenstette, G.
    Footnote
     The retrieved output from a query including the phrase 'big rockets' may be, for instance, a sentence containing 'giant rocket' which is semantically ranked above 'military rocket'. David Hull (Xerox Research Centre, Grenoble) describes an implementation of a weighted Boolean model for Spanish-English CLIR. Users construct Boolean-type queries, weighting each term in the query, which is then translated by an on-line dictionary before being applied to the database. Comparisons with the performance of unweighted free-form queries ('vector space' models) proved encouraging. Two contributions consider the evaluation of CLIR systems. In order to by-pass the time-consuming and expensive process of assembling a standard collection of documents and of user queries against which the performance of a CLIR system is manually assessed, Páraic Sheridan et al (ETH Zurich) propose a method based on retrieving 'seed documents'. This involves identifying a unique document in a database (the 'seed document') and, for a number of queries, measuring how fast it is retrieved. The authors have also assembled a large database of multilingual news documents for testing purposes. By storing the (fairly short) documents in a structured form tagged with descriptor codes (e.g. for topic, country and area), the test suite is easily expanded while remaining consistent for the purposes of testing. Douglas Oard and Bonnie Dorr (University of Maryland) describe an evaluation methodology, designed for testing CLIR systems, which appears to apply LSI techniques in order to filter and rank incoming documents. The volume provides the reader with an excellent overview of several projects in CLIR. It is well supported with references and is intended as a secondary text for researchers and practitioners. It highlights the need for a good, general tutorial introduction to the field."
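     Hull's weighted Boolean model is only summarized in the footnote above, so the following is a rough sketch of the workflow it describes (weighted query terms, dictionary translation, matching against the target-language collection), not his actual formulation; the dictionary entries, weights, and the simple additive scoring rule are invented for illustration only:

     # Illustrative sketch only: weighted query terms are translated with a
     # small bilingual dictionary, then documents are scored by the summed
     # weights of the query terms whose translations they contain (a crude
     # soft-AND, not Hull's actual weighted Boolean operators).
     BILINGUAL_DICT = {            # hypothetical Spanish->English entries
         "cohete": ["rocket"],
         "grande": ["big", "large", "giant"],
     }

     def translate(term):
         return BILINGUAL_DICT.get(term, [term])

     def score(weighted_query, doc_tokens):
         total = 0.0
         for term, weight in weighted_query:
             if any(t in doc_tokens for t in translate(term)):
                 total += weight
         return total

     docs = {
         "d1": {"giant", "rocket", "launch"},
         "d2": {"military", "rocket"},
     }
     query = [("cohete", 0.7), ("grande", 0.3)]
     ranking = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
     print(ranking)   # d1 matches both translated terms, d2 only one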
  2. Information retrieval experiment (1981) 0.01
    0.01421536 = product of:
      0.02843072 = sum of:
        0.02843072 = product of:
          0.05686144 = sum of:
            0.05686144 = weight(_text_:g in 2653) [ClassicSimilarity], result of:
              0.05686144 = score(doc=2653,freq=2.0), product of:
                0.19574708 = queryWeight, product of:
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.052116565 = queryNorm
                0.29048425 = fieldWeight in 2653, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7559474 = idf(docFreq=2809, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2653)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
     Contains the contributions: ROBERTSON, S.E.: The methodology of information retrieval experiment; RIJSBERGEN, C.J. van: Retrieval effectiveness; BELKIN, N.: Ineffable concepts in information retrieval; TAGUE, J.M.: The pragmatics of information retrieval experimentation; LANCASTER, F.W.: Evaluation within the environment of an operating information service; BARRACLOUGH, E.D.: Opportunities for testing with online systems; KEEN, M.E.: Laboratory tests of manual systems; ODDY, R.N.: Laboratory tests: automatic systems; HEINE, M.D.: Simulation, and simulation experiments; COOPER, W.S.: Gedanken experimentation: an alternative to traditional system testing?; SPARCK JONES, K.: Actual tests - retrieval system tests; EVANS, L.: An experiment: search strategy variation in SDI profiles; SALTON, G.: The Smart environment for retrieval system evaluation - advantage and problem areas
  3. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.01
    0.01412215 = product of:
      0.0282443 = sum of:
        0.0282443 = product of:
          0.0564886 = sum of:
            0.0564886 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
              0.0564886 = score(doc=3087,freq=2.0), product of:
                0.18250333 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052116565 = queryNorm
                0.30952093 = fieldWeight in 3087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3087)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Proceedings of the 5th TREC conference held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
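     Cross-system comparison at TREC rests on scoring every submitted run against the same relevance judgments for the same collection (NIST's trec_eval computes the official measures). The sketch below illustrates one common measure, uninterpolated average precision, on invented run and judgment data, purely to show how runs over a shared collection become comparable:

     def average_precision(ranked_doc_ids, relevant_ids):
         """Uninterpolated average precision of one ranked run for one topic."""
         hits = 0
         precision_sum = 0.0
         for rank, doc_id in enumerate(ranked_doc_ids, start=1):
             if doc_id in relevant_ids:
                 hits += 1
                 precision_sum += hits / rank
         return precision_sum / len(relevant_ids) if relevant_ids else 0.0

     # Invented example: two systems ranked the same collection for one topic.
     relevant = {"d3", "d7"}
     run_a = ["d3", "d1", "d7", "d9"]      # relevant docs at ranks 1 and 3
     run_b = ["d1", "d9", "d3", "d7"]      # relevant docs at ranks 3 and 4
     print(average_precision(run_a, relevant))   # (1/1 + 2/3) / 2 = 0.833...
     print(average_precision(run_b, relevant))   # (1/3 + 2/4) / 2 = 0.416...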
  4. ¬The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.01
    0.01412215 = product of:
      0.0282443 = sum of:
        0.0282443 = product of:
          0.0564886 = sum of:
            0.0564886 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
              0.0564886 = score(doc=4049,freq=2.0), product of:
                0.18250333 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052116565 = queryNorm
                0.30952093 = fieldWeight in 4049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4049)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Proceedings of the 11th TREC conference held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was to discuss retrieval and related information-seeking tasks for large test collections. 93 research groups used different techniques for information retrieval from the same large database. This procedure makes it possible to compare the results. The tasks were: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
  5. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.01
    0.010591612 = product of:
      0.021183224 = sum of:
        0.021183224 = product of:
          0.04236645 = sum of:
            0.04236645 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
              0.04236645 = score(doc=3564,freq=2.0), product of:
                0.18250333 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052116565 = queryNorm
                0.23214069 = fieldWeight in 3564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3564)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
     9.1.1996 10:22:31
  6. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.0037396813 = product of:
      0.0074793627 = sum of:
        0.0074793627 = product of:
          0.02991745 = sum of:
            0.02991745 = weight(_text_:authors in 636) [ClassicSimilarity], result of:
              0.02991745 = score(doc=636,freq=2.0), product of:
                0.23758973 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052116565 = queryNorm
                0.12592064 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Footnote
     ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. It is well structured and written; chapters are self-contained, and references to specialized and more detailed publications appear throughout, which makes it easier to expand on the different aspects analyzed in the text. The book succeeds in compiling the evolution of TREC from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort made by the authors and their experience in the field, it can satisfy the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers wishing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."