Search (428 results, page 1 of 22)

  • theme_ss:"Retrievalstudien"
  1. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.10
    0.104680784 = product of:
      0.13957438 = sum of:
        0.0050284253 = weight(_text_:e in 2339) [ClassicSimilarity], result of:
          0.0050284253 = score(doc=2339,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.07940422 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.053581126 = weight(_text_:et in 2339) [ClassicSimilarity], result of:
          0.053581126 = score(doc=2339,freq=2.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.2591991 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.08096482 = sum of:
          0.0511189 = weight(_text_:al in 2339) [ClassicSimilarity], result of:
            0.0511189 = score(doc=2339,freq=2.0), product of:
              0.20191248 = queryWeight, product of:
                4.582931 = idf(docFreq=1228, maxDocs=44218)
                0.0440575 = queryNorm
              0.25317356 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.582931 = idf(docFreq=1228, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
          0.029845916 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
            0.029845916 = score(doc=2339,freq=2.0), product of:
              0.15428185 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0440575 = queryNorm
              0.19345059 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
      0.75 = coord(3/4)
    
    Date
    22. 9.1997 19:16:05
    Language
    e
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
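
The indented block under each hit is the engine's Lucene ClassicSimilarity (TF-IDF) explain output: for every matching query term, a queryWeight (idf × queryNorm) is multiplied by a fieldWeight (tf × idf × fieldNorm), the per-term scores are summed, and the sum is scaled by the coordination factor coord(matching clauses / total clauses). A minimal Python sketch reproducing the arithmetic of hit 1 from the values printed above (the helper names are illustrative, not the engine's own code):

```python
# Reproduces the ClassicSimilarity explain tree for hit 1 (doc 2339) above.
import math

query_norm = 0.0440575           # queryNorm from the listing
field_norm = 0.0390625           # fieldNorm(doc=2339) from the listing

def term_score(freq, idf):
    """queryWeight (idf * queryNorm) times fieldWeight (tf * idf * fieldNorm)."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    return (idf * query_norm) * (tf * idf * field_norm)

s_e  = term_score(2.0, 1.43737)           # ~0.0050284
s_et = term_score(2.0, 4.692005)          # ~0.0535811
s_al = term_score(2.0, 4.582931)          # ~0.0511189
s_22 = term_score(2.0, 3.5018296)         # ~0.0298459

raw = s_e + s_et + (s_al + s_22)          # sum of clause scores, ~0.1395744
score = raw * 3 / 4                       # coord(3/4): 3 of 4 query clauses matched
print(round(score, 9))                    # close to the listed 0.104680784, shown as 0.10
```
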
  2. Gilchrist, A.: Research and consultancy (1998) 0.10
    0.101002805 = product of:
      0.1346704 = sum of:
        0.00804548 = weight(_text_:e in 1394) [ClassicSimilarity], result of:
          0.00804548 = score(doc=1394,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.12704675 = fieldWeight in 1394, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0625 = fieldNorm(doc=1394)
        0.0857298 = weight(_text_:et in 1394) [ClassicSimilarity], result of:
          0.0857298 = score(doc=1394,freq=2.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.41471857 = fieldWeight in 1394, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1394)
        0.04089512 = product of:
          0.08179024 = sum of:
            0.08179024 = weight(_text_:al in 1394) [ClassicSimilarity], result of:
              0.08179024 = score(doc=1394,freq=2.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.4050777 = fieldWeight in 1394, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1394)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Language
    e
    Source
    Library and information work worldwide 1998. Ed.: M.B. Line et al
  3. Chevallet, J.-P.; Bruandet, M.F.: Impact de l'utilisation de multi-termes sur la qualité des réponses d'un système de recherche d'information à indexation automatique (1999) 0.09
    0.094691746 = product of:
      0.18938349 = sum of:
        0.14848837 = weight(_text_:et in 6253) [ClassicSimilarity], result of:
          0.14848837 = score(doc=6253,freq=6.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.7183137 = fieldWeight in 6253, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0625 = fieldNorm(doc=6253)
        0.04089512 = product of:
          0.08179024 = sum of:
            0.08179024 = weight(_text_:al in 6253) [ClassicSimilarity], result of:
              0.08179024 = score(doc=6253,freq=2.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.4050777 = fieldWeight in 6253, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6253)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Series
    Collection travaux et recherches; UL3
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
  4. Boros, E.; Kantor, P.B.; Neu, D.J.: Pheromonic representation of user quests by digital structures (1999) 0.09
    0.089274704 = product of:
      0.11903294 = sum of:
        0.007111267 = weight(_text_:e in 6684) [ClassicSimilarity], result of:
          0.007111267 = score(doc=6684,freq=4.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.112294525 = fieldWeight in 6684, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6684)
        0.075775154 = weight(_text_:et in 6684) [ClassicSimilarity], result of:
          0.075775154 = score(doc=6684,freq=4.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.3665629 = fieldWeight in 6684, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6684)
        0.03614652 = product of:
          0.07229304 = sum of:
            0.07229304 = weight(_text_:al in 6684) [ClassicSimilarity], result of:
              0.07229304 = score(doc=6684,freq=4.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.3580415 = fieldWeight in 6684, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6684)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In a novel approach to information finding in networked environments, each user's specific purpose or "quest" can be represented in numerous ways. The most familiar is a list of keywords, or a natural language sentence or paragraph. More effective is an extended text that has been judged as to relevance. This forms the basis of relevance feedback, as it is used in information retrieval. In the "Ant World" project (Ant World, 1999; Kantor et al., 1999b; Kantor et al., 1999a), the items to be retrieved are not documents, but rather quests, represented by entire collections of judged documents. In order to save space and time we have developed methods for representing these complex entities in a short string of about 1,000 bytes, which we call a "Digital Information Pheromone" (DIP). The principles for determining the DIP for a given quest, and for matching DIPs to each other are presented. The effectiveness of this scheme is explored with some applications to the large judged collections of TREC documents
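
The abstract above does not spell out how a DIP is constructed, so the following is only an assumed illustration of the general idea, not the authors' method: fold the terms of a quest's judged documents into a fixed signature of about 1,000 bytes and compare quests by the similarity of their signatures. The hashing scheme and all names are hypothetical.

```python
# Hypothetical sketch: a quest (collection of judged documents) compressed into
# a fixed-length byte signature, compared to other quests by cosine similarity.
import hashlib
import math

DIP_SIZE = 1000   # bytes, echoing the ~1,000-byte budget in the abstract

def make_dip(judged_documents):
    """Fold the terms of all judged documents into a fixed-length byte vector."""
    counts = [0] * DIP_SIZE
    for text in judged_documents:
        for term in text.lower().split():
            slot = int(hashlib.md5(term.encode()).hexdigest(), 16) % DIP_SIZE
            counts[slot] += 1
    peak = max(counts) or 1
    return bytes(min(255, c * 255 // peak) for c in counts)

def dip_similarity(a, b):
    """Cosine similarity of two signatures; higher means more similar quests."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

quest_a = make_dip(["relevance feedback in information retrieval"])
quest_b = make_dip(["judged documents and relevance feedback"])
print(round(dip_similarity(quest_a, quest_b), 3))
```
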
    Language
    e
  5. Hider, P.: The search value added by professional indexing to a bibliographic database (2017) 0.09
    0.08771258 = product of:
      0.1169501 = sum of:
        0.0050284253 = weight(_text_:e in 3868) [ClassicSimilarity], result of:
          0.0050284253 = score(doc=3868,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.07940422 = fieldWeight in 3868, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3868)
        0.075775154 = weight(_text_:et in 3868) [ClassicSimilarity], result of:
          0.075775154 = score(doc=3868,freq=4.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.3665629 = fieldWeight in 3868, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3868)
        0.03614652 = product of:
          0.07229304 = sum of:
            0.07229304 = weight(_text_:al in 3868) [ClassicSimilarity], result of:
              0.07229304 = score(doc=3868,freq=4.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.3580415 = fieldWeight in 3868, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3868)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Gross et al. (2015) have demonstrated that about a quarter of hits would typically be lost to keyword searchers if contemporary academic library catalogs dropped their controlled subject headings. This paper reports on an analysis of the loss levels that would result if a bibliographic database, namely the Australian Education Index (AEI), were missing the subject descriptors and identifiers assigned by its professional indexers, employing the methodology developed by Gross and Taylor (2005), and later by Gross et al. (2015). The results indicate that AEI users would lose a similar proportion of hits per query to that experienced by library catalog users: on average, 27% of the resources found by a sample of keyword queries on the AEI database would not have been found without the subject indexing, based on the Australian Thesaurus of Education Descriptors (ATED). The paper also discusses the methodological limitations of these studies, pointing out that real-life users might still find some of the resources missed by a particular query through follow-up searches, while additional resources might also be found through iterative searching on the subject vocabulary. The paper goes on to describe a new research design, based on a before-and-after experiment, which addresses some of these limitations. It is argued that this alternative design will provide a more realistic picture of the value that professionally assigned subject indexing and controlled subject vocabularies can add to literature searching of a more scholarly and thorough kind.
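
A rough sketch of the hit-loss measure used in these studies (an assumed illustration, not Gross et al.'s actual procedure): for each query, the loss is the share of records retrieved with the subject indexing in place that a keyword-only search misses, averaged over the query sample.

```python
# Assumed illustration of the hit-loss measure; record ids and counts are made up.
def average_loss(queries):
    """queries: list of (hits_with_indexing, hits_keyword_only) sets of record ids."""
    losses = []
    for with_indexing, keyword_only in queries:
        if not with_indexing:
            continue                       # skip queries with no hits at all
        lost = with_indexing - keyword_only
        losses.append(len(lost) / len(with_indexing))
    return sum(losses) / len(losses) if losses else 0.0

sample = [
    ({"r1", "r2", "r3", "r4"}, {"r1", "r2", "r3"}),   # 1 of 4 hits lost
    ({"r5", "r6", "r7", "r8"}, {"r5", "r6", "r8"}),   # 1 of 4 hits lost
]
print(average_loss(sample))   # 0.25, i.e. about a quarter of hits per query
```
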
    Language
    e
  6. Hider, P.: The search value added by professional indexing to a bibliographic database (2018) 0.09
    0.08771258 = product of:
      0.1169501 = sum of:
        0.0050284253 = weight(_text_:e in 4300) [ClassicSimilarity], result of:
          0.0050284253 = score(doc=4300,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.07940422 = fieldWeight in 4300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4300)
        0.075775154 = weight(_text_:et in 4300) [ClassicSimilarity], result of:
          0.075775154 = score(doc=4300,freq=4.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.3665629 = fieldWeight in 4300, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4300)
        0.03614652 = product of:
          0.07229304 = sum of:
            0.07229304 = weight(_text_:al in 4300) [ClassicSimilarity], result of:
              0.07229304 = score(doc=4300,freq=4.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.3580415 = fieldWeight in 4300, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4300)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Gross et al. (2015) have demonstrated that about a quarter of hits would typically be lost to keyword searchers if contemporary academic library catalogs dropped their controlled subject headings. This article reports on an investigation of the search value that subject descriptors and identifiers assigned by professional indexers add to a bibliographic database, namely the Australian Education Index (AEI). First, a similar methodology to that developed by Gross et al. (2015) was applied, with keyword searches representing a range of educational topics run on the AEI database with and without its subject indexing. The results indicated that AEI users would also lose, on average, about a quarter of hits per query. Second, an alternative research design was applied in which an experienced literature searcher was asked to find resources on a set of educational topics on an AEI database stripped of its subject indexing and then asked to search for additional resources on the same topics after the subject indexing had been reinserted. In this study, the proportion of additional resources that would have been lost had it not been for the subject indexing was again found to be about a quarter of the total resources found for each topic, on average.
    Language
    e
  7. Cross-language information retrieval (1998) 0.07
    0.071073726 = product of:
      0.09476496 = sum of:
        0.0035556336 = weight(_text_:e in 6299) [ClassicSimilarity], result of:
          0.0035556336 = score(doc=6299,freq=4.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.056147262 = fieldWeight in 6299, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
        0.05990552 = weight(_text_:et in 6299) [ClassicSimilarity], result of:
          0.05990552 = score(doc=6299,freq=10.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.28979343 = fieldWeight in 6299, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.01953125 = fieldNorm(doc=6299)
        0.031303804 = product of:
          0.06260761 = sum of:
            0.06260761 = weight(_text_:al in 6299) [ClassicSimilarity], result of:
              0.06260761 = score(doc=6299,freq=12.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.31007302 = fieldWeight in 6299, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=6299)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Contains the contributions: GREFENSTETTE, G.: The Problem of Cross-Language Information Retrieval; DAVIS, M.W.: On the Effective Use of Large Parallel Corpora in Cross-Language Text Retrieval; BALLESTEROS, L. and W.B. CROFT: Statistical Methods for Cross-Language Information Retrieval; Distributed Cross-Lingual Information Retrieval; Automatic Cross-Language Information Retrieval Using Latent Semantic Indexing; EVANS, D.A. et al.: Mapping Vocabularies Using Latent Semantics; PICCHI, E. and C. PETERS: Cross-Language Information Retrieval: A System for Comparable Corpus Querying; YAMABANA, K. et al.: A Language Conversion Front-End for Cross-Language Information Retrieval; GACHOT, D.A. et al.: The Systran NLP Browser: An Application of Machine Translation Technology in Cross-Language Information Retrieval; HULL, D.: A Weighted Boolean Model for Cross-Language Text Retrieval; SHERIDAN, P. et al.: Building a Large Multilingual Test Collection from Comparable News Documents; OARD, D.W. and B.J. DORR: Evaluating Cross-Language Text Filtering Effectiveness
    Footnote
    Christian Fluhr et al. (DIST/SMTI, France) outline the EMIR (European Multilingual Information Retrieval) and ESPRIT projects. They found that using SYSTRAN to machine translate queries and to access material from various multilingual databases produced less relevant results than a method referred to as 'multilingual reformulation' (the mechanics of which are only hinted at). An interesting technique is Latent Semantic Indexing (LSI), described by Michael Littman et al. (Brown University) and, most clearly, by David Evans et al. (Carnegie Mellon University). LSI involves creating matrices of documents and the terms they contain and 'fitting' related documents into a reduced matrix space. This effectively allows queries to be mapped onto a common semantic representation of the documents. Eugenio Picchi and Carol Peters (Pisa) report on a procedure to create links between translation equivalents in an Italian-English parallel corpus. The links are used to construct parallel linguistic contexts in real-time for any term or combination of terms that is being searched for in either language. Their interest is primarily lexicographic, but they plan to apply the same procedure to comparable corpora, i.e. to texts which are not translations of each other but which share the same domain. Kiyoshi Yamabana et al. (NEC, Japan) address the issue of how to disambiguate between alternative translations of query terms. Their DMAX (double maximise) method looks at co-occurrence frequencies between both source language words and target language words in order to arrive at the most probable translation. The statistical data for the decision are derived not from the translated texts but independently from monolingual corpora in each language. An interactive user interface allows the user to influence the selection of terms during the matching process. Denis Gachot et al. (SYSTRAN) describe the SYSTRAN NLP browser, a prototype tool which collects parsing information derived from a text or corpus previously translated with SYSTRAN. The user enters queries into the browser in either a structured or free form and receives grammatical and lexical information about the source text and/or its translation.
    The retrieved output from a query including the phrase 'big rockets' may be, for instance, a sentence containing 'giant rocket' which is semantically ranked above 'military rocket'. David Hull (Xerox Research Centre, Grenoble) describes an implementation of a weighted Boolean model for Spanish-English CLIR. Users construct Boolean-type queries, weighting each term in the query, which is then translated by an on-line dictionary before being applied to the database. Comparisons with the performance of unweighted free-form queries ('vector space' models) proved encouraging. Two contributions consider the evaluation of CLIR systems. In order to bypass the time-consuming and expensive process of assembling a standard collection of documents and of user queries against which the performance of a CLIR system is manually assessed, Páraic Sheridan et al. (ETH Zurich) propose a method based on retrieving 'seed documents'. This involves identifying a unique document in a database (the 'seed document') and, for a number of queries, measuring how fast it is retrieved. The authors have also assembled a large database of multilingual news documents for testing purposes. By storing the (fairly short) documents in a structured form tagged with descriptor codes (e.g. for topic, country and area), the test suite is easily expanded while remaining consistent for the purposes of testing. Douglas Oard and Bonnie Dorr (University of Maryland) describe an evaluation methodology which appears to apply LSI techniques in order to filter and rank incoming documents designed for testing CLIR systems. The volume provides the reader with an excellent overview of several projects in CLIR. It is well supported with references and is intended as a secondary text for researchers and practitioners. It highlights the need for a good, general tutorial introduction to the field."
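
For readers unfamiliar with LSI as the reviewer sketches it, a toy example follows (the term-document matrix and all values are invented; this is not code from any of the projects discussed): documents are folded into a reduced matrix space via a truncated SVD, a query is mapped into the same space, and documents are ranked by similarity there.

```python
# Toy LSI example with an invented term-document matrix (rows = terms, cols = docs).
import numpy as np

A = np.array([
    [2, 0, 1, 0],   # "retrieval"
    [1, 1, 0, 0],   # "cross-language"
    [0, 2, 1, 0],   # "translation"
    [0, 0, 1, 2],   # "corpus"
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                    # keep two latent dimensions
docs_k = (np.diag(s[:k]) @ Vt[:k]).T     # documents in the reduced space

q = np.array([1, 0, 1, 0], dtype=float)  # query: "retrieval translation"
q_k = q @ U[:, :k]                       # fold the query into the same space
scores = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q_k))
print(np.argsort(-scores))               # document indices, best match first
```
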
    Language
    e
  8. Cleverdon, C.W.; Mills, J.: ¬The testing of index language devices (1985) 0.05
    0.050501402 = product of:
      0.0673352 = sum of:
        0.00402274 = weight(_text_:e in 3643) [ClassicSimilarity], result of:
          0.00402274 = score(doc=3643,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.063523374 = fieldWeight in 3643, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.03125 = fieldNorm(doc=3643)
        0.0428649 = weight(_text_:et in 3643) [ClassicSimilarity], result of:
          0.0428649 = score(doc=3643,freq=2.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.20735928 = fieldWeight in 3643, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3643)
        0.02044756 = product of:
          0.04089512 = sum of:
            0.04089512 = weight(_text_:al in 3643) [ClassicSimilarity], result of:
              0.04089512 = score(doc=3643,freq=2.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.20253885 = fieldWeight in 3643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3643)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Language
    e
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  9. Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985) 0.05
    0.050501402 = product of:
      0.0673352 = sum of:
        0.00402274 = weight(_text_:e in 3649) [ClassicSimilarity], result of:
          0.00402274 = score(doc=3649,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.063523374 = fieldWeight in 3649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
        0.0428649 = weight(_text_:et in 3649) [ClassicSimilarity], result of:
          0.0428649 = score(doc=3649,freq=2.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.20735928 = fieldWeight in 3649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
        0.02044756 = product of:
          0.04089512 = sum of:
            0.04089512 = weight(_text_:al in 3649) [ClassicSimilarity], result of:
              0.04089512 = score(doc=3649,freq=2.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.20253885 = fieldWeight in 3649, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3649)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Language
    e
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  10. Greisdorf, H.; O'Connor, B.: Nodes of topicality modeling user notions of on topic documents (2003) 0.03
    0.029304776 = product of:
      0.058609553 = sum of:
        0.0050284253 = weight(_text_:e in 5175) [ClassicSimilarity], result of:
          0.0050284253 = score(doc=5175,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.07940422 = fieldWeight in 5175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5175)
        0.053581126 = weight(_text_:et in 5175) [ClassicSimilarity], result of:
          0.053581126 = score(doc=5175,freq=2.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.2591991 = fieldWeight in 5175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5175)
      0.5 = coord(2/4)
    
    Abstract
    Greisdorf and O'Connor attempt to determine the aspects of a retrieved item that provide a questioner with evidence that the item is in fact on the topic searched, independent of its relevance. To this end they collect data from 32 participants, 11 from the business community and 21 doctoral students at the University of North Texas, each of whom was asked to state whether they considered material that approaches a topic in each of 14 specific manners as "on topic" or "off topic." Chi-square indicates that the observed values are significantly different from the expected values, and the chi-square residuals for on-topic judgements exceed plus or minus two in eight cases and plus two in five cases. The positive values, which indicate a percentage of response greater than that expected by chance, suggest that documents considered topical are only related to the problem at hand, contain terms that were in the query, and describe, explain, or expand the topic of the query. The chi-square residuals for off-topic judgements exceed plus or minus two in ten cases and plus two in four cases. The positive values suggest that documents considered not topical exhibit a contrasting, contrary, or confounding point of view, or merely spark curiosity. Such material might well be relevant, but it is not judged topical. This suggests that topical appropriateness may best be achieved using the Bruza et al. left compositional monotonicity approach.
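
The residual check referred to in this abstract can be made concrete with a small sketch; the standardized residual (observed - expected) / sqrt(expected) is a standard quantity, but the counts below are invented for illustration.

```python
# Standardized chi-square residuals; values outside roughly +/-2 depart from chance.
import math

def chi_square_residuals(observed, expected):
    return [(o - e) / math.sqrt(e) for o, e in zip(observed, expected)]

observed = [26, 6]        # hypothetical "on topic" vs "off topic" counts
expected = [16, 16]       # counts expected under an even split
print([round(r, 2) for r in chi_square_residuals(observed, expected)])   # [2.5, -2.5]
```
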
    Language
    e
  11. Radev, D.R.; Libner, K.; Fan, W.: Getting answers to natural language questions on the Web (2002) 0.03
    0.029304776 = product of:
      0.058609553 = sum of:
        0.0050284253 = weight(_text_:e in 5204) [ClassicSimilarity], result of:
          0.0050284253 = score(doc=5204,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.07940422 = fieldWeight in 5204, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
        0.053581126 = weight(_text_:et in 5204) [ClassicSimilarity], result of:
          0.053581126 = score(doc=5204,freq=2.0), product of:
            0.20671801 = queryWeight, product of:
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0440575 = queryNorm
            0.2591991 = fieldWeight in 5204, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.692005 = idf(docFreq=1101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
      0.5 = coord(2/4)
    
    Abstract
    Seven hundred natural language questions from TREC-8 and TREC-9 were sent by Radev, Libner, and Fan to each of nine web search engines. The top 40 sites returned by each system were stored for evaluation of their productivity of correct answers. Each question per engine was scored as the sum of the reciprocal ranks of identified correct answers. The large number of zero scores gave a positive skew, violating the normality assumption for ANOVA, so values were transformed to zero for no hit and one for one or more hits. The non-zero values were then square-root transformed to remove the remaining positive skew. Interactions were observed between search engine and answer type (name, place, date, et cetera), search engine and number of proper nouns in the query, search engine and the need for time limitation, and search engine and total query words. All effects were significant. The shortest queries had the highest mean scores. The presence of one or more proper nouns provides a significant advantage. Non-time-dependent queries have an advantage. Place, name, person, and text description had mean scores between .85 and .9, with date at .81 and number at .59. There were significant differences in score by search engine. Search engines found at least one correct answer in between 87.7% and 75.45% of the cases. Google and Northern Light were just short of a 90% hit rate. No evidence indicated that a particular engine was better at answering any particular sort of question.
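
The per-question score described here is easy to state in code; the sketch below (function and variable names are illustrative, and the later zero/one and square-root transformations are omitted) sums the reciprocal ranks of correct answers found in an engine's top 40 results.

```python
# Sum of reciprocal ranks of correct answers within the top `cutoff` results.
def question_score(ranked_urls, correct_urls, cutoff=40):
    return sum(1.0 / rank
               for rank, url in enumerate(ranked_urls[:cutoff], start=1)
               if url in correct_urls)

# Correct answers at ranks 2 and 4 give 1/2 + 1/4 = 0.75.
print(question_score(["u1", "u2", "u3", "u4"], correct_urls={"u2", "u4"}))
```
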
    Language
    e
  12. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.03
    0.027931934 = product of:
      0.05586387 = sum of:
        0.014079589 = weight(_text_:e in 6418) [ClassicSimilarity], result of:
          0.014079589 = score(doc=6418,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.2223318 = fieldWeight in 6418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.109375 = fieldNorm(doc=6418)
        0.04178428 = product of:
          0.08356856 = sum of:
            0.08356856 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.08356856 = score(doc=6418,freq=2.0), product of:
                0.15428185 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0440575 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Language
    e
    Source
    Online. 22(1998) no.6, S.57-58
  13. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.03
    0.027931934 = product of:
      0.05586387 = sum of:
        0.014079589 = weight(_text_:e in 6438) [ClassicSimilarity], result of:
          0.014079589 = score(doc=6438,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.2223318 = fieldWeight in 6438, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.109375 = fieldNorm(doc=6438)
        0.04178428 = product of:
          0.08356856 = sum of:
            0.08356856 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.08356856 = score(doc=6438,freq=2.0), product of:
                0.15428185 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0440575 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    11. 8.2001 16:22:19
    Language
    e
  14. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.03
    0.027931934 = product of:
      0.05586387 = sum of:
        0.014079589 = weight(_text_:e in 5089) [ClassicSimilarity], result of:
          0.014079589 = score(doc=5089,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.2223318 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.109375 = fieldNorm(doc=5089)
        0.04178428 = product of:
          0.08356856 = sum of:
            0.08356856 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.08356856 = score(doc=5089,freq=2.0), product of:
                0.15428185 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0440575 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 7.2006 18:43:54
    Language
    e
  15. Information retrieval experiment (1981) 0.02
    0.021411512 = product of:
      0.042823024 = sum of:
        0.0070397947 = weight(_text_:e in 2653) [ClassicSimilarity], result of:
          0.0070397947 = score(doc=2653,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.1111659 = fieldWeight in 2653, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2653)
        0.03578323 = product of:
          0.07156646 = sum of:
            0.07156646 = weight(_text_:al in 2653) [ClassicSimilarity], result of:
              0.07156646 = score(doc=2653,freq=2.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.35444298 = fieldWeight in 2653, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2653)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Language
    e
    Signature
    Al 11 Inf
  16. Al-Maskari, A.; Sanderson, M.: A review of factors influencing user satisfaction in information retrieval (2010) 0.02
    0.021411512 = product of:
      0.042823024 = sum of:
        0.0070397947 = weight(_text_:e in 3447) [ClassicSimilarity], result of:
          0.0070397947 = score(doc=3447,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.1111659 = fieldWeight in 3447, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3447)
        0.03578323 = product of:
          0.07156646 = sum of:
            0.07156646 = weight(_text_:al in 3447) [ClassicSimilarity], result of:
              0.07156646 = score(doc=3447,freq=2.0), product of:
                0.20191248 = queryWeight, product of:
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0440575 = queryNorm
                0.35444298 = fieldWeight in 3447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.582931 = idf(docFreq=1228, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3447)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Language
    e
  17. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.02
    0.019951383 = product of:
      0.039902765 = sum of:
        0.0100568505 = weight(_text_:e in 3103) [ClassicSimilarity], result of:
          0.0100568505 = score(doc=3103,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.15880844 = fieldWeight in 3103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.078125 = fieldNorm(doc=3103)
        0.029845916 = product of:
          0.05969183 = sum of:
            0.05969183 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.05969183 = score(doc=3103,freq=2.0), product of:
                0.15428185 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0440575 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    27. 2.1999 20:55:22
    Language
    e
  18. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.02
    0.019951383 = product of:
      0.039902765 = sum of:
        0.0100568505 = weight(_text_:e in 3107) [ClassicSimilarity], result of:
          0.0100568505 = score(doc=3107,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.15880844 = fieldWeight in 3107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.078125 = fieldNorm(doc=3107)
        0.029845916 = product of:
          0.05969183 = sum of:
            0.05969183 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.05969183 = score(doc=3107,freq=2.0), product of:
                0.15428185 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0440575 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    27. 2.1999 20:59:22
    Language
    e
  19. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.02
    0.019951383 = product of:
      0.039902765 = sum of:
        0.0100568505 = weight(_text_:e in 2417) [ClassicSimilarity], result of:
          0.0100568505 = score(doc=2417,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.15880844 = fieldWeight in 2417, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.078125 = fieldNorm(doc=2417)
        0.029845916 = product of:
          0.05969183 = sum of:
            0.05969183 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.05969183 = score(doc=2417,freq=2.0), product of:
                0.15428185 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0440575 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Language
    e
    Pages
    S.22-25
  20. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.02
    0.015961107 = product of:
      0.031922214 = sum of:
        0.00804548 = weight(_text_:e in 5002) [ClassicSimilarity], result of:
          0.00804548 = score(doc=5002,freq=2.0), product of:
            0.063326925 = queryWeight, product of:
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0440575 = queryNorm
            0.12704675 = fieldWeight in 5002, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.43737 = idf(docFreq=28552, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
        0.023876732 = product of:
          0.047753464 = sum of:
            0.047753464 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.047753464 = score(doc=5002,freq=2.0), product of:
                0.15428185 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0440575 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    19. 3.1996 11:22:12
    Language
    e

Languages

  • e 418
  • d 5
  • f 2
  • fi 1
  • m 1

Types

  • a 397
  • s 14
  • m 9
  • el 7
  • r 7
  • d 1
  • p 1