Search (5 results, page 1 of 1)

  • theme_ss:"Retrievalstudien"
  • theme_ss:"Volltextretrieval"
  • year_i:[1990 TO 2000}
  1. Blair, D.C.; Maron, M.E.: Full-text information retrieval : further analysis and clarification (1990) 0.01
    0.0064842156 = product of:
      0.019452646 = sum of:
        0.019452646 = product of:
          0.058357935 = sum of:
            0.058357935 = weight(_text_:retrieval in 2046) [ClassicSimilarity], result of:
              0.058357935 = score(doc=2046,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.37811437 = fieldWeight in 2046, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2046)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    In 1985, an article by Blair and Maron described a detailed evaluation of the effectiveness of an operational full text retrieval system used to support the defense of a large corporate lawsuit. The following year Salton published an article which called into question the conclusions of the 1985 study. The following article briefly reviews the initial study, replies to the objections raised by the second article, and clarifies several confusions and misunderstandings of the 1985 study.
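    The score breakdown shown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, the Python below reproduces the arithmetic for result 1 from the values printed in the explain tree; the variable names are mine, not part of the page, and the two coord(1/3) factors are applied exactly as they appear in the nested products.

```python
# Sketch: reproducing the ClassicSimilarity (TF-IDF) arithmetic from the
# explain tree of result 1. All constants are copied from the output above.
from math import sqrt, isclose

freq       = 4.0          # termFreq of _text_:retrieval in doc 2046
idf        = 3.024915     # idf(docFreq=5836, maxDocs=44218)
query_norm = 0.051022716  # queryNorm
field_norm = 0.0625       # fieldNorm(doc=2046)

tf           = sqrt(freq)                   # 2.0 = tf(freq=4.0)
query_weight = idf * query_norm             # 0.15433937 = queryWeight
field_weight = tf * idf * field_norm        # 0.37811437 = fieldWeight
term_score   = query_weight * field_weight  # 0.058357935 = weight(_text_:retrieval in 2046)
coord        = 1.0 / 3.0                    # coord(1/3), applied at both boolean levels
final_score  = term_score * coord * coord   # 0.0064842156 = document score

assert isclose(final_score, 0.0064842156, rel_tol=1e-5)
```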
  2. Wildemuth, B.M.: Measures of success in searching a full-text fact base (1990) 0.01
    0.0057112123 = product of:
      0.017133636 = sum of:
        0.017133636 = product of:
          0.051400907 = sum of:
            0.051400907 = weight(_text_:online in 2050) [ClassicSimilarity], result of:
              0.051400907 = score(doc=2050,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.33194235 = fieldWeight in 2050, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2050)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The traditional measures of online searching proficiency (recall and precision) are less appropriate when applied to the searching of full text databases. The pilot study investigated and evaluated 5 measures of overall success in searching a full text data bank. Data were drawn from INQUIRER searches conducted by medical students at the University of North Carolina at Chapel Hill. INQUIRER is an online database of facts and concepts in microbiology. The 5 measures were: success/failure; precision; search term overlap; number of search cycles; and time per search. Concludes that the last 4 measures look promising for the evaluation of fact databases such as INQUIRER.
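    For reference, recall and precision (the "traditional measures" the abstract sets aside) are set-based ratios. The sketch below is a generic illustration, not code or data from the study; the sample sets are invented.

```python
# Illustrative only: standard set-based definitions of recall and precision.
def recall(retrieved: set, relevant: set) -> float:
    # share of all relevant items that were actually retrieved
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

def precision(retrieved: set, relevant: set) -> float:
    # share of retrieved items that are actually relevant
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

# Hypothetical example: 3 of 4 relevant facts retrieved, plus 2 false hits.
retrieved = {"f1", "f2", "f3", "x1", "x2"}
relevant  = {"f1", "f2", "f3", "f4"}
print(recall(retrieved, relevant), precision(retrieved, relevant))  # 0.75 0.6
```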
  3. Turtle, H.; Flood, J.: Query evaluation : strategies and optimizations (1995) 0.00
    0.004585033 = product of:
      0.013755098 = sum of:
        0.013755098 = product of:
          0.041265294 = sum of:
            0.041265294 = weight(_text_:retrieval in 4087) [ClassicSimilarity], result of:
              0.041265294 = score(doc=4087,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.26736724 = fieldWeight in 4087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4087)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Discusses the 2 major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. Discusses several optimization techniques that can be used to reduce evaluation costs and presents simulation results comparing the performance of these optimization techniques when evaluating natural language queries against a collection of full text legal materials.
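    The two strategies usually contrasted in this context are term-at-a-time and document-at-a-time evaluation; the abstract does not name them, so treat the following as a generic illustration rather than the authors' own formulation. The sketch shows term-at-a-time scoring over an in-memory inverted index with made-up postings.

```python
# Generic term-at-a-time evaluation over an inverted index (illustrative only).
from collections import defaultdict

# postings: term -> {doc_id: precomputed term weight for that document}
postings = {
    "query":      {1: 0.41, 3: 0.27},
    "evaluation": {1: 0.35, 2: 0.19, 3: 0.08},
}

def term_at_a_time(query_terms, postings):
    # Accumulate partial scores one postings list at a time;
    # document-at-a-time would instead advance all lists in parallel per document.
    accumulators = defaultdict(float)
    for term in query_terms:
        for doc_id, weight in postings.get(term, {}).items():
            accumulators[doc_id] += weight
    return sorted(accumulators.items(), key=lambda kv: kv[1], reverse=True)

print(term_at_a_time(["query", "evaluation"], postings))
# doc 1 ranked first (0.41 + 0.35), then doc 3, then doc 2
```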
  4. Pirkola, A.; Jarvelin, K.: The effect of anaphor and ellipsis resolution on proximity searching in a text database (1995) 0.00
    0.004052635 = product of:
      0.012157904 = sum of:
        0.012157904 = product of:
          0.03647371 = sum of:
            0.03647371 = weight(_text_:retrieval in 4088) [ClassicSimilarity], result of:
              0.03647371 = score(doc=4088,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23632148 = fieldWeight in 4088, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4088)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    So far, methods for ellipsis and anaphor resolution have been developed and the effects of anaphor resolution have been analyzed in the context of statistical information retrieval of scientific abstracts. No significant improvements have been observed. Analyzes the effects of ellipsis and anaphor resolution on proximity searching in a full text database. Anaphora and ellipsis are classified on the basis of the type of their correlates/antecedents rather than, as is traditional, on the basis of their own linguistic type. The classification differentiates proper names and common nouns as basic words, compound words, and phrases. The study was carried out in a newspaper article database containing 55,000 full text articles. A set of 154 keyword pairs in different categories was created. Human resolution of keyword ellipsis and anaphora was performed to identify sentences and paragraphs which would match proximity searches after resolution. Findings indicate that ellipsis and anaphor resolution is most relevant for proper name phrases and only marginal in the other keyword categories. Therefore the recall effect of restricted resolution of proper name phrases only was analyzed for keyword pairs containing at least 1 proper name phrase. Findings indicate a recall increase of 38.2% in sentence searches and 28.8% in paragraph searches when proper name ellipses were resolved. The recall increase was 17.6% in sentence searches and 19.8% in paragraph searches when proper name anaphora were resolved. A simple and computationally justifiable resolution method might be developed for proper name phrases only to support keyword based full text information retrieval. Discusses elements of such a method.
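    As a rough illustration of the kind of proximity searching evaluated here (not the authors' actual system), the sketch below checks whether both members of a keyword pair co-occur within the same sentence; paragraph-level matching works the same way with a coarser segmentation. The tokenization, example text, and resolved variant are invented.

```python
# Illustrative sentence-level proximity match for a keyword pair.
import re

def sentence_cooccurrence(text: str, kw1: str, kw2: str) -> bool:
    # Naive sentence split; a real system would use proper segmentation.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return any(kw1.lower() in s.lower() and kw2.lower() in s.lower()
               for s in sentences)

article = ("The prime minister met the delegation. "
           "She announced new funding for libraries.")
# Fails before anaphor resolution: "She" refers back to "prime minister".
print(sentence_cooccurrence(article, "prime minister", "funding"))   # False
# After resolving the anaphor, the pair co-occurs in one sentence.
resolved = article.replace("She announced", "The prime minister announced")
print(sentence_cooccurrence(resolved, "prime minister", "funding"))  # True
```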
  5. Voorbij, H.: Title keywords and subject descriptors : a comparison of subject search entries of books in the humanities and social sciences (1998) 0.00
    0.0028845975 = product of:
      0.008653793 = sum of:
        0.008653793 = product of:
          0.025961377 = sum of:
            0.025961377 = weight(_text_:online in 4721) [ClassicSimilarity], result of:
              0.025961377 = score(doc=4721,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 4721, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4721)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    In order to compare the value of subject descriptors and title keywords as entries to subject searches, two studies were carried out. Both studies concentrated on monographs in the humanities and social sciences, held by the online public access catalogue of the National Library of the Netherlands. In the first study, a comparison was made by subject librarians between the subject descriptors and the title keywords of 475 records. They could express their opinion on a scale from 1 (descriptor is exactly or almost the same as word in title) to 7 (descriptor does not appear in title at all). It was concluded that 37 per cent of the records are considerably enhanced by a subject descriptor, and 49 per cent slightly or considerably enhanced. In the second study, subject librarians performed subject searches using title keywords and subject descriptors on the same topic. The relative recall amounted to 48 per cent and 86 per cent respectively. Failure analysis revealed the reasons why so many records that were found by subject descriptors were not found by title keywords. First, although completely meaningless titles hardly ever appear, the title of a publication does not always offer sufficient clues for title keyword searching. In those cases, descriptors may enhance the record of a publication. A second and even more important task of subject descriptors is controlling the vocabulary. Many relevant titles cannot be retrieved by title keyword searching because of the wide diversity of ways of expressing a topic. Descriptors take away the burden of vocabulary control from the user.
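    Relative recall here is presumably the share of all relevant records found by either method that a given method retrieves. The sketch below illustrates that ratio with invented counts, chosen only so that the output matches the 48% and 86% figures reported in the abstract; it is not the study's data.

```python
# Illustrative relative recall: each method is measured against the pooled set
# of relevant records retrieved by either method.
def relative_recall(found_by_method: set, found_by_any: set) -> float:
    return len(found_by_method & found_by_any) / len(found_by_any)

# Hypothetical pool of 50 relevant records found by at least one method.
found_by_any         = set(range(50))
found_by_keywords    = set(range(24))      # 24 of 50 -> 0.48
found_by_descriptors = set(range(7, 50))   # 43 of 50 -> 0.86
print(relative_recall(found_by_keywords, found_by_any))     # 0.48
print(relative_recall(found_by_descriptors, found_by_any))  # 0.86
```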