Search (49 results, page 3 of 3)

  • × theme_ss:"Retrievalstudien"
  • × year_i:[1980 TO 1990}
  1. MacCain, K.W.: Descriptor and citation retrieval in the medical behavioral sciences literature : retrieval overlaps and novelty distribution (1989) 0.00
    0.001757696 = product of:
      0.003515392 = sum of:
        0.003515392 = product of:
          0.007030784 = sum of:
            0.007030784 = weight(_text_:a in 2290) [ClassicSimilarity], result of:
              0.007030784 = score(doc=2290,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13239266 = fieldWeight in 2290, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2290)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
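    The breakdown above is Lucene's ClassicSimilarity explanation for the single matching query term. As a minimal sketch (Python, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and queryWeight = idf * queryNorm, with boost = 1), the displayed value can be recomputed as follows:

        import math

        freq, idf = 6.0, 1.153047                    # termFreq and idf as reported above
        query_norm, field_norm = 0.046056706, 0.046875

        tf = math.sqrt(freq)                         # 2.4494898
        field_weight = tf * idf * field_norm         # 0.13239266  (fieldWeight in 2290)
        query_weight = idf * query_norm              # 0.053105544 (queryWeight)
        term_score = query_weight * field_weight     # 0.007030784 (weight(_text_:a in 2290))
        final_score = term_score * 0.5 * 0.5         # two coord(1/2) factors -> 0.001757696

    The same pattern applies to every explanation tree on this page; only freq, fieldNorm and the document number change from record to record.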
    
    Abstract
Search results for nine topics in the medical behavioral sciences are reanalyzed to compare the overall performance of descriptor and citation search strategies in identifying relevant and novel documents. Overlap percentages between an aggregate "descriptor-based" database (MEDLINE, EXCERPTA MEDICA, PSYCINFO) and an aggregate "citation-based" database (SCISEARCH, SOCIAL SCISEARCH) ranged from 1% to 26%, with a median overlap of 8% relevant retrievals found using both search strategies. For seven topics in which both descriptor and citation strategies produced reasonably substantial retrievals, two patterns of search performance and novelty distribution were observed: (1) where descriptor and citation retrieval showed little overlap, novelty retrieval percentages differed by 17-23% between the two strategies; (2) topics with a relatively high percentage retrieval overlap showed little difference (1-4%) in descriptor and citation novelty retrieval percentages. These results reflect the varying partial congruence of two literature networks and represent two different types of subject relevance.
    Type
    a
  2. Blair, D.C.: Full text retrieval : Evaluation and implications (1986) 0.00
    0.001757696 = product of:
      0.003515392 = sum of:
        0.003515392 = product of:
          0.007030784 = sum of:
            0.007030784 = weight(_text_:a in 2047) [ClassicSimilarity], result of:
              0.007030784 = score(doc=2047,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.13239266 = fieldWeight in 2047, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2047)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Recently, a detailed evaluation of a large, operational full-text document retrieval system was reported in the literature. Values of precision and recall were estimated using traditional statistical sampling methods and blind evaluation procedures. The results of this evaluation demonstrated that the system tested was retrieving less than 20% of the relevant documents when the searchers believed it was retrieving over 75% of the relevant documents. This evaluation is described, including some data not reported in the original article. Also discussed are the implications which this study has for how the subjects of documents should be represented, as well as the importance of rigorous retrieval evaluations for the furtherance of information retrieval research.
    Type
    a
  3. Biebricher, P.; Fuhr, N.; Niewelt, B.: ¬Der AIR-Retrievaltest (1986) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 4040) [ClassicSimilarity], result of:
              0.006765375 = score(doc=4040,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 4040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4040)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  4. MacCain, K.W.; White, H.D.; Griffith, B.C.: Comparing retrieval performance in online data bases (1987) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 1167) [ClassicSimilarity], result of:
              0.006765375 = score(doc=1167,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 1167, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1167)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
This study systematically compares retrievals on 11 topics across five well-known data bases, with MEDLINE's subject indexing as a focus. Each topic was posed by a researcher in the medical behavioral sciences. Each was searched in MEDLINE, EXCERPTA MEDICA, and PSYCINFO, which permit descriptor searches, and in SCISEARCH and SOCIAL SCISEARCH, which express topics through cited references. Searches on each topic were made with (1) descriptors, (2) cited references, and (3) natural language (a capability common to all five data bases). The researchers who posed the topics judged the results. In every case, the set of records judged relevant was used to calculate recall, precision, and novelty ratios (see the sketch after this entry). Overall, MEDLINE had the highest recall percentage (37%), followed by SSCI (31%). All searches resulted in high precision ratios; novelty ratios of data bases and searches varied widely. Differences in record format among data bases affected the success of the natural language retrievals. Some 445 documents judged relevant were not retrieved from MEDLINE using its descriptors; they were found in MEDLINE through natural language or in an alternative data base. An analysis was performed to examine possible faults in MEDLINE subject indexing as the reason for their nonretrieval. However, no patterns of indexing failure could be seen in those documents subsequently found in MEDLINE through known-item searches. Documents not found in MEDLINE primarily represent failures of coverage - articles were from nonindexed or selectively indexed journals.
    Type
    a
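    The recall, precision, and novelty ratios referred to in the abstract of the preceding record are the standard retrieval-evaluation measures; a minimal sketch (Python, with purely hypothetical counts chosen for illustration, not data from the study):

        def recall(relevant_retrieved: int, relevant_total: int) -> float:
            # share of all relevant documents that the search retrieved
            return relevant_retrieved / relevant_total

        def precision(relevant_retrieved: int, retrieved_total: int) -> float:
            # share of retrieved documents that were judged relevant
            return relevant_retrieved / retrieved_total

        def novelty(new_relevant: int, relevant_retrieved: int) -> float:
            # share of the relevant retrievals previously unknown to the requester
            return new_relevant / relevant_retrieved

        # Hypothetical example: 37 of 100 relevant records retrieved gives recall = 0.37,
        # i.e. a 37% recall percentage of the kind quoted for MEDLINE above.
        print(recall(37, 100), precision(37, 60), novelty(20, 37))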
  5. Salton, G.: Thoughts about modern retrieval technologies (1988) 0.00
    0.001674345 = product of:
      0.00334869 = sum of:
        0.00334869 = product of:
          0.00669738 = sum of:
            0.00669738 = weight(_text_:a in 1522) [ClassicSimilarity], result of:
              0.00669738 = score(doc=1522,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12611452 = fieldWeight in 1522, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1522)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Paper presented at the 30th Annual Conference of the National Federation of Abstracting and Information Services, Philadelphia, 28 Feb-2 Mar 88. In recent years, the amount and variety of available machine-readable data have grown, and new technologies have been introduced, such as high-density storage devices and fancy graphic displays useful for information transformation and access. New approaches have also been considered for processing the stored data, based on the construction of knowledge bases representing the contents and structure of the information, and the use of expert system techniques to control the user-system interactions. Provides a brief evaluation of the new information processing technologies, and of the software methods proposed for information manipulation.
    Type
    a
  6. Sullivan, M.V.; Borgman, C.L.: Bibliographic searching by end-users and intermediaries : front-end software vs native DIALOG commands (1988) 0.00
    0.0014351527 = product of:
      0.0028703054 = sum of:
        0.0028703054 = product of:
          0.005740611 = sum of:
            0.005740611 = weight(_text_:a in 3560) [ClassicSimilarity], result of:
              0.005740611 = score(doc=3560,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10809815 = fieldWeight in 3560, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
40 doctoral students were trained to search INSPEC or ERIC on DIALOG using either the Sci-Mate Menu or native commands. In comparison with 20 control subjects for whom a free search was performed by an intermediary, the experimental subjects were no less satisfied with their retrievals, which were fewer in number but higher in precision than the retrievals produced by the intermediaries. Use of the menu interface did not affect quality of retrieval or user satisfaction, although subjects instructed to use native commands required less training time and interacted more with the data bases than did subjects trained on the Sci-Mate Menu. INSPEC subjects placed a higher monetary value on their searches than did ERIC subjects, indicated that they would make more frequent use of data bases in the future, and interacted more with the data base.
  7. Sievert, M.E.; McKinin, E.J.; Slough, M.: ¬A comparison of indexing and full-text for the retrieval of clinical medical literature (1988) 0.00
    0.0014351527 = product of:
      0.0028703054 = sum of:
        0.0028703054 = product of:
          0.005740611 = sum of:
            0.005740611 = weight(_text_:a in 3563) [ClassicSimilarity], result of:
              0.005740611 = score(doc=3563,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10809815 = fieldWeight in 3563, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The availability of two full-text data bases in the clinical medical journal literature, MEDIS from Mead Data Central and CCML from BRS Information Technologies, provided an opportunity to compare the efficacy of full text with that of the traditional indexed system, MEDLINE, for retrieval effectiveness. 100 searches were solicited from an academic health sciences library and the requests were searched on all 3 data bases. The results were compared, and preliminary analysis suggests that the full-text data bases retrieve a greater number of relevant citations while MEDLINE achieves higher precision.
  8. Prasher, R.G.: Evaluation of indexing system (1989) 0.00
    0.001353075 = product of:
      0.00270615 = sum of:
        0.00270615 = product of:
          0.0054123 = sum of:
            0.0054123 = weight(_text_:a in 4998) [ClassicSimilarity], result of:
              0.0054123 = score(doc=4998,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10191591 = fieldWeight in 4998, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4998)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  9. Jochum, F.; Weissmann, V.: Struktur und Elemente des Information Retrieval Experiments (1985) 0.00
    0.001353075 = product of:
      0.00270615 = sum of:
        0.00270615 = product of:
          0.0054123 = sum of:
            0.0054123 = weight(_text_:a in 114) [ClassicSimilarity], result of:
              0.0054123 = score(doc=114,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.10191591 = fieldWeight in 114, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=114)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a