Search (11 results, page 1 of 1)

  • × language_ss:"e"
  • × theme_ss:"Retrievalstudien"
  • × year_i:[1980 TO 1990}
  1. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.03
    0.03311137 = product of:
      0.06622274 = sum of:
        0.0506827 = weight(_text_:data in 3564) [ClassicSimilarity], result of:
          0.0506827 = score(doc=3564,freq=8.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.4192326 = fieldWeight in 3564, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3564)
        0.015540041 = product of:
          0.031080082 = sum of:
            0.031080082 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
              0.031080082 = score(doc=3564,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.23214069 = fieldWeight in 3564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3564)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
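    The breakdown above is Lucene "explain" output for the ClassicSimilarity (TF-IDF) scoring model. As a minimal sketch, the figures can be reproduced in Python; the function names are illustrative, and the queryNorm and fieldNorm values are copied from the explanation rather than derived:

    ```python
    import math

    def classic_idf(doc_freq, max_docs):
        # Lucene ClassicSimilarity IDF: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        # weight = queryWeight * fieldWeight
        #        = (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
        idf = classic_idf(doc_freq, max_docs)
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    QUERY_NORM = 0.03823278  # queryNorm from the explanation above
    FIELD_NORM = 0.046875    # fieldNorm(doc=3564)

    w_data = term_weight(8.0, 5088, 44218, QUERY_NORM, FIELD_NORM)  # _text_:data
    w_22 = term_weight(2.0, 3622, 44218, QUERY_NORM, FIELD_NORM)    # _text_:22
    # inner coord(1/2) on the second clause, outer coord(2/4) on the sum
    score = (w_data + 0.5 * w_22) * 0.5
    ```

    Running this reproduces the values shown: w_data ≈ 0.0506827, w_22 ≈ 0.0310801, and score ≈ 0.0331114, matching the 0.03 displayed next to the title.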
    
    Abstract
    Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text data bases of clinical medical journal articles (CCML (Comprehensive Core Medical Library) from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text data bases with the text of the articles of these 204 citations revealed that 2 reasons contributed to these failures: the searcher often constructed a restrictive strategy which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
  2. Feng, S.: ¬A comparative study of indexing languages in single and multidatabase searching (1989) 0.01
    
    Abstract
    An experiment was conducted using 3 data bases in library and information science - Library and Information Science Abstracts (LISA), Information Science Abstracts and ERIC - to investigate some of the main factors affecting on-line searching: effectiveness of search vocabularies, combinations of fields searched, and overlaps among databases. Natural language, controlled vocabulary and a mixture of natural language and controlled terms were tested using different fields of bibliographic records. Also discusses a comparative evaluation of single and multi-data base searching, measuring the overlap among data bases and their influence upon on-line searching.
  3. MacCain, K.W.; White, H.D.; Griffith, B.C.: Comparing retrieval performance in online data bases (1987) 0.01
    
    Abstract
    This study systematically compares retrievals on 11 topics across five well-known data bases, with MEDLINE's subject indexing as a focus. Each topic was posed by a researcher in the medical behavioral sciences. Each was searched in MEDLINE, EXCERPTA MEDICA, and PSYCHINFO, which permit descriptor searches, and in SCISEARCH and SOCIAL SCISEARCH, which express topics through cited references. Searches on each topic were made with (1) descriptors, (2) cited references, and (3) natural language (a capability common to all five data bases). The researchers who posed the topics judged the results. In every case, the set of records judged relevant was used to calculate recall, precision, and novelty ratios. Overall, MEDLINE had the highest recall percentage (37%), followed by SSCI (31%). All searches resulted in high precision ratios; novelty ratios of data bases and searches varied widely. Differences in record format among data bases affected the success of the natural language retrievals. Some 445 documents judged relevant were not retrieved from MEDLINE using its descriptors; they were found in MEDLINE through natural language or in an alternative data base. An analysis was performed to examine possible faults in MEDLINE subject indexing as the reason for their nonretrieval. However, no patterns of indexing failure could be seen in those documents subsequently found in MEDLINE through known-item searches. Documents not found in MEDLINE primarily represent failures of coverage: articles were from nonindexed or selectively indexed journals.
  4. Sievert, M.E.; McKinin, E.J.; Slough, M.: ¬A comparison of indexing and full-text for the retrieval of clinical medical literature (1988) 0.01
    
    Abstract
    The availability of two full-text data bases of the clinical medical journal literature, MEDIS from Mead Data Central and CCML from BRS Information Technologies, provided an opportunity to compare the retrieval effectiveness of full text with that of the traditional indexed system, MEDLINE. 100 searches were solicited from an academic health sciences library, and the requests were searched on all 3 data bases. The results were compared, and preliminary analysis suggests that the full-text data bases retrieve a greater number of relevant citations while MEDLINE achieves higher precision.
  5. Salton, G.: Thoughts about modern retrieval technologies (1988) 0.01
    
    Abstract
    Paper presented at the 30th Annual Conference of the National Federation of Abstracting and Information Services, Philadelphia, 28 Feb-2 Mar 88. In recent years, the amount and variety of available machine-readable data have grown, and new technologies have been introduced, such as high-density storage devices and fancy graphic displays useful for information transformation and access. New approaches have also been considered for processing the stored data, based on the construction of knowledge bases representing the contents and structure of the information, and the use of expert system techniques to control the user-system interactions. Provides a brief evaluation of the new information processing technologies and of the software methods proposed for information manipulation.
  6. Schabas, A.H.: Postcoordinate retrieval : a comparison of two retrieval languages (1982) 0.01
    
    Abstract
    This article reports on a comparison of the postcoordinate retrieval effectiveness of two indexing languages: LCSH and PRECIS. The effect of augmenting each with title words was also studied. The database for the study was over 15,000 UK MARC records. Users returned 5,326 relevance judgements for citations retrieved for 61 SDI profiles, representing a wide variety of subjects. Results are reported in terms of precision and relative recall. Pure/applied sciences data and social science data were analyzed separately. Cochran's significance tests for ratios were used to interpret the findings. Recall emerged as the more important measure discriminating the behavior of the two languages. Addition of title words was found to improve recall of both indexing languages significantly. A direct relationship was observed between recall and exhaustivity. For the social sciences searches, recalls from PRECIS alone and from PRECIS with title words were significantly higher than those from LCSH alone and from LCSH with title words, respectively. Corresponding comparisons for the pure/applied sciences searches revealed no significant differences.
  7. Sullivan, M.V.; Borgman, C.L.: Bibliographic searching by end-users and intermediaries : front-end software vs native DIALOG commands (1988) 0.01
    
    Abstract
    40 doctoral students were trained to search INSPEC or ERIC on DIALOG using either the Sci-Mate Menu or native commands. In comparison with 20 control subjects for whom a free search was performed by an intermediary, the experimental subjects were no less satisfied with their retrievals, which were fewer in number but higher in precision than the retrievals produced by the intermediaries. Use of the menu interface did not affect quality of retrieval or user satisfaction, although subjects instructed to use native commands required less training time and interacted more with the data bases than did subjects trained on the Sci-Mate Menu. INSPEC subjects placed a higher monetary value on their searches than did ERIC subjects, indicated that they would make more frequent use of data bases in the future, and interacted more with the data base.
  8. Pao, M.L.: Retrieval differences between term and citation indexing (1989) 0.01
    
    Abstract
    A retrieval experiment was conducted to compare on-line searching using terms as opposed to citations. This is the first study in which a single data base was used to retrieve two equivalent sets for each query, one using terms found in the bibliographic record and the other using citations, in order to achieve higher recall. Reports on the use of a second citation searching strategy. Overall, by using both types of search keys, total recall is increased.
  9. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.01
    
    Pages
    S.22-25
  10. Blair, D.C.: Full text retrieval : Evaluation and implications (1986) 0.01
    
    Abstract
    Recently, a detailed evaluation of a large, operational full-text document retrieval system was reported in the literature. Values of precision and recall were estimated using traditional statistical sampling methods and blind evaluation procedures. The results of this evaluation demonstrated that the system tested was retrieving less than 20% of the relevant documents when the searchers believed it was retrieving over 75% of the relevant documents. This evaluation is described, including some data not reported in the original article. Also discussed are the implications which this study has for how the subjects of documents should be represented, as well as the importance of rigorous retrieval evaluations for the furtherance of information retrieval research.
  11. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.00
    
    Date
    14. 3.1996 13:22:21