Search (17 results, page 1 of 1)

  • × year_i:[1980 TO 1990}
  • × theme_ss:"Retrievalstudien"
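The two filters above use Solr-style syntax: year_i:[1980 TO 1990} is a range query that includes 1980 and excludes 1990, and theme_ss:"Retrievalstudien" restricts the hits to records tagged with that theme. The score breakdowns shown under each hit are the kind of explanation a Solr/Lucene index emits when debug output is requested. The sketch below shows how such a request might be issued; it assumes a Solr installation, and the host, core name, and query terms are placeholders rather than values taken from this page.

    import requests  # third-party HTTP client

    # Hypothetical endpoint; host and core name are assumptions.
    SOLR_URL = "http://localhost:8983/solr/literature/select"

    params = {
        "q": "retrieval evaluation",              # placeholder query terms
        "fq": [
            "year_i:[1980 TO 1990}",              # includes 1980, excludes 1990
            'theme_ss:"Retrievalstudien"',        # theme filter as shown above
        ],
        "fl": "*,score",                          # return stored fields plus the score
        "debugQuery": "true",                     # request per-document score explanations
        "rows": 20,
        "wt": "json",
    }

    response = requests.get(SOLR_URL, params=params, timeout=30)
    data = response.json()

    for doc in data["response"]["docs"]:
        print(doc.get("id"), doc.get("score"))

    # With debugQuery=true, per-document score explanations (like the ones shown
    # under each hit on this page) are returned under data["debug"]["explain"];
    # the exact JSON shape depends on the response-writer settings.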
  1. Peritz, B.C.: On the informativeness of titles (1984) 0.01
    0.0067291465 = product of:
      0.047104023 = sum of:
        0.023552012 = weight(_text_:classification in 2636) [ClassicSimilarity], result of:
          0.023552012 = score(doc=2636,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24630459 = fieldWeight in 2636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2636)
        0.023552012 = weight(_text_:classification in 2636) [ClassicSimilarity], result of:
          0.023552012 = score(doc=2636,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.24630459 = fieldWeight in 2636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2636)
      0.14285715 = coord(2/14)
    
    Source
    International classification. 11(1984) no.2, S.87-89
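The breakdown above (and each of the breakdowns that follow) is Lucene's ClassicSimilarity explanation, i.e. a TF-IDF weighting. The following is a worked reconstruction from the numbers already displayed for this first hit, using the standard ClassicSimilarity formulas; no new data, just the arithmetic made explicit:

    \mathrm{tf}(t,d) = \sqrt{\mathrm{freq}}, \qquad
    \mathrm{idf}(t) = 1 + \ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}+1}

    \mathrm{score}(q,d) = \mathrm{coord}(q,d) \cdot \sum_{t \in q}
      \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\mathrm{queryWeight}}
      \cdot
      \underbrace{\mathrm{tf}(t,d)\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}}

Plugging in the values shown for the term "classification": idf = 1 + ln(44218/4975) ≈ 3.1847; queryWeight = 3.1847 × 0.0300252 ≈ 0.0956215; fieldWeight = √2 × 3.1847 × 0.0546875 ≈ 0.2463046; each matching clause therefore contributes 0.0956215 × 0.2463046 ≈ 0.0235520, and two matching clauses out of fourteen give (0.0235520 + 0.0235520) × 2/14 ≈ 0.0067291, the document score reported above.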
  2. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.01
    0.0062775663 = product of:
      0.04394296 = sum of:
        0.029704956 = weight(_text_:subject in 5001) [ClassicSimilarity], result of:
          0.029704956 = score(doc=5001,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.27661324 = fieldWeight in 5001, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5001)
        0.014238005 = product of:
          0.02847601 = sum of:
            0.02847601 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.02847601 = score(doc=5001,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Abstract
A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate as closely as possible actual searching conditions. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword in title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  3. Blair, D.C.: Full text retrieval : Evaluation and implications (1986) 0.01
    0.00576784 = product of:
      0.04037488 = sum of:
        0.02018744 = weight(_text_:classification in 2047) [ClassicSimilarity], result of:
          0.02018744 = score(doc=2047,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=2047)
        0.02018744 = weight(_text_:classification in 2047) [ClassicSimilarity], result of:
          0.02018744 = score(doc=2047,freq=2.0), product of:
            0.09562149 = queryWeight, product of:
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.03002521 = queryNorm
            0.21111822 = fieldWeight in 2047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1847067 = idf(docFreq=4974, maxDocs=44218)
              0.046875 = fieldNorm(doc=2047)
      0.14285715 = coord(2/14)
    
    Source
    International classification. 13(1986), S.18-23
  4. Dillon, M.: Enhanced bibliographic record retrieval experiments (1989) 0.01
    0.005745948 = product of:
      0.08044326 = sum of:
        0.08044326 = weight(_text_:bibliographic in 1406) [ClassicSimilarity], result of:
          0.08044326 = score(doc=1406,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.68819946 = fieldWeight in 1406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.125 = fieldNorm(doc=1406)
      0.071428575 = coord(1/14)
    
  5. Kilgour, F.G.: Retrieval of information from computerized book texts (1989) 0.00
    0.004273037 = product of:
      0.059822515 = sum of:
        0.059822515 = product of:
          0.11964503 = sum of:
            0.11964503 = weight(_text_:texts in 2965) [ClassicSimilarity], result of:
              0.11964503 = score(doc=2965,freq=2.0), product of:
                0.16460659 = queryWeight, product of:
                  5.4822793 = idf(docFreq=499, maxDocs=44218)
                  0.03002521 = queryNorm
                0.72685444 = fieldWeight in 2965, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4822793 = idf(docFreq=499, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2965)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
  6. Feng, S.: ¬A comparative study of indexing languages in single and multidatabase searching (1989) 0.00
    0.002872974 = product of:
      0.04022163 = sum of:
        0.04022163 = weight(_text_:bibliographic in 2494) [ClassicSimilarity], result of:
          0.04022163 = score(doc=2494,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.34409973 = fieldWeight in 2494, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=2494)
      0.071428575 = coord(1/14)
    
    Abstract
    An experiment was conducted using 3 data bases in library and information science - Library and Information Science Abstracts (LISA), Information Science Abstracts and ERIC - to investigate some of the main factors affecting on-line searching: effectiveness of search vocabularies, combinations of fields searched, and overlaps among databases. Natural language, controlled vocabulary and a mixture of natural language and controlled terms were tested using different fields of bibliographic records. Also discusses a comparative evaluation of single and multi-data base searching, measuring the overlap among data bases and their influence upon on-line searching.
  7. Pao, M.L.: Retrieval differences between term and citation indexing (1989) 0.00
    0.002872974 = product of:
      0.04022163 = sum of:
        0.04022163 = weight(_text_:bibliographic in 3566) [ClassicSimilarity], result of:
          0.04022163 = score(doc=3566,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.34409973 = fieldWeight in 3566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=3566)
      0.071428575 = coord(1/14)
    
    Abstract
A retrieval experiment was conducted to compare on-line searching using terms as opposed to citations. This is the first study in which a single data base was used to retrieve two equivalent sets for each query, one using terms found in the bibliographic record to achieve higher recall, and the other using documents. Reports on the use of a second citation searching strategy. Overall, by using both types of search keys, the total recall is increased.
  8. Sullivan, M.V.; Borgman, C.L.: Bibliographic searching by end-users and intermediaries : front-end software vs native DIALOG commands (1988) 0.00
    0.0021547303 = product of:
      0.030166224 = sum of:
        0.030166224 = weight(_text_:bibliographic in 3560) [ClassicSimilarity], result of:
          0.030166224 = score(doc=3560,freq=2.0), product of:
            0.11688946 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03002521 = queryNorm
            0.2580748 = fieldWeight in 3560, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3560)
      0.071428575 = coord(1/14)
    
  9. MacCain, K.W.; White, H.D.; Griffith, B.C.: Comparing retrieval performance in online data bases (1987) 0.00
    0.0021433241 = product of:
      0.030006537 = sum of:
        0.030006537 = weight(_text_:subject in 1167) [ClassicSimilarity], result of:
          0.030006537 = score(doc=1167,freq=4.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.27942157 = fieldWeight in 1167, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1167)
      0.071428575 = coord(1/14)
    
    Abstract
This study systematically compares retrievals on 11 topics across five well-known data bases, with MEDLINE's subject indexing as a focus. Each topic was posed by a researcher in the medical behavioral sciences. Each was searched in MEDLINE, EXCERPTA MEDICA, and PSYCHINFO, which permit descriptor searches, and in SCISEARCH and SOCIAL SCISEARCH, which express topics through cited references. Searches on each topic were made with (1) descriptors, (2) cited references, and (3) natural language (a capability common to all five data bases). The researchers who posed the topics judged the results. In every case, the set of records judged relevant was used to calculate recall, precision, and novelty ratios. Overall, MEDLINE had the highest recall percentage (37%), followed by SSCI (31%). All searches resulted in high precision ratios; novelty ratios of data bases and searches varied widely. Differences in record format among data bases affected the success of the natural language retrievals. Some 445 documents judged relevant were not retrieved from MEDLINE using its descriptors; they were found in MEDLINE through natural language or in an alternative data base. An analysis was performed to examine possible faults in MEDLINE subject indexing as the reason for their nonretrieval. However, no patterns of indexing failure could be seen in those documents subsequently found in MEDLINE through known-item searches. Documents not found in MEDLINE primarily represent failures of coverage - articles were from nonindexed or selectively indexed journals.
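The recall, precision, and novelty ratios reported here are not defined in this abstract (recall and precision are spelled out under the Cleverdon entry further down). As a hedged note, the novelty ratio in retrieval evaluation is conventionally

    \mathrm{novelty} = \frac{\text{relevant documents retrieved that were previously unknown to the requester}}{\text{all relevant documents retrieved}}

which is consistent with, though not stated in, the abstract above.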
  10. Madelung, H.-O.: Subject searching in the social sciences : a comparison of PRECIS and KWIC indexes to newspaper articles (1982) 0.00
    0.0021217826 = product of:
      0.029704956 = sum of:
        0.029704956 = weight(_text_:subject in 5517) [ClassicSimilarity], result of:
          0.029704956 = score(doc=5517,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.27661324 = fieldWeight in 5517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5517)
      0.071428575 = coord(1/14)
    
  11. Cleverdon, C.W.; Mills, J.: ¬The testing of index language devices (1985) 0.00
    0.0021000204 = product of:
      0.029400283 = sum of:
        0.029400283 = weight(_text_:subject in 3643) [ClassicSimilarity], result of:
          0.029400283 = score(doc=3643,freq=6.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.2737761 = fieldWeight in 3643, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=3643)
      0.071428575 = coord(1/14)
    
    Abstract
A landmark event in the twentieth-century development of subject analysis theory was a retrieval experiment, begun in 1957, by Cyril Cleverdon, Librarian of the Cranfield Institute of Technology. For this work he received the Professional Award of the Special Libraries Association in 1962 and the Award of Merit of the American Society for Information Science in 1970. The objective of the experiment, called Cranfield I, was to test the ability of four indexing systems-UDC, Facet, Uniterm, and Alphabetic-Subject Headings-to retrieve material responsive to questions addressed to a collection of documents. The experiment was ambitious in scale, consisting of eighteen thousand documents and twelve hundred questions. Prior to Cranfield I, the question of what constitutes good indexing was approached subjectively and reference was made to assumptions in the form of principles that should be observed or user needs that should be met. Cranfield I was the first large-scale effort to use objective criteria for determining the parameters of good indexing. Its creative impetus was the definition of user satisfaction in terms of precision and recall. Out of the experiment emerged the definition of recall as the percentage of relevant documents retrieved and precision as the percentage of retrieved documents that were relevant. Operationalizing the concept of user satisfaction, that is, making it measurable, meant that it could be studied empirically and manipulated as a variable in mathematical equations. Much has been made of the fact that the experimental methodology of Cranfield I was seriously flawed. This is unfortunate as it tends to diminish Cleverdon's contribution, which was not methodological-such contributions can be left to benchmark researchers-but rather creative: the introduction of a new paradigm, one that proved to be eminently productive. The criticism leveled at the methodological shortcomings of Cranfield I underscored the need for more precise definitions of the variables involved in information retrieval. Particularly important was the need for a definition of the dependent variable index language. Like the definitions of precision and recall, that of index language provided a new way of looking at the indexing process. It was a re-visioning that stimulated research activity and led not only to a better understanding of indexing but also the design of better retrieval systems. Cranfield I was followed by Cranfield II. While Cranfield I was a wholesale comparison of four indexing "systems," Cranfield II aimed to single out various individual factors in index languages, called "indexing devices," and to measure how variations in these affected retrieval performance. The following selection represents the thinking at Cranfield midway between these two notable retrieval experiments.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
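The verbal definitions attributed to Cranfield I in the abstract above translate directly into the usual formulas, with A the set of documents retrieved and R the set of relevant documents in the collection:

    \mathrm{recall} = \frac{|A \cap R|}{|R|}, \qquad
    \mathrm{precision} = \frac{|A \cap R|}{|A|}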
  12. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.00
    0.0020340008 = product of:
      0.02847601 = sum of:
        0.02847601 = product of:
          0.05695202 = sum of:
            0.05695202 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.05695202 = score(doc=262,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    20.10.2000 12:22:23
  13. Pao, M.L.; Worthen, D.B.: Retrieval effectiveness by semantic and citation searching (1989) 0.00
    0.0018186709 = product of:
      0.02546139 = sum of:
        0.02546139 = weight(_text_:subject in 2288) [ClassicSimilarity], result of:
          0.02546139 = score(doc=2288,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.23709705 = fieldWeight in 2288, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
      0.071428575 = coord(1/14)
    
    Abstract
A pilot study on the relative retrieval effectiveness of semantic relevance (by terms) and pragmatic relevance (by citations) is reported. A single database has been constructed to provide access by both descriptors and cited references. For each question from a set of queries, two equivalent sets were retrieved. All retrieved items were evaluated by subject experts for relevance to their originating queries. We conclude that there are essentially two types of relevance at work resulting in two different sets of documents. Using both search methods to create a union set is likely to increase recall. Those few retrieved by the intersection of the two methods tend to result in higher precision. Suggestions are made to develop a front-end system to display the overlapping items for higher precision and to manipulate and rank the union set retrieved by the two search modes for improved output.
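In set terms (an illustrative restatement, not notation from the paper), with S the set retrieved by term searching and C the set retrieved by citation searching: since S ∪ C contains everything either method finds, \mathrm{recall}(S \cup C) \ge \max(\mathrm{recall}(S), \mathrm{recall}(C)), whereas the much smaller intersection S ∩ C was observed to hold a higher proportion of relevant items, i.e. higher precision, an empirical finding rather than a mathematical necessity.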
  14. MacCain, K.W.: Descriptor and citation retrieval in the medical behavioral sciences literature : retrieval overlaps and novelty distribution (1989) 0.00
    0.0018186709 = product of:
      0.02546139 = sum of:
        0.02546139 = weight(_text_:subject in 2290) [ClassicSimilarity], result of:
          0.02546139 = score(doc=2290,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.23709705 = fieldWeight in 2290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
      0.071428575 = coord(1/14)
    
    Abstract
Search results for nine topics in the medical behavioral sciences are reanalyzed to compare the overall performance of descriptor and citation search strategies in identifying relevant and novel documents. Overlap percentages between an aggregate "descriptor-based" database (MEDLINE, EXCERPTA MEDICA, PSYCINFO) and an aggregate "citation-based" database (SCISEARCH, SOCIAL SCISEARCH) ranged from 1% to 26%, with a median overlap of 8% of relevant retrievals found using both search strategies. For seven topics in which both descriptor and citation strategies produced reasonably substantial retrievals, two patterns of search performance and novelty distribution were observed: (1) where descriptor and citation retrieval showed little overlap, novelty retrieval percentages differed by 17-23% between the two strategies; (2) topics with a relatively high percentage retrieval overlap showed little difference (1-4%) in descriptor and citation novelty retrieval percentages. These results reflect the varying partial congruence of two literature networks and represent two different types of subject relevance.
  15. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.00
    0.0014528577 = product of:
      0.020340007 = sum of:
        0.020340007 = product of:
          0.040680014 = sum of:
            0.040680014 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.040680014 = score(doc=2417,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Pages
    S.22-25
  16. Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985) 0.00
    0.0012124473 = product of:
      0.016974261 = sum of:
        0.016974261 = weight(_text_:subject in 3649) [ClassicSimilarity], result of:
          0.016974261 = score(doc=3649,freq=2.0), product of:
            0.10738805 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03002521 = queryNorm
            0.15806471 = fieldWeight in 3649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
      0.071428575 = coord(1/14)
    
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  17. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.00
    8.7171455E-4 = product of:
      0.0122040035 = sum of:
        0.0122040035 = product of:
          0.024408007 = sum of:
            0.024408007 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
              0.024408007 = score(doc=3564,freq=2.0), product of:
                0.10514317 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03002521 = queryNorm
                0.23214069 = fieldWeight in 3564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3564)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    9. 1.1996 10:22:31