Search (33 results, page 1 of 2)

  • × language_ss:"e"
  • × theme_ss:"Retrievalstudien"
  • × year_i:[1980 TO 1990}
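  The three filters above are ordinary Lucene/Solr filter queries; in year_i:[1980 TO 1990} the square bracket makes the lower bound inclusive and the curly brace makes the upper bound exclusive. As a minimal sketch of how such a faceted request could be issued, assuming a hypothetical Solr endpoint and core name (only the field names and filter values are taken from the page above):

    import requests  # endpoint URL and core name below are assumptions

    params = {
        "q": "retrieval",                      # illustrative free-text query
        "fq": [                                # the facet filters shown above
            'language_ss:"e"',
            'theme_ss:"Retrievalstudien"',
            "year_i:[1980 TO 1990}",           # [ inclusive lower, } exclusive upper
        ],
        "rows": 20,
        "wt": "json",
    }
    resp = requests.get("http://localhost:8983/solr/lis/select", params=params)
    print(resp.json()["response"]["numFound"])  # e.g. 33, as in the header above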
  1. Peritz, B.C.: On the informativeness of titles (1984) 0.16
    0.1625918 = product of:
      0.2438877 = sum of:
        0.23217005 = weight(_text_:sociology in 2636) [ClassicSimilarity], result of:
          0.23217005 = score(doc=2636,freq=4.0), product of:
            0.30495512 = queryWeight, product of:
              6.9606886 = idf(docFreq=113, maxDocs=44218)
              0.043811057 = queryNorm
            0.7613253 = fieldWeight in 2636, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.9606886 = idf(docFreq=113, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2636)
        0.01171765 = product of:
          0.0234353 = sum of:
            0.0234353 = weight(_text_:of in 2636) [ClassicSimilarity], result of:
              0.0234353 = score(doc=2636,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34207192 = fieldWeight in 2636, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2636)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
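    Every relevance figure on this page decomposes the same way: the explain tree above is plain ClassicSimilarity (TF-IDF) arithmetic, with weight = queryWeight x fieldWeight, queryWeight = idf x queryNorm, fieldWeight = sqrt(freq) x idf x fieldNorm, and coord factors scaling for the fraction of query clauses matched. A minimal Python sketch that reproduces the top score from the reported factors (function and variable names are mine, not Lucene's):

      import math

      def term_weight(freq, idf, query_norm, field_norm):
          # ClassicSimilarity: weight = queryWeight * fieldWeight
          query_weight = idf * query_norm                    # idf(t) * queryNorm(q)
          field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf(t) * fieldNorm(d)
          return query_weight * field_weight

      QUERY_NORM = 0.043811057  # the queryNorm reported in every tree on this page

      # Result 1 (doc 2636): 2 of 3 query clauses matched -> outer coord(2/3)
      sociology = term_weight(4.0, 6.9606886, QUERY_NORM, 0.0546875)
      of_term = term_weight(16.0, 1.5637573, QUERY_NORM, 0.0546875) * 0.5  # inner coord(1/2)

      score = (sociology + of_term) * (2.0 / 3.0)
      print(f"{score:.7f}")  # 0.1625918, matching the explain output above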
    
    Abstract
    The frequency of non-informative titles of journal articles was assessed for two fields: library and information science, and sociology. The percentage of non-informative titles was 21% in the former and 15% in the latter. In both fields, the non-informative titles were concentrated in only a few journals. The non-informative titles in library science were derived mainly from non-research journals. In sociology the reasons for non-informative titles may be more complex; some of these journals are highly cited. For the improvement of retrieval efficiency, the adoption of a policy encouraging informative titles (as in journals of chemistry) is recommended.
  2. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.03
    0.026619852 = product of:
      0.079859555 = sum of:
        0.079859555 = sum of:
          0.020501617 = weight(_text_:of in 2417) [ClassicSimilarity], result of:
            0.020501617 = score(doc=2417,freq=6.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2992506 = fieldWeight in 2417, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.078125 = fieldNorm(doc=2417)
          0.059357934 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
            0.059357934 = score(doc=2417,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.38690117 = fieldWeight in 2417, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2417)
      0.33333334 = coord(1/3)
    
    Pages
    S.22-25
    Series
    Proceedings of the American Society for Information Science; vol. 20
    Source
    Productivity in the information age : proceedings of the 46th ASIS annual meeting, 1983. Ed.: Raymond F. Vondran
  3. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.02
    0.020407092 = product of:
      0.06122127 = sum of:
        0.06122127 = sum of:
          0.025606511 = weight(_text_:of in 3564) [ClassicSimilarity], result of:
            0.025606511 = score(doc=3564,freq=26.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.37376386 = fieldWeight in 3564, product of:
                5.0990195 = tf(freq=26.0), with freq of:
                  26.0 = termFreq=26.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
          0.03561476 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
            0.03561476 = score(doc=3564,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.23214069 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
      0.33333334 = coord(1/3)
    
    Abstract
    Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text data bases of clinical medical journal articles (CCML, the Comprehensive Core Medical Library, from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text data bases with the text of the articles of these 204 citations revealed that two reasons contributed to these failures: the searcher often constructed a restrictive strategy which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
    Source
    ASIS'89. Managing information and technology. Proceedings of the 52nd annual meeting of the American Society for Information Science, Washington D.C., 30.10.-2.11.1989. Vol.26. Ed. by J. Katzer and G.B. Newby
  4. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.02
    0.02002593 = product of:
      0.060077786 = sum of:
        0.060077786 = sum of:
          0.018527232 = weight(_text_:of in 5001) [ClassicSimilarity], result of:
            0.018527232 = score(doc=5001,freq=10.0), product of:
              0.06850986 = queryWeight, product of:
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.043811057 = queryNorm
              0.2704316 = fieldWeight in 5001, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.5637573 = idf(docFreq=25162, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
          0.041550554 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
            0.041550554 = score(doc=5001,freq=2.0), product of:
              0.15341885 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043811057 = queryNorm
              0.2708308 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
      0.33333334 = coord(1/3)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and the arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  5. Cleverdon, C.W.; Mills, J.: ¬The testing of index language devices (1985) 0.00
    0.0046012485 = product of:
      0.013803745 = sum of:
        0.013803745 = product of:
          0.02760749 = sum of:
            0.02760749 = weight(_text_:of in 3643) [ClassicSimilarity], result of:
              0.02760749 = score(doc=3643,freq=68.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.40297103 = fieldWeight in 3643, product of:
                  8.246211 = tf(freq=68.0), with freq of:
                    68.0 = termFreq=68.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3643)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A landmark event in the twentieth-century development of subject analysis theory was a retrieval experiment, begun in 1957, by Cyril Cleverdon, Librarian of the Cranfield Institute of Technology. For this work he received the Professional Award of the Special Libraries Association in 1962 and the Award of Merit of the American Society for Information Science in 1970. The objective of the experiment, called Cranfield I, was to test the ability of four indexing systems (UDC, Facet, Uniterm, and Alphabetic Subject Headings) to retrieve material responsive to questions addressed to a collection of documents. The experiment was ambitious in scale, consisting of eighteen thousand documents and twelve hundred questions. Prior to Cranfield I, the question of what constitutes good indexing was approached subjectively, and reference was made to assumptions in the form of principles that should be observed or user needs that should be met. Cranfield I was the first large-scale effort to use objective criteria for determining the parameters of good indexing. Its creative impetus was the definition of user satisfaction in terms of precision and recall. Out of the experiment emerged the definition of recall as the percentage of relevant documents retrieved and precision as the percentage of retrieved documents that were relevant. Operationalizing the concept of user satisfaction, that is, making it measurable, meant that it could be studied empirically and manipulated as a variable in mathematical equations. Much has been made of the fact that the experimental methodology of Cranfield I was seriously flawed. This is unfortunate, as it tends to diminish Cleverdon's contribution, which was not methodological (such contributions can be left to benchmark researchers) but rather creative: the introduction of a new paradigm, one that proved to be eminently productive. The criticism leveled at the methodological shortcomings of Cranfield I underscored the need for more precise definitions of the variables involved in information retrieval. Particularly important was the need for a definition of the dependent variable, index language. Like the definitions of precision and recall, that of index language provided a new way of looking at the indexing process. It was a re-visioning that stimulated research activity and led not only to a better understanding of indexing but also to the design of better retrieval systems. Cranfield I was followed by Cranfield II. While Cranfield I was a wholesale comparison of four indexing "systems," Cranfield II aimed to single out various individual factors in index languages, called "indexing devices," and to measure how variations in these affected retrieval performance. The following selection represents the thinking at Cranfield midway between these two notable retrieval experiments.
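    Written out in standard set notation (the notation is mine, not the original text's), with R the set of relevant documents and A the set of retrieved documents, the two Cranfield measures are:

      \[ \text{recall} = \frac{|R \cap A|}{|R|}, \qquad \text{precision} = \frac{|R \cap A|}{|A|} \]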
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  6. Hartley, D.: ¬A 'laboratory' method for the comparison of retrieval effectiveness in manual and online searching (1984) 0.00
    0.004463867 = product of:
      0.0133916 = sum of:
        0.0133916 = product of:
          0.0267832 = sum of:
            0.0267832 = weight(_text_:of in 8919) [ClassicSimilarity], result of:
              0.0267832 = score(doc=8919,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.39093933 = fieldWeight in 8919, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8919)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper provides a brief review of a number of published studies of the comparative retrieval effectiveness of manual and online searching. A description of a 'laboratory' approach to the comparison of retrieval effectiveness of manual and online searching is presented, together with results obtained using this approach. It is suggested that the methodology could easily be adopted elsewhere.
  7. Feng, S.: ¬A comparative study of indexing languages in single and multidatabase searching (1989) 0.00
    0.004463867 = product of:
      0.0133916 = sum of:
        0.0133916 = product of:
          0.0267832 = sum of:
            0.0267832 = weight(_text_:of in 2494) [ClassicSimilarity], result of:
              0.0267832 = score(doc=2494,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.39093933 = fieldWeight in 2494, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2494)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    An experiment was conducted using 3 data bases in library and information science - Library and Information Science Abstracts (LISA), Information Science Abstracts and ERIC - to investigate some of the main factors affecting on-line searching: effectiveness of search vocabularies, combinations of fields searched, and overlaps among data bases. Natural language, controlled vocabulary and a mixture of natural language and controlled terms were tested using different fields of bibliographic records. Also discusses a comparative evaluation of single and multi-data base searching, measuring the overlap among data bases and their influence upon on-line searching.
    Source
    Canadian Journal of Information Science. 14(1989) no.2, S.26-46
  8. Fidel, R.: Online searching styles : a case-study-based model of searching behavior (1984) 0.00
    0.004428855 = product of:
      0.013286565 = sum of:
        0.013286565 = product of:
          0.02657313 = sum of:
            0.02657313 = weight(_text_:of in 1659) [ClassicSimilarity], result of:
              0.02657313 = score(doc=1659,freq=28.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38787308 = fieldWeight in 1659, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1659)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The model of operationalist and conceptualist searching styles describes searching behavior of experienced online searchers. It is based on the systematic observation of five experienced online searchers doing their regular, job-related searches, and on the analysis of 10 to 13 searches conducted by each of them. Operationalist searchers aim at optimal strategies to achieve precise retrieval; they use a large range of system capabilities in their interaction. They preserve the specific meaning of the request, and the aim of their interactions is an answer set representing the request precisely. Conceptualist searchers analyze a request by seeking to fit it into a faceted structure. They first enter the facet that represents the most important aspect of the request. Their search is then centered on retrieving subsets from this primary set by introducing additional facets. In contrast to the operationalists, they are primarily concerned with recall. During the interaction they preserve the faceted structure, but may change the specific meaning of the request. Although not comprehensive, the model aids in recognizing special and individual characteristics of searching behavior which provide explanations of previous research and guidelines for further investigations into the search process
    Source
    Journal of the American Society for Information Science. 35(1984), S.211-221
  9. Gordon, M.; Kochen, M.: Recall-precision trade-off : a derivation (1989) 0.00
    0.0041003237 = product of:
      0.01230097 = sum of:
        0.01230097 = product of:
          0.02460194 = sum of:
            0.02460194 = weight(_text_:of in 4160) [ClassicSimilarity], result of:
              0.02460194 = score(doc=4160,freq=24.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.3591007 = fieldWeight in 4160, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4160)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The inexact nature of document retrieval gives rise to a fundamental recall-precision trade-off: generally, recall improves at the expense of precision, or precision improves at the expense of recall. This trade-off is borne out empirically and has qualitatively intuitive explanations. In this article, we explore this relationship mathematically to explain it further. We see that the recall-precision trade-off hinges on a decline in the proportion of relevant documents which are retrieved, successively, over time. Further, we examine several mathematical functions sharing this property and conclude that the equation that best models recall as a function of time is a logarithm of a quadratic function. Our conclusion meets the following requirements: the function we derive predicts non-decreasing recall over time until the last relevant document is retrieved (regardless of the density of relevant documents in the collection) without imposing any artificial restrictions on either what percentage of the collection would need to be examined to achieve perfect recall or what the level of precision would be at that time. Other models examined fail to meet one or more of these criteria.
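    Schematically, and only as a hedged reading of the abstract (the fitted coefficients are not given here), the "logarithm of a quadratic" conclusion says recall as a function of time t has the shape

      \[ R(t) = \log\!\left(a t^{2} + b t + c\right), \]

    with the placeholder coefficients a, b, c chosen so that R is non-decreasing until the last relevant document is retrieved.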
    Source
    Journal of the American Society for Information Science. 40(1989) no.3, S.145-151
  10. Bernstein, L.M.; Williamson, R.E.: Testing of a natural language retrieval system for a full text knowledge base (1984) 0.00
    0.0039058835 = product of:
      0.01171765 = sum of:
        0.01171765 = product of:
          0.0234353 = sum of:
            0.0234353 = weight(_text_:of in 1803) [ClassicSimilarity], result of:
              0.0234353 = score(doc=1803,freq=4.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34207192 = fieldWeight in 1803, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1803)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the American Society for Information Science. 35(1984), S.235-247
  11. Salton, G.: Thoughts about modern retrieval technologies (1988) 0.00
    0.0039058835 = product of:
      0.01171765 = sum of:
        0.01171765 = product of:
          0.0234353 = sum of:
            0.0234353 = weight(_text_:of in 1522) [ClassicSimilarity], result of:
              0.0234353 = score(doc=1522,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.34207192 = fieldWeight in 1522, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1522)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Paper presented at the 30th Annual Conference of the National Federation of Abstracting and Information Services, Philadelphia, 28 Feb-2 Mar 88. In recent years, the amount and the variety of available machine-readable data have grown, and new technologies have been introduced, such as high-density storage devices and fancy graphic displays useful for information transformation and access. New approaches have also been considered for processing the stored data, based on the construction of knowledge bases representing the contents and structure of the information, and on the use of expert system techniques to control the user-system interactions. Provides a brief evaluation of the new information processing technologies, and of the software methods proposed for information manipulation.
  12. Prasher, R.G.: Evaluation of indexing system (1989) 0.00
    0.003865822 = product of:
      0.011597466 = sum of:
        0.011597466 = product of:
          0.023194931 = sum of:
            0.023194931 = weight(_text_:of in 4998) [ClassicSimilarity], result of:
              0.023194931 = score(doc=4998,freq=12.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.33856338 = fieldWeight in 4998, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4998)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes an information system and its various components: index file construction, query formulation and searching. Discusses an indexing system, and brings out the need for its evaluation. Explains the concept of the efficiency of indexing systems and discusses factors which control this efficiency. Gives criteria for evaluation. Discusses recall and precision ratios, as well as noise ratio, novelty ratio, exhaustivity and specificity, and the impact of each on the efficiency of an indexing system. Also mentions various steps for evaluation.
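    For orientation, the two less common ratios named above have standard textbook readings (paraphrased, not quoted from Prasher): noise ratio is the complement of precision, and novelty ratio is the share of retrieved relevant items previously unknown to the user:

      \[ \text{noise ratio} = 1 - \text{precision}, \qquad \text{novelty ratio} = \frac{|\text{relevant retrieved, new to the user}|}{|\text{relevant retrieved}|} \]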
    Source
    Herald of library science. 28(1989) no.3, S.157-65
  13. Schabas, A.H.: Postcoordinate retrieval : a comparison of two retrieval languages (1982) 0.00
    0.003743066 = product of:
      0.0112291975 = sum of:
        0.0112291975 = product of:
          0.022458395 = sum of:
            0.022458395 = weight(_text_:of in 1202) [ClassicSimilarity], result of:
              0.022458395 = score(doc=1202,freq=20.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.32781258 = fieldWeight in 1202, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1202)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article reports on a comparison of the postcoordinate retrieval effectiveness of two indexing languages: LCSH and PRECIS. The effect of augmenting each with title words was also studied. The database for the study was over 15,000 UK MARC records. Users returned 5,326 relevance judgements for citations retrieved for 61 SDI profiles, representing a wide variety of subjects. Results are reported in terms of precision and relative recall. Pure/applied sciences data and social science data were analyzed separately. Cochran's significance tests for ratios were used to interpret the findings. Recall emerged as the more important measure discriminating the behavior of the two languages. Addition of title words was found to improve recall of both indexing languages significantly. A direct relationship was observed between recall and exhaustivity. For the social sciences searches, recalls from PRECIS alone and from PRECIS with title words were significantly higher than those from LCSH alone and from LCSH with title words, respectively. Corresponding comparisons for the pure/applied sciences searches revealed no significant differences.
    Source
    Journal of the American Society for Information Science. 33(1982), S.32-37
  14. Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985) 0.00
    0.0034396404 = product of:
      0.010318921 = sum of:
        0.010318921 = product of:
          0.020637842 = sum of:
            0.020637842 = weight(_text_:of in 3649) [ClassicSimilarity], result of:
              0.020637842 = score(doc=3649,freq=38.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.30123898 = fieldWeight in 3649, product of:
                  6.164414 = tf(freq=38.0), with freq of:
                    38.0 = termFreq=38.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3649)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    F.W. Lancaster is known for his writing on the state of the art in library/information science. His skill in identifying significant contributions and synthesizing literature in fields as diverse as online systems, vocabulary control, measurement and evaluation, and the paperless society has earned him esteem as a chronicler of information science. Equally deserving of repute is his own contribution to research in the discipline: his evaluation of the MEDLARS operating system. The MEDLARS study is notable for several reasons. It was the first large-scale application of retrieval experiment methodology to the evaluation of an actual operating system. As such, problems had to be faced that do not arise in laboratory-like conditions. One example is the problem of recall: how to determine, for a very large and dynamic database, the number of documents relevant to a given search request. By solving this problem and others attendant upon transferring an experimental methodology to the real world, Lancaster created a constructive procedure that could be used to improve the design and functioning of retrieval systems. The MEDLARS study is notable also for its contribution to our understanding of what constitutes a good index language and good indexing. The ideal retrieval system would be one that retrieves all and only relevant documents. The failures that occur in real operating systems, when a relevant document is not retrieved (a recall failure) or an irrelevant document is retrieved (a precision failure), can be analysed to assess the impact of various factors on the performance of the system. This is exactly what Lancaster did. He found both the MEDLARS indexing and the MeSH index language to be significant factors affecting retrieval performance. The indexing, primarily because it was insufficiently exhaustive, explained a large number of recall failures. The index language, largely because of its insufficient specificity, accounted for a large number of precision failures. The purpose of identifying factors responsible for a system's failures is ultimately to improve the system. Unlike many user studies, the MEDLARS evaluation yielded recommendations that were eventually implemented: indexing exhaustivity was increased and the MeSH index language was enriched with more specific terms and a larger entry vocabulary.
    Footnote
    Original in: Journal of the American Medical Association 207(1969) S.114-120.
    Source
    Theory of subject analysis: a sourcebook. Ed.: L.M. Chan, et al
  15. Madelung, H.-O.: Subject searching in the social sciences : a comparison of PRECIS and KWIC indexes to newspaper articles (1982) 0.00
    0.003382594 = product of:
      0.010147782 = sum of:
        0.010147782 = product of:
          0.020295564 = sum of:
            0.020295564 = weight(_text_:of in 5517) [ClassicSimilarity], result of:
              0.020295564 = score(doc=5517,freq=12.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.29624295 = fieldWeight in 5517, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5517)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    89 articles from a small Danish left-wing newspaper were indexed by PRECIS and KWIC. The articles cover a wide range of social science subjects. Controlled test searches in both indexes were carried out by 20 students of library science. The results obtained from this small-scale retrieval test were evaluated by a chi-square test. The PRECIS index led to more correct answers and fewer wrong answers than the KWIC index, i.e. it had both better recall and greater precision. Furthermore, the students were more confident in their judgement of the relevance of retrieved articles in the PRECIS index than in the KWIC index, and they generally favoured the PRECIS index in the subjective judgement they were asked to make.
    Source
    Journal of librarianship. 14(1982), S.45-58
  16. Blair, D.C.: Full text retrieval : Evaluation and implications (1986) 0.00
    0.0033478998 = product of:
      0.010043699 = sum of:
        0.010043699 = product of:
          0.020087399 = sum of:
            0.020087399 = weight(_text_:of in 2047) [ClassicSimilarity], result of:
              0.020087399 = score(doc=2047,freq=16.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.2932045 = fieldWeight in 2047, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2047)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Recently, a detailed evaluation of a large, operational full-text document retrieval system was reported in the literature. Values of precision and recall were estimated using traditional statistical sampling methods and blind evaluation procedures. The results of this evaluation demonstrated that the system tested was retrieving less than 20% of the relevant documents when the searchers believed it was retrieving over 75% of the relevant documents. This evaluation is described, including some data not reported in the original article. Also discussed are the implications which this study has for how the subjects of documents should be represented, as well as the importance of rigorous retrieval evaluations for the furtherance of information retrieval research.
  17. Robertson, S.E.: ¬The methodology of information retrieval experiment (1981) 0.00
    0.0031564306 = product of:
      0.009469291 = sum of:
        0.009469291 = product of:
          0.018938582 = sum of:
            0.018938582 = weight(_text_:of in 3146) [ClassicSimilarity], result of:
              0.018938582 = score(doc=3146,freq=2.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.27643585 = fieldWeight in 3146, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.125 = fieldNorm(doc=3146)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  18. Tague, J.M.: ¬The pragmatics of information retrieval experimentation (1981) 0.00
    0.0031564306 = product of:
      0.009469291 = sum of:
        0.009469291 = product of:
          0.018938582 = sum of:
            0.018938582 = weight(_text_:of in 3149) [ClassicSimilarity], result of:
              0.018938582 = score(doc=3149,freq=2.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.27643585 = fieldWeight in 3149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.125 = fieldNorm(doc=3149)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  19. Keen, E.M.: Laboratory tests of manual systems (1981) 0.00
    0.0031564306 = product of:
      0.009469291 = sum of:
        0.009469291 = product of:
          0.018938582 = sum of:
            0.018938582 = weight(_text_:of in 3152) [ClassicSimilarity], result of:
              0.018938582 = score(doc=3152,freq=2.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.27643585 = fieldWeight in 3152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.125 = fieldNorm(doc=3152)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  20. Sievert, M.E.; McKinin, E.J.; Slough, M.: ¬A comparison of indexing and full-text for the retrieval of clinical medical literature (1988) 0.00
    0.0031316737 = product of:
      0.009395021 = sum of:
        0.009395021 = product of:
          0.018790042 = sum of:
            0.018790042 = weight(_text_:of in 3563) [ClassicSimilarity], result of:
              0.018790042 = score(doc=3563,freq=14.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.2742677 = fieldWeight in 3563, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3563)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The availability of two full-text data bases of the clinical medical journal literature, MEDIS from Mead Data Central and CCML from BRS Information Technologies, provided an opportunity to compare the efficacy of full text with the traditional indexed system, MEDLINE, for retrieval effectiveness. 100 searches were solicited from an academic health sciences library, and the requests were searched on all 3 data bases. The results were compared, and preliminary analysis suggests that the full-text data bases retrieve a greater number of relevant citations, while MEDLINE achieves higher precision.
    Source
    ASIS'88. Information technology: planning for the next fifty years. Proceedings of the 51st annual meeting of the American Society for Information Science, Atlanta, Georgia, 23-27.10.1988. Vol.25. Ed. by C.L. Borgman and E.Y.H. Pai