Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985)
- Abstract
- F. W. Lancaster is known for his writing on the state of the art in library/information science. His skill in identifying significant contributions and synthesizing literature in fields as diverse as online systems, vocabulary control, measurement and evaluation, and the paperless society has earned him esteem as a chronicler of information science. Equally deserving of repute is his own contribution to research in the discipline: his evaluation of the MEDLARS operating system. The MEDLARS study is notable for several reasons. It was the first large-scale application of retrieval experiment methodology to the evaluation of an actual operating system. As such, problems had to be faced that do not arise in laboratory-like conditions. One example is the problem of recall: how to determine, for a very large and dynamic database, the number of documents relevant to a given search request. By solving this problem and others attendant upon transferring an experimental methodology to the real world, Lancaster created a constructive procedure that could be used to improve the design and functioning of retrieval systems. The MEDLARS study is notable also for its contribution to our understanding of what constitutes a good index language and good indexing. The ideal retrieval system would be one that retrieves all and only relevant documents. The failures that occur in real operating systems, when a relevant document is not retrieved (a recall failure) or an irrelevant document is retrieved (a precision failure), can be analysed to assess the impact of various factors on the performance of the system. This is exactly what Lancaster did. He found both the MEDLARS indexing and the MeSH index language to be significant factors affecting retrieval performance. The indexing, primarily because it was insufficiently exhaustive, explained a large number of recall failures. The index language, largely because of its insufficient specificity, accounted for a large number of precision failures. The purpose of identifying factors responsible for a system's failures is ultimately to improve the system. Unlike many user studies, the MEDLARS evaluation yielded recommendations that were eventually implemented. Indexing exhaustivity was increased and the MeSH index language was enriched with more specific terms and a larger entry vocabulary.
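- To make the failure analysis concrete, here is a minimal sketch of how recall and precision failures are tallied for a single search request. The document IDs and relevance judgments are hypothetical illustrations, not data from the MEDLARS evaluation:

```python
# Minimal sketch: recall/precision failure analysis for one search request.
# Document IDs and relevance judgments below are hypothetical.

relevant = {"d1", "d2", "d3", "d4", "d5"}   # documents judged relevant to the request
retrieved = {"d2", "d3", "d6", "d7"}        # documents the system returned

recall_failures = relevant - retrieved       # relevant documents the system missed
precision_failures = retrieved - relevant    # irrelevant documents the system returned

recall = len(relevant & retrieved) / len(relevant)       # 2/5 = 0.40
precision = len(relevant & retrieved) / len(retrieved)   # 2/4 = 0.50

print("recall failures:   ", sorted(recall_failures))     # ['d1', 'd4', 'd5']
print("precision failures:", sorted(precision_failures))  # ['d6', 'd7']
print(f"recall = {recall:.2f}, precision = {precision:.2f}")
```

  In Lancaster's study, each such failure was then traced back to a cause, e.g. insufficiently exhaustive indexing (recall failures) or insufficiently specific index-language terms (precision failures).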