Search (21 results, page 1 of 2)

  • × theme_ss:"Retrievalstudien"
  • × year_i:[1980 TO 1990}
  1. Madelung, H.-O.: Subject searching in the social sciences : a comparison of PRECIS and KWIC indexes to newspaper articles (1982) 0.15
    0.14574726 = product of:
      0.24291208 = sum of:
        0.05648775 = weight(_text_:context in 5517) [ClassicSimilarity], result of:
          0.05648775 = score(doc=5517,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32054642 = fieldWeight in 5517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5517)
        0.1538049 = weight(_text_:index in 5517) [ClassicSimilarity], result of:
          0.1538049 = score(doc=5517,freq=12.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.82782143 = fieldWeight in 5517, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5517)
        0.03261943 = weight(_text_:system in 5517) [ClassicSimilarity], result of:
          0.03261943 = score(doc=5517,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 5517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5517)
      0.6 = coord(3/5)
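The breakdown above is Lucene's ClassicSimilarity explain output. Its arithmetic can be reproduced directly from the printed factors; a minimal Python sketch, with queryNorm, docFreq, maxDocs, and fieldNorm copied from the output above:

```python
import math

# Lucene ClassicSimilarity building blocks (tf-idf variant)
def tf(freq):
    return math.sqrt(freq)                            # tf = sqrt(term frequency)

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))  # idf = 1 + ln(N / (df + 1))

query_norm = 0.04251826                     # copied from the explain output
idf_context = idf(1904, 44218)              # ~ 4.14465, as printed above
query_weight = idf_context * query_norm     # ~ 0.17622331
field_weight = tf(2.0) * idf_context * 0.0546875   # tf * idf * fieldNorm ~ 0.32054642
term_score = query_weight * field_weight    # ~ 0.05648775, the "context" weight
```

The document score is then the sum of the per-term scores multiplied by the coordination factor: here 0.24291208 × coord(3/5) ≈ 0.14574726, matching the top of the tree.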
    
    Abstract
    89 articles from a small Danish left-wing newspaper were indexed by PRECIS and KWIC. The articles cover a wide range of social science subjects. Controlled test searches in both indexes were carried out by 20 students of library science. The results obtained from this small-scale retrieval test were evaluated by a chi-square test. The PRECIS index led to more correct answers and fewer wrong answers than the KWIC index, i.e. it had both better recall and greater precision. Furthermore, the students were more confident in their judgement of the relevance of retrieved articles in the PRECIS index than in the KWIC index; and they generally favoured the PRECIS index in the subjective judgement they were asked to make.
    Theme
    Preserved Context Index System (PRECIS)
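The abstract mentions evaluation by a chi-square test. For a 2×2 table of index type versus correct/wrong answers, the statistic has a well-known closed form; the function below is a generic sketch, and the numbers in the usage note are illustrative, not Madelung's data:

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table laid out as:
        rows    = index type (e.g. PRECIS / KWIC)
        columns = answer outcome (correct / wrong)
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

For a toy table (PRECIS: 40 correct, 10 wrong; KWIC: 25 correct, 25 wrong) this gives roughly 9.89, well above the 3.84 critical value at p = 0.05 for one degree of freedom.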
  2. Prasher, R.G.: Evaluation of indexing system (1989) 0.06
    0.058527745 = product of:
      0.14631936 = sum of:
        0.07176066 = weight(_text_:index in 4998) [ClassicSimilarity], result of:
          0.07176066 = score(doc=4998,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3862362 = fieldWeight in 4998, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=4998)
        0.0745587 = weight(_text_:system in 4998) [ClassicSimilarity], result of:
          0.0745587 = score(doc=4998,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.5567675 = fieldWeight in 4998, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=4998)
      0.4 = coord(2/5)
    
    Abstract
    Describes an information system and its various components: index file construction, query formulation, and searching. Discusses an indexing system and brings out the need for its evaluation. Explains the concept of the efficiency of indexing systems and discusses the factors which control this efficiency. Gives criteria for evaluation. Discusses recall and precision ratios, as well as noise ratio, novelty ratio, exhaustivity, and specificity, and the impact of each on the efficiency of an indexing system. Also mentions various steps for evaluation.
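The ratios discussed above can be sketched as set operations over a single retrieval run. The function and the working definitions used here (noise ratio as the non-relevant share of the output; novelty ratio as the share of retrieved relevant items new to the user) are my illustration, not taken from Prasher's text:

```python
def retrieval_ratios(retrieved, relevant, known_to_user=frozenset()):
    """Illustrative evaluation ratios for one retrieval run."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    noise = 1.0 - precision                      # non-relevant share of output
    novelty = (len(hits - set(known_to_user)) / len(hits)) if hits else 0.0
    return recall, precision, noise, novelty
```

For example, retrieving {1..5} against relevant items {2, 3, 6, 7} with item 2 already known to the user yields recall 0.5, precision 0.4, noise 0.6, and novelty 0.5.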
  3. Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985) 0.05
    0.04696734 = product of:
      0.11741835 = sum of:
        0.07176066 = weight(_text_:index in 3649) [ClassicSimilarity], result of:
          0.07176066 = score(doc=3649,freq=8.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3862362 = fieldWeight in 3649, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
        0.04565769 = weight(_text_:system in 3649) [ClassicSimilarity], result of:
          0.04565769 = score(doc=3649,freq=12.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.3409491 = fieldWeight in 3649, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
      0.4 = coord(2/5)
    
    Abstract
    F. W. Lancaster is known for his writing on the state of the art in library/information science. His skill in identifying significant contributions and synthesizing literature in fields as diverse as online systems, vocabulary control, measurement and evaluation, and the paperless society has earned him esteem as a chronicler of information science. Equally deserving of repute is his own contribution to research in the discipline: his evaluation of the MEDLARS operating system. The MEDLARS study is notable for several reasons. It was the first large-scale application of retrieval experiment methodology to the evaluation of an actual operating system. As such, problems had to be faced that do not arise in laboratory-like conditions. One example is the problem of recall: how to determine, for a very large and dynamic database, the number of documents relevant to a given search request. By solving this problem and others attendant upon transferring an experimental methodology to the real world, Lancaster created a constructive procedure that could be used to improve the design and functioning of retrieval systems. The MEDLARS study is notable also for its contribution to our understanding of what constitutes a good index language and good indexing. The ideal retrieval system would be one that retrieves all and only relevant documents. The failures that occur in real operating systems, when a relevant document is not retrieved (a recall failure) or an irrelevant document is retrieved (a precision failure), can be analysed to assess the impact of various factors on the performance of the system. This is exactly what Lancaster did. He found both the MEDLARS indexing and the MeSH index language to be significant factors affecting retrieval performance. The indexing, primarily because it was insufficiently exhaustive, explained a large number of recall failures. 
The index language, largely because of its insufficient specificity, accounted for a large number of precision failures. The purpose of identifying factors responsible for a system's failures is ultimately to improve the system. Unlike many user studies, the MEDLARS evaluation yielded recommendations that were eventually implemented: indexing exhaustivity was increased, and the MeSH index language was enriched with more specific terms and a larger entry vocabulary.
  4. Blair, D.C.; Maron, M.E.: ¬An evaluation of retrieval effectiveness for a full-text document-retrieval system (1985) 0.03
    0.026390245 = product of:
      0.065975614 = sum of:
        0.046599183 = weight(_text_:system in 1345) [ClassicSimilarity], result of:
          0.046599183 = score(doc=1345,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.3479797 = fieldWeight in 1345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.078125 = fieldNorm(doc=1345)
        0.01937643 = product of:
          0.05812929 = sum of:
            0.05812929 = weight(_text_:29 in 1345) [ClassicSimilarity], result of:
              0.05812929 = score(doc=1345,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.38865322 = fieldWeight in 1345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1345)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Footnote
    See also: Salton, G.: Another look ... Comm. ACM 29(1986) S.648-656; Blair, D.C.: Full text retrieval ... Int. Class. 13(1986) S.18-23; Blair, D.C., M.E. Maron: Full-text information retrieval ... Inf. Proc. Man. 26(1990) S.437-447.
  5. Blair, D.C.: Full text retrieval : Evaluation and implications (1986) 0.02
    0.020466631 = product of:
      0.05116658 = sum of:
        0.03954072 = weight(_text_:system in 2047) [ClassicSimilarity], result of:
          0.03954072 = score(doc=2047,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.29527056 = fieldWeight in 2047, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2047)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 2047) [ClassicSimilarity], result of:
              0.034877572 = score(doc=2047,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 2047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2047)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Recently, a detailed evaluation of a large, operational full-text document retrieval system was reported in the literature. Values of precision and recall were estimated using traditional statistical sampling methods and blind evaluation procedures. The results of this evaluation demonstrated that the system tested was retrieving less than 20% of the relevant documents when the searchers believed it was retrieving over 75% of the relevant documents. This evaluation is described, including some data not reported in the original article. Also discussed are the implications which this study has for how the subjects of documents should be represented, as well as the importance of rigorous retrieval evaluations for the furtherance of information retrieval research
    Footnote
    See: Blair, D.C., M.E. Maron: An evaluation ... Comm. ACM 28(1985) S.280-299; Salton, G.: Another look ... Comm. ACM 29(1986) S.648-656; Blair, D.C., M.E. Maron: Full-text information retrieval ... Inf. Proc. Man. 26(1990) S.437-447.
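A recurring methodological point in the Blair/Maron work is estimating recall when the whole database cannot be judged. A hedged sketch of the general sampling idea (my simplification, not their exact protocol): judge a random sample of the unretrieved documents and scale the relevant count up to the full pool:

```python
import random

def estimate_recall(retrieved_relevant, unretrieved_pool, judge, sample_size, seed=0):
    """Estimate recall by sampling the unretrieved documents.

    retrieved_relevant: number of relevant documents found by the search
    unretrieved_pool:   list of documents the search did NOT return
    judge:              callable deciding relevance of one document
    """
    rng = random.Random(seed)
    sample = rng.sample(unretrieved_pool, min(sample_size, len(unretrieved_pool)))
    relevant_in_sample = sum(1 for doc in sample if judge(doc))
    # scale the sample's relevant count up to the whole unretrieved pool
    est_missed = relevant_in_sample * len(unretrieved_pool) / len(sample)
    return retrieved_relevant / (retrieved_relevant + est_missed)
```

With a pool of 100 unretrieved documents of which 10 are relevant, and 10 relevant documents retrieved, a full-pool "sample" recovers the true recall of 0.5.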
  6. Cooper, W.S.: Gedanken experimentation : an alternative to traditional system testing? (1981) 0.01
    0.01491174 = product of:
      0.0745587 = sum of:
        0.0745587 = weight(_text_:system in 3155) [ClassicSimilarity], result of:
          0.0745587 = score(doc=3155,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.5567675 = fieldWeight in 3155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.125 = fieldNorm(doc=3155)
      0.2 = coord(1/5)
    
  7. Sparck Jones, K.: Retrieval system tests 1958-1978 (1981) 0.01
    0.01491174 = product of:
      0.0745587 = sum of:
        0.0745587 = weight(_text_:system in 3156) [ClassicSimilarity], result of:
          0.0745587 = score(doc=3156,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.5567675 = fieldWeight in 3156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.125 = fieldNorm(doc=3156)
      0.2 = coord(1/5)
    
  8. Cleverdon, C.W.; Mills, J.: ¬The testing of index language devices (1985) 0.01
    0.014352133 = product of:
      0.07176066 = sum of:
        0.07176066 = weight(_text_:index in 3643) [ClassicSimilarity], result of:
          0.07176066 = score(doc=3643,freq=8.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3862362 = fieldWeight in 3643, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=3643)
      0.2 = coord(1/5)
    
    Abstract
    A landmark event in the twentieth-century development of subject analysis theory was a retrieval experiment, begun in 1957, by Cyril Cleverdon, Librarian of the Cranfield Institute of Technology. For this work he received the Professional Award of the Special Libraries Association in 1962 and the Award of Merit of the American Society for Information Science in 1970. The objective of the experiment, called Cranfield I, was to test the ability of four indexing systems (UDC, Facet, Uniterm, and Alphabetic Subject Headings) to retrieve material responsive to questions addressed to a collection of documents. The experiment was ambitious in scale, consisting of eighteen thousand documents and twelve hundred questions. Prior to Cranfield I, the question of what constitutes good indexing was approached subjectively and reference was made to assumptions in the form of principles that should be observed or user needs that should be met. Cranfield I was the first large-scale effort to use objective criteria for determining the parameters of good indexing. Its creative impetus was the definition of user satisfaction in terms of precision and recall. Out of the experiment emerged the definition of recall as the percentage of relevant documents retrieved and precision as the percentage of retrieved documents that were relevant. Operationalizing the concept of user satisfaction, that is, making it measurable, meant that it could be studied empirically and manipulated as a variable in mathematical equations. Much has been made of the fact that the experimental methodology of Cranfield I was seriously flawed. This is unfortunate as it tends to diminish Cleverdon's contribution, which was not methodological (such contributions can be left to benchmark researchers) but rather creative: the introduction of a new paradigm, one that proved to be eminently productive. 
The criticism leveled at the methodological shortcomings of Cranfield I underscored the need for more precise definitions of the variables involved in information retrieval. Particularly important was the need for a definition of the variable index language. Like the definitions of precision and recall, that of index language provided a new way of looking at the indexing process. It was a re-visioning that stimulated research activity and led not only to a better understanding of indexing but also to the design of better retrieval systems. Cranfield I was followed by Cranfield II. While Cranfield I was a wholesale comparison of four indexing "systems," Cranfield II aimed to single out various individual factors in index languages, called "indexing devices," and to measure how variations in these affected retrieval performance. The following selection represents the thinking at Cranfield midway between these two notable retrieval experiments.
  9. Bernstein, L.M.; Williamson, R.E.: Testing of a natural language retrieval system for a full text knowledge base (1984) 0.01
    0.013047772 = product of:
      0.06523886 = sum of:
        0.06523886 = weight(_text_:system in 1803) [ClassicSimilarity], result of:
          0.06523886 = score(doc=1803,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.4871716 = fieldWeight in 1803, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.109375 = fieldNorm(doc=1803)
      0.2 = coord(1/5)
    
  10. Robertson, S.E.; Thompson, C.L.: ¬An operational evaluation of weighting, ranking and relevance feedback via a front-end system (1987) 0.01
    0.013047772 = product of:
      0.06523886 = sum of:
        0.06523886 = weight(_text_:system in 3858) [ClassicSimilarity], result of:
          0.06523886 = score(doc=3858,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.4871716 = fieldWeight in 3858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.109375 = fieldNorm(doc=3858)
      0.2 = coord(1/5)
    
  11. Croft, W.B.; Thompson, R.H.: Support for browsing in an intelligent text retrieval system (1989) 0.01
    0.013047772 = product of:
      0.06523886 = sum of:
        0.06523886 = weight(_text_:system in 5004) [ClassicSimilarity], result of:
          0.06523886 = score(doc=5004,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.4871716 = fieldWeight in 5004, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.109375 = fieldNorm(doc=5004)
      0.2 = coord(1/5)
    
  12. Information retrieval experiment (1981) 0.01
    0.011299702 = product of:
      0.056498513 = sum of:
        0.056498513 = weight(_text_:system in 2653) [ClassicSimilarity], result of:
          0.056498513 = score(doc=2653,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.42190298 = fieldWeight in 2653, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2653)
      0.2 = coord(1/5)
    
    Content
    Contains the contributions: ROBERTSON, S.E.: The methodology of information retrieval experiment; RIJSBERGEN, C.J. van: Retrieval effectiveness; BELKIN, N.: Ineffable concepts in information retrieval; TAGUE, J.M.: The pragmatics of information retrieval experimentation; LANCASTER, F.W.: Evaluation within the environment of an operating information service; BARRACLOUGH, E.D.: Opportunities for testing with online systems; KEEN, M.E.: Laboratory tests of manual systems; ODDY, R.N.: Laboratory tests: automatic systems; HEINE, M.D.: Simulation, and simulation experiments; COOPER, W.S.: Gedanken experimentation: an alternative to traditional system testing?; SPARCK JONES, K.: Actual tests - retrieval system tests; EVANS, L.: An experiment: search strategy variation in SDI profiles; SALTON, G.: The Smart environment for retrieval system evaluation - advantage and problem areas
  13. Salton, G.: Thoughts about modern retrieval technologies (1988) 0.01
    0.0092261685 = product of:
      0.04613084 = sum of:
        0.04613084 = weight(_text_:system in 1522) [ClassicSimilarity], result of:
          0.04613084 = score(doc=1522,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.34448233 = fieldWeight in 1522, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1522)
      0.2 = coord(1/5)
    
    Abstract
    Paper presented at the 30th Annual Conference of the National Federation of Abstracting and Information Services, Philadelphia, 28 Feb-2 Mar 88. In recent years, the amount and variety of available machine-readable data have grown, and new technologies have been introduced, such as high-density storage devices and graphic displays useful for information transformation and access. New approaches have also been considered for processing the stored data, based on the construction of knowledge bases representing the contents and structure of the information, and on the use of expert system techniques to control the user-system interactions. Provides a brief evaluation of the new information processing technologies and of the software methods proposed for information manipulation.
  14. Lochbaum, K.E.; Streeter, A.R.: Comparing and combining the effectiveness of latent semantic indexing and the ordinary vector space model for information retrieval (1989) 0.01
    0.007908144 = product of:
      0.03954072 = sum of:
        0.03954072 = weight(_text_:system in 3458) [ClassicSimilarity], result of:
          0.03954072 = score(doc=3458,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.29527056 = fieldWeight in 3458, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3458)
      0.2 = coord(1/5)
    
    Abstract
    A retrieval system was built to find individuals with appropriate expertise within a large research establishment on the basis of their authored documents. The expert-locating system uses a new method for automatic indexing and retrieval based on singular value decomposition, a matrix decomposition technique related to factor analysis. Organizational groups, represented by the documents they write, and the terms contained in these documents are fit simultaneously into a 100-dimensional "semantic" space. User queries are positioned in the semantic space, and the most similar groups are returned to the user. Here we compared the standard vector-space model with this new technique and found that combining the two methods improved performance over either alone. We also examined the effects of various experimental variables on the system's retrieval accuracy; in particular, the effects of term weighting functions in the semantic space construction and in query construction, suffix stripping, and using lexical units larger than a single word were studied.
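The SVD-based indexing the abstract describes can be sketched in a few lines: factor a term-document matrix, truncate to k dimensions, and fold queries into the same space. The toy matrix, the choice k = 2, and the fold-in step below are my assumptions for illustration, not the authors' data or parameters:

```python
import numpy as np

# Toy latent-semantic-indexing sketch: SVD of a term-document matrix,
# rank-k truncation, query fold-in, cosine ranking of documents.
terms = ["retrieval", "index", "semantic", "citation"]
A = np.array([[2., 0., 1.],   # term-document counts for 3 toy documents
              [1., 1., 0.],
              [0., 2., 1.],
              [0., 0., 2.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T        # documents in the k-dim space

query = np.zeros(len(terms))                  # query: "retrieval semantic"
query[terms.index("retrieval")] = 1.0
query[terms.index("semantic")] = 1.0
q_vec = (query @ U[:, :k]) / s[:k]            # fold-in: Sigma_k^-1 U_k^T q

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranking = sorted(range(A.shape[1]), key=lambda d: -cos(q_vec, doc_vecs[d]))
```

The truncation to k dimensions is what lets terms that never co-occur with the query still contribute to a document's similarity, which is the motivation for the 100-dimensional space in the paper.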
  15. Fidel, R.: Online searching styles : a case-study-based model of searching behavior (1984) 0.01
    0.0055919024 = product of:
      0.027959513 = sum of:
        0.027959513 = weight(_text_:system in 1659) [ClassicSimilarity], result of:
          0.027959513 = score(doc=1659,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 1659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=1659)
      0.2 = coord(1/5)
    
    Abstract
    The model of operationalist and conceptualist searching styles describes searching behavior of experienced online searchers. It is based on the systematic observation of five experienced online searchers doing their regular, job-related searches, and on the analysis of 10 to 13 searches conducted by each of them. Operationalist searchers aim at optimal strategies to achieve precise retrieval; they use a large range of system capabilities in their interaction. They preserve the specific meaning of the request, and the aim of their interactions is an answer set representing the request precisely. Conceptualist searchers analyze a request by seeking to fit it into a faceted structure. They first enter the facet that represents the most important aspect of the request. Their search is then centered on retrieving subsets from this primary set by introducing additional facets. In contrast to the operationalists, they are primarily concerned with recall. During the interaction they preserve the faceted structure, but may change the specific meaning of the request. Although not comprehensive, the model aids in recognizing special and individual characteristics of searching behavior which provide explanations of previous research and guidelines for further investigations into the search process
  16. Pao, M.L.; Worthen, D.B.: Retrieval effectiveness by semantic and citation searching (1989) 0.01
    0.0055919024 = product of:
      0.027959513 = sum of:
        0.027959513 = weight(_text_:system in 2288) [ClassicSimilarity], result of:
          0.027959513 = score(doc=2288,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 2288, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
      0.2 = coord(1/5)
    
    Abstract
    A pilot study on the relative retrieval effectiveness of semantic relevance (by terms) and pragmatic relevance (by citations) is reported. A single database has been constructed to provide access by both descriptors and cited references. For each question from a set of queries, two equivalent sets were retrieved. All retrieved items were evaluated by subject experts for relevance to their originating queries. We conclude that there are essentially two types of relevance at work resulting in two different sets of documents. Using both search methods to create a union set is likely to increase recall. Those few retrieved by the intersection of the two methods tend to result in higher precision. Suggestions are made to develop a front-end system to display the overlapping items for higher precision and to manipulate and rank the union sets retrieved by the two search modes for improved output
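The union/intersection effect described above is easy to demonstrate on toy sets; the two runs and the relevance judgements below are invented for illustration, not Pao and Worthen's data:

```python
def combine_runs(run_a, run_b, relevant):
    """Recall and precision for the union and intersection of two runs.
    The union tends to raise recall; the intersection tends to raise precision."""
    union, inter = set(run_a) | set(run_b), set(run_a) & set(run_b)
    rel = set(relevant)

    def recall(s):
        return len(s & rel) / len(rel)

    def precision(s):
        return len(s & rel) / len(s) if s else 0.0

    return {"union": (recall(union), precision(union)),
            "intersection": (recall(inter), precision(inter))}
```

For a term-based run {1, 2, 3, 4}, a citation-based run {3, 4, 5, 6}, and relevant set {2, 3, 4, 5}: the union reaches recall 1.0 at precision 4/6, while the intersection {3, 4} drops to recall 0.5 but reaches precision 1.0.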
  17. Sievert, M.E.; McKinin, E.J.; Slough, M.: ¬A comparison of indexing and full-text for the retrieval of clinical medical literature (1988) 0.01
    0.0055919024 = product of:
      0.027959513 = sum of:
        0.027959513 = weight(_text_:system in 3563) [ClassicSimilarity], result of:
          0.027959513 = score(doc=3563,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 3563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3563)
      0.2 = coord(1/5)
    
    Abstract
    The availability of two full-text databases of the clinical medical journal literature, MEDIS from Mead Data Central and CCML from BRS Information Technologies, provided an opportunity to compare the retrieval effectiveness of full text with that of the traditional indexed system, MEDLINE. 100 searches were solicited from an academic health sciences library, and the requests were searched on all 3 databases. The results were compared, and preliminary analysis suggests that the full-text databases retrieve a greater number of relevant citations and MEDLINE achieves higher precision.
  18. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    0.005376595 = product of:
      0.026882974 = sum of:
        0.026882974 = product of:
          0.08064892 = sum of:
            0.08064892 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.08064892 = score(doc=262,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    20.10.2000 12:22:23
  19. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.00
    0.0038404248 = product of:
      0.019202124 = sum of:
        0.019202124 = product of:
          0.057606373 = sum of:
            0.057606373 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.057606373 = score(doc=2417,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Pages
    S.22-25
  20. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.00
    0.0026882975 = product of:
      0.013441487 = sum of:
        0.013441487 = product of:
          0.04032446 = sum of:
            0.04032446 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.04032446 = score(doc=5001,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    14. 3.1996 13:22:21