Search (27 results, page 1 of 2)

  • × language_ss:"e"
  • × theme_ss:"Retrievalstudien"
  • × year_i:[1980 TO 1990}
  1. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.00
    0.002782962 = product of:
      0.02782962 = sum of:
        0.008324308 = weight(_text_:in in 2417) [ClassicSimilarity], result of:
          0.008324308 = score(doc=2417,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.21253976 = fieldWeight in 2417, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2417)
        0.01950531 = product of:
          0.03901062 = sum of:
            0.03901062 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.03901062 = score(doc=2417,freq=2.0), product of:
                0.10082839 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02879306 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.1 = coord(2/20)
    
    Pages
    S.22-25
    Source
    Productivity in the information age : proceedings of the 46th ASIS annual meeting, 1983. Ed.: Raymond F Vondra
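The score breakdowns shown for each result are Lucene/Solr "explain" output for ClassicSimilarity (TF-IDF). As a sanity check, the per-term numbers can be reproduced with a few lines of Python; the formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), score = queryWeight × fieldWeight) are Lucene's ClassicSimilarity, and the constants below are copied from the explain tree of result 1.

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity (TF-IDF)."""
    tf = math.sqrt(freq)                             # tf(freq) in the explain tree
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # queryWeight
    field_weight = tf * idf * field_norm             # fieldWeight
    return query_weight * field_weight               # score = queryWeight * fieldWeight

# Constants copied from the explain tree of result 1 (doc 2417):
score_in = classic_term_score(4.0, 30841, 44218, 0.02879306, 0.078125)  # ~0.008324308
score_22 = classic_term_score(2.0, 3622, 44218, 0.02879306, 0.078125)   # ~0.03901062
# Document score: coord(2/20) * (score_in + coord(1/2) * score_22)
total = 0.1 * (score_in + 0.5 * score_22)                               # ~0.002782962
```

Only two of the twenty query terms match this document, hence the final coord(2/20) factor of 0.1.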
  2. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.00
    0.0025307748 = product of:
      0.025307748 = sum of:
        0.011654032 = weight(_text_:in in 5001) [ClassicSimilarity], result of:
          0.011654032 = score(doc=5001,freq=16.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.29755569 = fieldWeight in 5001, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5001)
        0.013653717 = product of:
          0.027307434 = sum of:
            0.027307434 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.027307434 = score(doc=5001,freq=2.0), product of:
                0.10082839 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02879306 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.5 = coord(1/2)
      0.1 = coord(2/20)
    
    Abstract
A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  3. Cleverdon, C.W.; Mills, J.: ¬The testing of index language devices (1985) 0.00
    0.0017914899 = product of:
      0.017914899 = sum of:
        0.009758773 = weight(_text_:des in 3643) [ClassicSimilarity], result of:
          0.009758773 = score(doc=3643,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.12238726 = fieldWeight in 3643, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=3643)
        0.0081561245 = weight(_text_:in in 3643) [ClassicSimilarity], result of:
          0.0081561245 = score(doc=3643,freq=24.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.2082456 = fieldWeight in 3643, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3643)
      0.1 = coord(2/20)
    
    Abstract
A landmark event in the twentieth-century development of subject analysis theory was a retrieval experiment, begun in 1957, by Cyril Cleverdon, Librarian of the Cranfield Institute of Technology. For this work he received the Professional Award of the Special Libraries Association in 1962 and the Award of Merit of the American Society for Information Science in 1970. The objective of the experiment, called Cranfield I, was to test the ability of four indexing systems-UDC, Facet, Uniterm, and Alphabetic-Subject Headings-to retrieve material responsive to questions addressed to a collection of documents. The experiment was ambitious in scale, consisting of eighteen thousand documents and twelve hundred questions. Prior to Cranfield I, the question of what constitutes good indexing was approached subjectively and reference was made to assumptions in the form of principles that should be observed or user needs that should be met. Cranfield I was the first large-scale effort to use objective criteria for determining the parameters of good indexing. Its creative impetus was the definition of user satisfaction in terms of precision and recall. Out of the experiment emerged the definition of recall as the percentage of relevant documents retrieved and precision as the percentage of retrieved documents that were relevant. Operationalizing the concept of user satisfaction, that is, making it measurable, meant that it could be studied empirically and manipulated as a variable in mathematical equations. Much has been made of the fact that the experimental methodology of Cranfield I was seriously flawed. This is unfortunate as it tends to diminish Cleverdon's contribution, which was not methodological-such contributions can be left to benchmark researchers-but rather creative: the introduction of a new paradigm, one that proved to be eminently productive.
The criticism leveled at the methodological shortcomings of Cranfield I underscored the need for more precise definitions of the variables involved in information retrieval. Particularly important was the need for a definition of the independent variable index language. Like the definitions of precision and recall, that of index language provided a new way of looking at the indexing process. It was a re-visioning that stimulated research activity and led not only to a better understanding of indexing but also to the design of better retrieval systems. Cranfield I was followed by Cranfield II. While Cranfield I was a wholesale comparison of four indexing "systems," Cranfield II aimed to single out various individual factors in index languages, called "indexing devices," and to measure how variations in these affected retrieval performance. The following selection represents the thinking at Cranfield midway between these two notable retrieval experiments.
    Footnote
    Nachdruck des Originalartikels mit Kommentierung durch die Herausgeber
    Original in: Aslib proceedings 15(1963) no.4, S.106-130.
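The Cranfield definitions quoted in the abstract above - recall as the percentage of relevant documents retrieved, precision as the percentage of retrieved documents that were relevant - reduce to two set operations. A minimal sketch (the document IDs are invented for illustration):

```python
def recall_precision(retrieved, relevant):
    """Cranfield-style recall and precision over two sets of document IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    recall = len(hits) / len(relevant)      # share of relevant documents retrieved
    precision = len(hits) / len(retrieved)  # share of retrieved documents that are relevant
    return recall, precision
```

For example, a search returning documents {2, 4, 9, 11} against relevant set {2, 4, 6, 8} has recall 0.5 and precision 0.5.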
  4. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.00
    0.0016697772 = product of:
      0.016697772 = sum of:
        0.0049945856 = weight(_text_:in in 3564) [ClassicSimilarity], result of:
          0.0049945856 = score(doc=3564,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.12752387 = fieldWeight in 3564, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3564)
        0.011703186 = product of:
          0.023406371 = sum of:
            0.023406371 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
              0.023406371 = score(doc=3564,freq=2.0), product of:
                0.10082839 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02879306 = queryNorm
                0.23214069 = fieldWeight in 3564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3564)
          0.5 = coord(1/2)
      0.1 = coord(2/20)
    
    Abstract
Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text data bases of clinical medical journal articles (CCML (Comprehensive Core Medical Library) from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text data bases with the text of the articles of these 204 citations revealed that two factors contributed to these failures. The searcher often constructed a restrictive strategy which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
  5. Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985) 0.00
    0.0015988116 = product of:
      0.015988115 = sum of:
        0.009758773 = weight(_text_:des in 3649) [ClassicSimilarity], result of:
          0.009758773 = score(doc=3649,freq=2.0), product of:
            0.079736836 = queryWeight, product of:
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.02879306 = queryNorm
            0.12238726 = fieldWeight in 3649, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.7693076 = idf(docFreq=7536, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
        0.006229343 = weight(_text_:in in 3649) [ClassicSimilarity], result of:
          0.006229343 = score(doc=3649,freq=14.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.15905021 = fieldWeight in 3649, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3649)
      0.1 = coord(2/20)
    
    Abstract
F. W. Lancaster is known for his writing on the state of the art in library/information science. His skill in identifying significant contributions and synthesizing literature in fields as diverse as online systems, vocabulary control, measurement and evaluation, and the paperless society has earned him esteem as a chronicler of information science. Equally deserving of repute is his own contribution to research in the discipline-his evaluation of the MEDLARS operating system. The MEDLARS study is notable for several reasons. It was the first large-scale application of retrieval experiment methodology to the evaluation of an actual operating system. As such, problems had to be faced that do not arise in laboratory-like conditions. One example is the problem of recall: how to determine, for a very large and dynamic database, the number of documents relevant to a given search request. By solving this problem and others attendant upon transferring an experimental methodology to the real world, Lancaster created a constructive procedure that could be used to improve the design and functioning of retrieval systems. The MEDLARS study is notable also for its contribution to our understanding of what constitutes a good index language and good indexing. The ideal retrieval system would be one that retrieves all and only relevant documents. The failures that occur in real operating systems, when a relevant document is not retrieved (a recall failure) or an irrelevant document is retrieved (a precision failure), can be analysed to assess the impact of various factors on the performance of the system. This is exactly what Lancaster did. He found both the MEDLARS indexing and the MeSH index language to be significant factors affecting retrieval performance. The indexing, primarily because it was insufficiently exhaustive, explained a large number of recall failures.
The index language, largely because of its insufficient specificity, accounted for a large number of precision failures. The purpose of identifying factors responsible for a system's failures is ultimately to improve the system. Unlike many user studies, the MEDLARS evaluation yielded recommendations that were eventually implemented: indexing exhaustivity was increased, and the MeSH index language was enriched with more specific terms and a larger entry vocabulary.
    Footnote
    Nachdruck des Originalartikels mit Kommentierung durch die Herausgeber
    Original in: Journal of the American Medical Association 207(1969) S.114-120.
  6. MacCain, K.W.; White, H.D.; Griffith, B.C.: Comparing retrieval performance in online data bases (1987) 0.00
    5.305727E-4 = product of:
      0.010611454 = sum of:
        0.010611454 = weight(_text_:in in 1167) [ClassicSimilarity], result of:
          0.010611454 = score(doc=1167,freq=26.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.27093613 = fieldWeight in 1167, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1167)
      0.05 = coord(1/20)
    
    Abstract
This study systematically compares retrievals on 11 topics across five well-known data bases, with MEDLINE's subject indexing as a focus. Each topic was posed by a researcher in the medical behavioral sciences. Each was searched in MEDLINE, EXCERPTA MEDICA, and PSYCHINFO, which permit descriptor searches, and in SCISEARCH and SOCIAL SCISEARCH, which express topics through cited references. Searches on each topic were made with (1) descriptors, (2) cited references, and (3) natural language (a capability common to all five data bases). The researchers who posed the topics judged the results. In every case, the set of records judged relevant was used to calculate recall, precision, and novelty ratios. Overall, MEDLINE had the highest recall percentage (37%), followed by SSCI (31%). All searches resulted in high precision ratios; novelty ratios of data bases and searches varied widely. Differences in record format among data bases affected the success of the natural language retrievals. Some 445 documents judged relevant were not retrieved from MEDLINE using its descriptors; they were found in MEDLINE through natural language or in an alternative data base. An analysis was performed to examine possible faults in MEDLINE subject indexing as the reason for their nonretrieval. However, no patterns of indexing failure could be seen in those documents subsequently found in MEDLINE through known-item searches. Documents not found in MEDLINE primarily represent failures of coverage - articles were from nonindexed or selectively indexed journals.
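The three ratios computed in this study can be sketched as set operations. The novelty formula below is an assumption (the share of relevant retrieved documents not previously known to the requester); the abstract does not spell out the exact definition, and the helper name is invented for illustration.

```python
def search_ratios(retrieved, relevant, previously_known):
    """Recall, precision, and novelty ratios for one search.
    Novelty here is assumed to be the fraction of relevant retrievals
    that were new to the requester (not the study's verbatim formula)."""
    retrieved, relevant = set(retrieved), set(relevant)
    rel_ret = retrieved & relevant
    recall = len(rel_ret) / len(relevant) if relevant else 0.0
    precision = len(rel_ret) / len(retrieved) if retrieved else 0.0
    novelty = len(rel_ret - set(previously_known)) / len(rel_ret) if rel_ret else 0.0
    return recall, precision, novelty
```

With retrieved {1, 2, 3, 4}, relevant {1, 2, 6, 7, 8}, and document 1 already known, this yields recall 0.4, precision 0.5, novelty 0.5.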
  7. Madelung, H.-O.: Subject searching in the social sciences : a comparison of PRECIS and KWIC indexes to newspaper articles (1982) 0.00
    5.0463446E-4 = product of:
      0.010092689 = sum of:
        0.010092689 = weight(_text_:in in 5517) [ClassicSimilarity], result of:
          0.010092689 = score(doc=5517,freq=12.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.2576908 = fieldWeight in 5517, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5517)
      0.05 = coord(1/20)
    
    Abstract
    89 articles from a small, Danish left-wing newspaper were indexed by PRECIS and KWIC. The articles cover a wide range of social science subjects. Controlled test searches in both indexes were carried out by 20 students of library science. The results obtained from this small-scale retrieval test were evaluated by a chi-square test. The PRECIS index led to more correct answers and fewer wrong answers than the KWIC index, i.e. it had both better recall and greater precision. Furthermore, the students were more confident in their judgement of the relevance of retrieved articles in the PRECIS index than in the KWIC index; and they generally favoured the PRECIS index in the subjective judgement they were asked to make
  8. Peritz, B.C.: On the informativeness of titles (1984) 0.00
    5.0463446E-4 = product of:
      0.010092689 = sum of:
        0.010092689 = weight(_text_:in in 2636) [ClassicSimilarity], result of:
          0.010092689 = score(doc=2636,freq=12.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.2576908 = fieldWeight in 2636, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2636)
      0.05 = coord(1/20)
    
    Abstract
The frequency of non-informative titles of journal articles was assessed for two fields: library and information science, and sociology. The percentage of non-informative titles was 21% in the former and 15% in the latter. In both fields, the non-informative titles were concentrated in only a few journals. The non-informative titles in library science were derived mainly from non-research journals. In sociology the reasons for non-informative titles may be more complex; some of these journals are highly cited. To improve retrieval efficiency, the adoption of a policy encouraging informative titles (as in journals of chemistry) is recommended.
  9. Belkin, N.J.: Ineffable concepts in information retrieval (1981) 0.00
    4.70894E-4 = product of:
      0.00941788 = sum of:
        0.00941788 = weight(_text_:in in 3148) [ClassicSimilarity], result of:
          0.00941788 = score(doc=3148,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.24046129 = fieldWeight in 3148, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=3148)
      0.05 = coord(1/20)
    
  10. Evans, L.: ¬An experiment : search strategy variations in SDI (1981) 0.00
    4.70894E-4 = product of:
      0.00941788 = sum of:
        0.00941788 = weight(_text_:in in 3158) [ClassicSimilarity], result of:
          0.00941788 = score(doc=3158,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.24046129 = fieldWeight in 3158, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=3158)
      0.05 = coord(1/20)
    
  11. Croft, W.B.; Thompson, R.H.: Support for browsing in an intelligent text retrieval system (1989) 0.00
    4.1203224E-4 = product of:
      0.008240645 = sum of:
        0.008240645 = weight(_text_:in in 5004) [ClassicSimilarity], result of:
          0.008240645 = score(doc=5004,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.21040362 = fieldWeight in 5004, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=5004)
      0.05 = coord(1/20)
    
  12. MacCain, K.W.: Descriptor and citation retrieval in the medical behavioral sciences literature : retrieval overlaps and novelty distribution (1989) 0.00
    3.9485664E-4 = product of:
      0.007897133 = sum of:
        0.007897133 = weight(_text_:in in 2290) [ClassicSimilarity], result of:
          0.007897133 = score(doc=2290,freq=10.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.20163295 = fieldWeight in 2290, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
      0.05 = coord(1/20)
    
    Abstract
Search results for nine topics in the medical behavioral sciences are reanalyzed to compare the overall performance of descriptor and citation search strategies in identifying relevant and novel documents. Overlap percentages between an aggregate "descriptor-based" database (MEDLINE, EXCERPTA MEDICA, PSYCINFO) and an aggregate "citation-based" database (SCISEARCH, SOCIAL SCISEARCH) ranged from 1% to 26%, with a median overlap of 8% relevant retrievals found using both search strategies. For seven topics in which both descriptor and citation strategies produced reasonably substantial retrievals, two patterns of search performance and novelty distribution were observed: (1) where descriptor and citation retrieval showed little overlap, novelty retrieval percentages differed by 17-23% between the two strategies; (2) topics with a relatively high percentage retrieval overlap showed little difference (1-4%) in descriptor and citation novelty retrieval percentages. These results reflect the varying partial congruence of two literature networks and represent two different types of subject relevance
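An overlap percentage of the kind reported above can be sketched as follows. The abstract does not state the exact denominator, so the sketch assumes overlap = documents found by both strategies as a share of all relevant retrievals from either strategy; treat that as an illustrative choice, not the study's definition.

```python
def overlap_pct(descriptor_hits, citation_hits):
    """Percentage of relevant retrievals found by BOTH strategies,
    relative to all relevant retrievals from either (assumed denominator)."""
    a, b = set(descriptor_hits), set(citation_hits)
    union = a | b
    return 100.0 * len(a & b) / len(union) if union else 0.0
```

For example, descriptor hits {1..5} and citation hits {4..8} share 2 of 8 distinct documents, an overlap of 25%.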
  13. Lochbaum, K.E.; Streeter, A.R.: Comparing and combining the effectiveness of latent semantic indexing and the ordinary vector space model for information retrieval (1989) 0.00
    3.9485664E-4 = product of:
      0.007897133 = sum of:
        0.007897133 = weight(_text_:in in 3458) [ClassicSimilarity], result of:
          0.007897133 = score(doc=3458,freq=10.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.20163295 = fieldWeight in 3458, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3458)
      0.05 = coord(1/20)
    
    Abstract
A retrieval system was built to find individuals with appropriate expertise within a large research establishment on the basis of their authored documents. The expert-locating system uses a new method for automatic indexing and retrieval based on singular value decomposition, a matrix decomposition technique related to factor analysis. Organizational groups, represented by the documents they write, and the terms contained in these documents, are fit simultaneously into a 100-dimensional "semantic" space. User queries are positioned in the semantic space, and the most similar groups are returned to the user. Here we compared the standard vector-space model with this new technique and found that combining the two methods improved performance over either alone. We also examined the effects of various experimental variables on the system's retrieval accuracy. In particular, the effects of term weighting functions in the semantic space construction and in query construction, suffix stripping, and using lexical units larger than a single word were studied.
  14. Kilgour, F.G.: Retrieval of information from computerized book texts (1989) 0.00
    3.531705E-4 = product of:
      0.00706341 = sum of:
        0.00706341 = weight(_text_:in in 2965) [ClassicSimilarity], result of:
          0.00706341 = score(doc=2965,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.18034597 = fieldWeight in 2965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=2965)
      0.05 = coord(1/20)
    
    Source
    Academic librarianship past, present, and future. A festschrift in honor of David Kaser: Ed. by J. Richardson u. J.Y. Davis
  15. Sullivan, M.V.; Borgman, C.L.: Bibliographic searching by end-users and intermediaries : front-end software vs native DIALOG commands (1988) 0.00
    3.531705E-4 = product of:
      0.00706341 = sum of:
        0.00706341 = weight(_text_:in in 3560) [ClassicSimilarity], result of:
          0.00706341 = score(doc=3560,freq=8.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.18034597 = fieldWeight in 3560, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3560)
      0.05 = coord(1/20)
    
    Abstract
40 doctoral students were trained to search INSPEC or ERIC on DIALOG using either the Sci-Mate Menu or native commands. In comparison with 20 control subjects for whom a free search was performed by an intermediary, the experimental subjects were no less satisfied with their retrievals, which were fewer in number but higher in precision than the retrievals produced by the intermediaries. Use of the menu interface did not affect quality of retrieval or user satisfaction, although subjects instructed to use native commands required less training time and interacted more with the data bases than did subjects trained on the Sci-Mate Menu. INSPEC subjects placed a higher monetary value on their searches than did ERIC subjects, indicated that they would make more frequent use of data bases in the future, and interacted more with the data base.
  16. Feng, S.: ¬A comparative study of indexing languages in single and multidatabase searching (1989) 0.00
    3.3297235E-4 = product of:
      0.006659447 = sum of:
        0.006659447 = weight(_text_:in in 2494) [ClassicSimilarity], result of:
          0.006659447 = score(doc=2494,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.17003182 = fieldWeight in 2494, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=2494)
      0.05 = coord(1/20)
    
    Abstract
    An experiment was conducted using 3 data bases in library and information science - Library and Information Science Abstracts (LISA), Information Science Abstracts and ERIC - to investigate some of the main factors affecting on-line searching: effectiveness of search vocabularies, combinations of fields searched, and overlaps among databases. Natural language, controlled vocabulary and a mixture of natural language and controlled terms were tested using different fields of bibliographic records. Also discusses a comparative evaluation of single and multi-data base searching, measuring the overlap among data bases and their influence upon on-line searching.
  17. Pao, M.L.: Retrieval differences between term and citation indexing (1989) 0.00
    3.3297235E-4 = product of:
      0.006659447 = sum of:
        0.006659447 = weight(_text_:in in 3566) [ClassicSimilarity], result of:
          0.006659447 = score(doc=3566,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.17003182 = fieldWeight in 3566, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3566)
      0.05 = coord(1/20)
    
    Abstract
A retrieval experiment was conducted to compare on-line searching using terms as opposed to citations. This is the first study in which a single data base was used to retrieve two equivalent sets for each query, one using terms found in the bibliographic record to achieve higher recall, and the other using documents. Reports on the use of a second citation searching strategy. Overall, by using both types of search keys, the total recall is increased.
  18. Fidel, R.: Online searching styles : a case-study-based model of searching behavior (1984) 0.00
    3.0585466E-4 = product of:
      0.006117093 = sum of:
        0.006117093 = weight(_text_:in in 1659) [ClassicSimilarity], result of:
          0.006117093 = score(doc=1659,freq=6.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.1561842 = fieldWeight in 1659, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1659)
      0.05 = coord(1/20)
    
    Abstract
    The model of operationalist and conceptualist searching styles describes searching behavior of experienced online searchers. It is based on the systematic observation of five experienced online searchers doing their regular, job-related searches, and on the analysis of 10 to 13 searches conducted by each of them. Operationalist searchers aim at optimal strategies to achieve precise retrieval; they use a large range of system capabilities in their interaction. They preserve the specific meaning of the request, and the aim of their interactions is an answer set representing the request precisely. Conceptualist searchers analyze a request by seeking to fit it into a faceted structure. They first enter the facet that represents the most important aspect of the request. Their search is then centered on retrieving subsets from this primary set by introducing additional facets. In contrast to the operationalists, they are primarily concerned with recall. During the interaction they preserve the faceted structure, but may change the specific meaning of the request. Although not comprehensive, the model aids in recognizing special and individual characteristics of searching behavior which provide explanations of previous research and guidelines for further investigations into the search process
  19. Gordon, M.; Kochen, M.: Recall-precision trade-off : a derivation (1989) 0.00
    3.0585466E-4 = product of:
      0.006117093 = sum of:
        0.006117093 = weight(_text_:in in 4160) [ClassicSimilarity], result of:
          0.006117093 = score(doc=4160,freq=6.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.1561842 = fieldWeight in 4160, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4160)
      0.05 = coord(1/20)
    
    Abstract
    The inexact nature of document retrieval gives rise to a fundamental recall-precision trade-off: generally, recall improves at the expense of precision, or precision improves at the expense of recall. This trade-off is borne out empirically and has qualitatively intuitive explanations. In this article, we explore this relationship mathematically to explain it further. We see that the recall-precision trade-off hinges on a decline in the proportion of relevant documents which are retrieved, successively, over time. Further, we examine several mathematical functions sharing this property and conclude that the equation that best models recall as a function of time is a logarithm of a quadratic function. Our conclusion meets the following requirements: the function we derive predicts non-decreasing recall over time until the last relevant document is retrieved (regardless of the density of relevant documents in the collection) without imposing any artificial restrictions on either what percentage of the collection would need to be examined to achieve perfect recall or what the level of precision would be at that time. Other models examined fail to meet one or more of these criteria.
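    The trade-off the abstract describes can be made concrete with a small sketch: as a ranked retrieval run proceeds, each retrieved document either raises recall (if relevant) or lowers precision (if not). The function below and the sample run are illustrative assumptions, not data from the article; the relevance judgments in `run` are hypothetical.

    ```python
    def recall_precision_curve(retrieved_relevance, total_relevant):
        """Compute (recall, precision) after each retrieved document.

        retrieved_relevance: list of booleans, True if the i-th retrieved
        document is relevant. total_relevant: number of relevant documents
        in the whole collection.
        """
        points = []
        hits = 0
        for i, rel in enumerate(retrieved_relevance, start=1):
            if rel:
                hits += 1
            # recall = relevant retrieved / all relevant;
            # precision = relevant retrieved / all retrieved so far
            points.append((hits / total_relevant, hits / i))
        return points

    # Hypothetical run: relevant documents grow scarcer as retrieval
    # proceeds, so recall rises while precision falls -- the trade-off
    # the abstract derives mathematically.
    run = [True, True, False, True, False, False, True, False, False, False]
    curve = recall_precision_curve(run, total_relevant=4)
    ```

    In this toy run, recall climbs from 0.25 to 1.0 while precision drops from 1.0 to 0.4, mirroring the decline in the proportion of relevant documents retrieved over time that the article identifies as the source of the trade-off.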
  20. Information retrieval experiment (1981) 0.00
    2.9135082E-4 = product of:
      0.005827016 = sum of:
        0.005827016 = weight(_text_:in in 2653) [ClassicSimilarity], result of:
          0.005827016 = score(doc=2653,freq=4.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.14877784 = fieldWeight in 2653, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2653)
      0.05 = coord(1/20)
    
    Content
    Enthält die Beiträge: ROBERTSON, S.E.: The methodology of information retrieval experiment; RIJSBERGEN, C.J. van: Retrieval effectiveness; BELKIN, N.: Ineffable concepts in information retrieval; TAGUE, J.M.: The pragmatics of information retrieval experimentation; LANCASTER, F.W.: Evaluation within the environment of an operating information service; BARRACLOUGH, E.D.: Opportunities for testing with online systems; KEEN, M.E.: Laboratory tests of manual systems; ODDY, R.N.: Laboratory tests: automatic systems; HEINE, M.D.: Simulation, and simulation experiments; COOPER, W.S.: Gedanken experimentation: an alternative to traditional system testing?; SPARCK JONES, K.: Actual tests - retrieval system tests; EVANS, L.: An experiment: search strategy variation in SDI profiles; SALTON, G.: The Smart environment for retrieval system evaluation - advantages and problem areas