Search (56 results, page 1 of 3)

  • theme_ss:"Retrievalstudien"
  1. MacCain, K.W.: Descriptor and citation retrieval in the medical behavioral sciences literature : retrieval overlaps and novelty distribution (1989) 0.06
    0.06437073 = product of:
      0.19311218 = sum of:
        0.19311218 = weight(_text_:citation in 2290) [ClassicSimilarity], result of:
          0.19311218 = score(doc=2290,freq=14.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.82245487 = fieldWeight in 2290, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
      0.33333334 = coord(1/3)
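
    The indented breakdowns shown with each result are Lucene explain() trees for ClassicSimilarity (TF-IDF) scoring. The following is a minimal sketch, in plain Python, of how the parts combine, using the constants from result no. 1; the function name is illustrative, not part of Lucene's API:

      import math

      def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
          # idf(docFreq=1104, maxDocs=44218) = 4.6892867 in the tree above
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          query_weight = idf * query_norm        # queryWeight = idf * queryNorm
          tf = math.sqrt(freq)                   # tf(freq=14.0) = 3.7416575
          field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight * coord

      # Result no. 1: freq=14, fieldNorm=0.046875, coord(1/3)
      print(classic_similarity_score(14.0, 1104, 44218, 0.050071523, 0.046875, 1 / 3))
      # -> 0.06437073..., matching the top-level score shown above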
    
    Abstract
    Search results for nine topics in the medical behavioral sciences are reanalyzed to compare the overall performance of descriptor and citation search strategies in identifying relevant and novel documents. Overlap percentages between an aggregate "descriptor-based" database (MEDLINE, EXCERPTA MEDICA, PSYCINFO) and an aggregate "citation-based" database (SCISEARCH, SOCIAL SCISEARCH) ranged from 1% to 26%, with a median overlap of 8% relevant retrievals found using both search strategies. For seven topics in which both descriptor and citation strategies produced reasonably substantial retrievals, two patterns of search performance and novelty distribution were observed: (1) where descriptor and citation retrieval showed little overlap, novelty retrieval percentages differed by 17-23% between the two strategies; (2) topics with a relatively high percentage of retrieval overlap showed little difference (1-4%) in descriptor and citation novelty retrieval percentages. These results reflect the varying partial congruence of two literature networks and represent two different types of subject relevance.
    Theme
    Citation indexing
  2. Pao, M.L.: Retrieval differences between term and citation indexing (1989) 0.05
    0.0458768 = product of:
      0.1376304 = sum of:
        0.1376304 = weight(_text_:citation in 3566) [ClassicSimilarity], result of:
          0.1376304 = score(doc=3566,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.58616084 = fieldWeight in 3566, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0625 = fieldNorm(doc=3566)
      0.33333334 = coord(1/3)
    
    Abstract
    A retrieval experiment was conducted to compare online searching using terms as opposed to citations. This is the first study in which a single database was used to retrieve two equivalent sets for each query, one using terms found in the bibliographic record and the other using cited documents. Reports on the use of a second citation searching strategy to achieve higher recall. Overall, using both types of search keys increased total recall.
  3. Pao, M.L.; Worthen, D.B.: Retrieval effectiveness by semantic and citation searching (1989) 0.03
    0.0344076 = product of:
      0.1032228 = sum of:
        0.1032228 = weight(_text_:citation in 2288) [ClassicSimilarity], result of:
          0.1032228 = score(doc=2288,freq=4.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.4396206 = fieldWeight in 2288, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
      0.33333334 = coord(1/3)
    
    Theme
    Citation indexing
  4. Madelung, H.-O.: Subject searching in the social sciences : a comparison of PRECIS and KWIC indexes to newspaper articles (1982) 0.03
    0.03018799 = product of:
      0.09056397 = sum of:
        0.09056397 = product of:
          0.18112794 = sum of:
            0.18112794 = weight(_text_:index in 5517) [ClassicSimilarity], result of:
              0.18112794 = score(doc=5517,freq=12.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.82782143 = fieldWeight in 5517, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5517)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    89 articles from a small, Danish left-wing newspaper were indexed by PRECIS and KWIC. The articles cover a wide range of social science subjects. Controlled test searches in both indexes were carried out by 20 students of library science. The results obtained from this small-scale retrieval test were evaluated by a chi-square test. The PRECIS index led to more correct answers and fewer wrong answers than the KWIC index, i.e. it had both better recall and greater precision. Furthermore, the students were more confident in their judgement of the relevance of retrieved articles in the PRECIS index than in the KWIC index; and they generally favoured the PRECIS index in the subjective judgement they were asked to make.
    Theme
    Preserved Context Index System (PRECIS)
  5. Aitchison, T.M.: Comparative evaluation of index languages : Part I, Design. Part II, Results (1969) 0.02
    0.024648389 = product of:
      0.073945165 = sum of:
        0.073945165 = product of:
          0.14789033 = sum of:
            0.14789033 = weight(_text_:index in 561) [ClassicSimilarity], result of:
              0.14789033 = score(doc=561,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.67591333 = fieldWeight in 561, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.109375 = fieldNorm(doc=561)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  6. Cleverdon, C.W.; Mills, J.: ¬The testing of index language devices (1963) 0.02
    0.02112719 = product of:
      0.06338157 = sum of:
        0.06338157 = product of:
          0.12676314 = sum of:
            0.12676314 = weight(_text_:index in 577) [ClassicSimilarity], result of:
              0.12676314 = score(doc=577,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.5793543 = fieldWeight in 577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.09375 = fieldNorm(doc=577)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  7. Thornley, C.V.; Johnson, A.C.; Smeaton, A.F.; Lee, H.: ¬The scholarly impact of TRECVid (2003-2009) (2011) 0.02
    0.020274874 = product of:
      0.06082462 = sum of:
        0.06082462 = weight(_text_:citation in 4363) [ClassicSimilarity], result of:
          0.06082462 = score(doc=4363,freq=2.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.25904894 = fieldWeight in 4363, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4363)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper reports on an investigation into the scholarly impact of the TRECVid (Text Retrieval and Evaluation Conference, Video Retrieval Evaluation) benchmarking conferences between 2003 and 2009. The contribution of TRECVid to research in video retrieval is assessed by analyzing publication content to show the development of techniques and approaches over time and by analyzing publication impact through publication numbers and citation analysis. Popular conference and journal venues for TRECVid publications are identified in terms of number of citations received. For a selection of participants at different career stages, the relative importance of TRECVid publications in terms of citations vis-à-vis their other publications is investigated. TRECVid, as an evaluation conference, provides data on which research teams 'scored' highly against the evaluation criteria, and the relationship between 'top scoring' teams at TRECVid and the 'top scoring' papers in terms of citations is analyzed. A strong relationship was found between 'success' at TRECVid and 'success' at citations, both for high scoring and low scoring teams. The implications of the study in terms of the value of TRECVid as a research activity, and the value of bibliometric analysis as a research evaluation tool, are discussed.
  8. Oberhauser, O.; Labner, J.: OPAC-Erweiterung durch automatische Indexierung : Empirische Untersuchung mit Daten aus dem Österreichischen Verbundkatalog (2002) 0.02
    0.018296685 = product of:
      0.05489005 = sum of:
        0.05489005 = product of:
          0.1097801 = sum of:
            0.1097801 = weight(_text_:index in 883) [ClassicSimilarity], result of:
              0.1097801 = score(doc=883,freq=6.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.50173557 = fieldWeight in 883, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.046875 = fieldNorm(doc=883)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Following on from the MILOS I and MILOS II projects of the 1990s, which examined the suitability of an automatic indexing procedure for library catalogues, an empirical study was carried out on a representative sample of title records from the Österreichischer Verbundkatalog (Austrian union catalogue). The aim was to test and assess whether this procedure could be deployed in the union's online catalogues. In keeping with the real-world situation of OPAC use, the study examined only the effect on the Basic Index ("Alle Felder", i.e. all fields) enriched with automatically generated terms. To this end, 100 queries were run first against the original Basic Index and then against the enriched Basic Index in an OPAC under Aleph 500. The tests yielded a gain in relevant hits with only slight losses in precision, a reduction in zero-hit results, and insights into the effect of existing verbal subject indexing.
  9. Cooper, M.D.; Chen, H.-M.: Predicting the relevance of a library catalog search (2001) 0.02
    0.016219899 = product of:
      0.048659697 = sum of:
        0.048659697 = weight(_text_:citation in 6519) [ClassicSimilarity], result of:
          0.048659697 = score(doc=6519,freq=2.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.20723915 = fieldWeight in 6519, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.03125 = fieldNorm(doc=6519)
      0.33333334 = coord(1/3)
    
    Abstract
    Relevance has been a difficult concept to define, let alone measure. In this paper, a simple operational definition of relevance is proposed for a Web-based library catalog: whether or not during a search session the user saves, prints, mails, or downloads a citation. If one of those actions is performed, the session is considered relevant to the user. An analysis is presented illustrating the advantages and disadvantages of this definition. With this definition and good transaction logging, it is possible to ascertain the relevance of a session. This was done for 905,970 sessions conducted with the University of California's Melvyl online catalog. Next, a methodology was developed to try to predict the relevance of a session. A number of variables were defined that characterize a session, none of which used any demographic information about the user. The values of the variables were computed for the sessions. Principal components analysis was used to extract a new set of variables out of the original set. A stratified random sampling technique was used to form ten strata such that each stratum of 90,570 sessions contained the same proportion of relevant to nonrelevant sessions. Logistic regression was used to ascertain the regression coefficients for nine of the ten strata. Then, the coefficients were used to predict the relevance of the sessions in the missing stratum. Overall, 17.85% of the sessions were determined to be relevant. The predicted number of relevant sessions for all ten strata was 11%, a 6.85% difference. The authors believe that the methodology can be further refined and the prediction improved. This methodology could also have significant application in improving user searching and also in predicting electronic commerce buying decisions without the use of personal demographic data.
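
    A hedged sketch of the kind of pipeline this abstract describes (per-session variables, principal components, a stratified ten-way split, logistic regression fitted on nine strata and applied to the tenth). The feature matrix, row count, and names below are invented placeholders, not the study's data:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import StratifiedKFold

      rng = np.random.default_rng(0)
      X = rng.normal(size=(9057, 12))   # 12 per-session variables (placeholder data)
      y = rng.random(9057) < 0.1785     # ~17.85% relevant sessions, as reported

      X_pc = PCA(n_components=5).fit_transform(X)   # new variables from the original set

      # Ten strata with equal relevant/nonrelevant proportions; fit on nine, score the tenth.
      skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      train_idx, test_idx = next(iter(skf.split(X_pc, y)))
      model = LogisticRegression().fit(X_pc[train_idx], y[train_idx])
      predicted_share = model.predict_proba(X_pc[test_idx])[:, 1].mean()
      print(f"predicted relevant share: {predicted_share:.2%}")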
  10. Borlund, P.: ¬A study of the use of simulated work task situations in interactive information retrieval evaluations : a meta-evaluation (2016) 0.02
    0.016219899 = product of:
      0.048659697 = sum of:
        0.048659697 = weight(_text_:citation in 2880) [ClassicSimilarity], result of:
          0.048659697 = score(doc=2880,freq=2.0), product of:
            0.23479973 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.050071523 = queryNorm
            0.20723915 = fieldWeight in 2880, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.03125 = fieldNorm(doc=2880)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to report a study of how the test instrument of a simulated work task situation is used in empirical evaluations of interactive information retrieval (IIR) and reported in the research literature. In particular, the author is interested to learn whether the requirements of how to employ simulated work task situations are followed, and whether these requirements call for further highlighting and refinement. Design/methodology/approach - In order to study how simulated work task situations are used, the research literature in question is identified. This is done partly via citation analysis by use of Web of Science®, and partly by systematic search of online repositories. On this basis, 67 individual publications were identified and they constitute the sample of analysis. Findings - The analysis reveals a need for clarifications of how to use simulated work task situations in IIR evaluations, in particular with respect to the design and creation of realistic simulated work task situations. There is a lack of tailoring of the simulated work task situations to the test participants. Likewise, the requirement to include the test participants' personal information needs is neglected. Further, there is a need to add and emphasise a requirement to depict the used simulated work task situations when reporting the IIR studies. Research limitations/implications - Insight about the use of simulated work task situations has implications for test design of IIR studies and hence the knowledge base generated on the basis of such studies. Originality/value - Simulated work task situations are widely used in IIR studies, and the present study is the first comprehensive study of the intended and unintended use of this test instrument since its introduction in the late 1990s. The paper addresses the need to carefully design and tailor simulated work task situations to suit the test participants in order to obtain the intended authentic and realistic IIR under study.
  11. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.015829332 = product of:
      0.047487997 = sum of:
        0.047487997 = product of:
          0.09497599 = sum of:
            0.09497599 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09497599 = score(doc=262,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  12. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.02
    0.015829332 = product of:
      0.047487997 = sum of:
        0.047487997 = product of:
          0.09497599 = sum of:
            0.09497599 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.09497599 = score(doc=6418,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Online. 22(1998) no.6, S.57-58
  13. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.015829332 = product of:
      0.047487997 = sum of:
        0.047487997 = product of:
          0.09497599 = sum of:
            0.09497599 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.09497599 = score(doc=6438,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    11. 8.2001 16:22:19
  14. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    0.015829332 = product of:
      0.047487997 = sum of:
        0.047487997 = product of:
          0.09497599 = sum of:
            0.09497599 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.09497599 = score(doc=5089,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 18:43:54
  15. Prasher, R.G.: Evaluation of indexing system (1989) 0.01
    0.014084793 = product of:
      0.042254377 = sum of:
        0.042254377 = product of:
          0.084508754 = sum of:
            0.084508754 = weight(_text_:index in 4998) [ClassicSimilarity], result of:
              0.084508754 = score(doc=4998,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.3862362 = fieldWeight in 4998, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4998)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes an information system and its various components: index file construction, query formulation, and searching. Discusses an indexing system and brings out the need for its evaluation. Explains the concept of the efficiency of indexing systems and discusses factors which control this efficiency. Gives criteria for evaluation. Discusses recall and precision ratios, as well as noise ratio, novelty ratio, exhaustivity, and specificity, and the impact of each on the efficiency of an indexing system. Also mentions various steps for evaluation.
  16. Cleverdon, C.W.; Mills, J.: ¬The testing of index language devices (1997) 0.01
    0.014084793 = product of:
      0.042254377 = sum of:
        0.042254377 = product of:
          0.084508754 = sum of:
            0.084508754 = weight(_text_:index in 576) [ClassicSimilarity], result of:
              0.084508754 = score(doc=576,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.3862362 = fieldWeight in 576, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0625 = fieldNorm(doc=576)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  17. Cleverdon, C.W.; Mills, J.: ¬The testing of index language devices (1985) 0.01
    0.014084793 = product of:
      0.042254377 = sum of:
        0.042254377 = product of:
          0.084508754 = sum of:
            0.084508754 = weight(_text_:index in 3643) [ClassicSimilarity], result of:
              0.084508754 = score(doc=3643,freq=8.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.3862362 = fieldWeight in 3643, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3643)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A landmark event in the twentieth-century development of subject analysis theory was a retrieval experiment, begun in 1957, by Cyril Cleverdon, Librarian of the Cranfield Institute of Technology. For this work he received the Professional Award of the Special Libraries Association in 1962 and the Award of Merit of the American Society for Information Science in 1970. The objective of the experiment, called Cranfield I, was to test the ability of four indexing systems (UDC, Facet, Uniterm, and Alphabetic Subject Headings) to retrieve material responsive to questions addressed to a collection of documents. The experiment was ambitious in scale, consisting of eighteen thousand documents and twelve hundred questions. Prior to Cranfield I, the question of what constitutes good indexing was approached subjectively and reference was made to assumptions in the form of principles that should be observed or user needs that should be met. Cranfield I was the first large-scale effort to use objective criteria for determining the parameters of good indexing. Its creative impetus was the definition of user satisfaction in terms of precision and recall. Out of the experiment emerged the definition of recall as the percentage of relevant documents retrieved and precision as the percentage of retrieved documents that were relevant. Operationalizing the concept of user satisfaction, that is, making it measurable, meant that it could be studied empirically and manipulated as a variable in mathematical equations. Much has been made of the fact that the experimental methodology of Cranfield I was seriously flawed. This is unfortunate as it tends to diminish Cleverdon's contribution, which was not methodological (such contributions can be left to benchmark researchers) but rather creative: the introduction of a new paradigm, one that proved to be eminently productive. The criticism leveled at the methodological shortcomings of Cranfield I underscored the need for more precise definitions of the variables involved in information retrieval. Particularly important was the need for a definition of the dependent variable, index language. Like the definitions of precision and recall, that of index language provided a new way of looking at the indexing process. It was a re-visioning that stimulated research activity and led not only to a better understanding of indexing but also to the design of better retrieval systems. Cranfield I was followed by Cranfield II. While Cranfield I was a wholesale comparison of four indexing "systems," Cranfield II aimed to single out various individual factors in index languages, called "indexing devices," and to measure how variations in these affected retrieval performance. The following selection represents the thinking at Cranfield midway between these two notable retrieval experiments.
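
    The recall and precision definitions that emerged from Cranfield I, as a tiny worked example in Python; the document IDs and counts are hypothetical:

      retrieved = {"d1", "d2", "d3", "d4"}         # documents the search returned
      relevant = {"d2", "d4", "d5", "d6", "d7"}    # documents judged relevant

      hits = retrieved & relevant
      recall = len(hits) / len(relevant)       # share of relevant documents retrieved: 2/5
      precision = len(hits) / len(retrieved)   # share of retrieved documents relevant: 2/4
      print(f"recall={recall:.0%} precision={precision:.0%}")  # recall=40% precision=50%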
  18. Lancaster, F.W.: Evaluating the performance of a large computerized information system (1985) 0.01
    0.014084793 = product of:
      0.042254377 = sum of:
        0.042254377 = product of:
          0.084508754 = sum of:
            0.084508754 = weight(_text_:index in 3649) [ClassicSimilarity], result of:
              0.084508754 = score(doc=3649,freq=8.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.3862362 = fieldWeight in 3649, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3649)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    F. W. Lancaster is known for his writing on the state of the art in library/information science. His skill in identifying significant contributions and synthesizing literature in fields as diverse as online systems, vocabulary control, measurement and evaluation, and the paperless society has earned him esteem as a chronicler of information science. Equally deserving of repute is his own contribution to research in the discipline: his evaluation of the MEDLARS operating system. The MEDLARS study is notable for several reasons. It was the first large-scale application of retrieval experiment methodology to the evaluation of an actual operating system. As such, problems had to be faced that do not arise in laboratory-like conditions. One example is the problem of recall: how to determine, for a very large and dynamic database, the number of documents relevant to a given search request. By solving this problem and others attendant upon transferring an experimental methodology to the real world, Lancaster created a constructive procedure that could be used to improve the design and functioning of retrieval systems. The MEDLARS study is notable also for its contribution to our understanding of what constitutes a good index language and good indexing. The ideal retrieval system would be one that retrieves all and only relevant documents. The failures that occur in real operating systems, when a relevant document is not retrieved (a recall failure) or an irrelevant document is retrieved (a precision failure), can be analysed to assess the impact of various factors on the performance of the system. This is exactly what Lancaster did. He found both the MEDLARS indexing and the MeSH index language to be significant factors affecting retrieval performance. The indexing, primarily because it was insufficiently exhaustive, explained a large number of recall failures. The index language, largely because of its insufficient specificity, accounted for a large number of precision failures. The purpose of identifying factors responsible for a system's failures is ultimately to improve the system. Unlike many user studies, the MEDLARS evaluation yielded recommendations that were eventually implemented. Indexing exhaustivity was increased and the MeSH index language was enriched with more specific terms and a larger entry vocabulary.
  19. Allen, B.: Logical reasoning and retrieval performance (1993) 0.01
    0.012324194 = product of:
      0.036972582 = sum of:
        0.036972582 = product of:
          0.073945165 = sum of:
            0.073945165 = weight(_text_:index in 5093) [ClassicSimilarity], result of:
              0.073945165 = score(doc=5093,freq=2.0), product of:
                0.21880072 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.050071523 = queryNorm
                0.33795667 = fieldWeight in 5093, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5093)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Tests the logical reasoning ability of end users of a CD-ROM index and assesses associations between different levels of this ability and aspects of retrieval performance. Users' selection of vocabulary and their selection of citations for further examination are both influenced by this ability. The design of information systems should address the effects of logical reasoning on search behaviour. People with lower levels of logical reasoning ability may experience difficulty using systems in which user selectivity plays an important role. Other systems, such as those with ranked output, may decrease the need for users to make selections and would be easier to use for people with lower levels of logical reasoning ability.
  20. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.01
    0.011306668 = product of:
      0.03392 = sum of:
        0.03392 = product of:
          0.06784 = sum of:
            0.06784 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.06784 = score(doc=3103,freq=2.0), product of:
                0.17534193 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050071523 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:55:22

Languages

  • e 50
  • d 4
  • f 1

Types

  • a 49
  • s 5
  • m 3
  • el 1
  • r 1