Search (353 results, page 1 of 18)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  1. Munkelt, J.: Erstellung einer DNB-Retrieval-Testkollektion (2018) 0.03
    0.031998467 = product of:
      0.07999617 = sum of:
        0.04195383 = weight(_text_:und in 4310) [ClassicSimilarity], result of:
          0.04195383 = score(doc=4310,freq=12.0), product of:
            0.09991972 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.045082662 = queryNorm
            0.41987535 = fieldWeight in 4310, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4310)
        0.03804234 = product of:
          0.07608468 = sum of:
            0.07608468 = weight(_text_:dokumentation in 4310) [ClassicSimilarity], result of:
              0.07608468 = score(doc=4310,freq=2.0), product of:
                0.21059684 = queryWeight, product of:
                  4.671349 = idf(docFreq=1124, maxDocs=44218)
                  0.045082662 = queryNorm
                0.36128122 = fieldWeight in 4310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.671349 = idf(docFreq=1124, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4310)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
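The indented breakdown above is Lucene ClassicSimilarity "explain" output. As a minimal sketch of how those numbers compose (a reimplementation of the classic TF-IDF formula the breakdown displays, with all constants copied from it): each term clause multiplies queryWeight (idf × queryNorm) by fieldWeight (√tf × idf × fieldNorm), and the document score then applies the coord factors.

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                       # tf(freq) = sqrt(termFreq)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * term_idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# The two clauses of result 1 (doc 4310), constants from the breakdown above:
s_und = term_score(12.0, 13101, 44218, 0.045082662, 0.0546875)  # ~0.04195
s_dok = term_score(2.0, 1124, 44218, 0.045082662, 0.0546875)    # ~0.07608
total = 0.4 * (s_und + 0.5 * s_dok)  # coord(2/5) outer, coord(1/2) inner -> ~0.032
```

Running this reproduces the explain tree's 0.04195383, 0.07608468, and 0.031998467 to within float rounding.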
    
    Abstract
Since autumn 2017, the Deutsche Nationalbibliothek has carried out the subject indexing of certain types of media works purely by machine. The quality of this process, which can significantly shape the organization of library workflows, is controversial among experts. Their positions are first outlined, before the need for a quality assessment of the process, and the foundations of such an assessment, are set out. A central component of any future assessment is a test collection; its creation and documentation are the focus of this thesis. In this context, the history of test collections and the requirements for successful ones are also covered. Finally, a retrieval test is carried out that demonstrates the usability of the test collection produced. Its results serve solely to verify that the collection works: an assessment of the quality of automatic subject indexing, whether in this specific case or in general, is not carried out and is not the aim of this thesis.
    Content
Bachelor's thesis, Library Science, Fakultät für Informations- und Kommunikationswissenschaften, Technische Hochschule Köln
    Imprint
    Köln : Technische Hochschule, Fakultät für Informations- und Kommunikationswissenschaften
  2. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.03
    
    Date
    11. 8.2001 16:22:19
    Source
    Information processing and management. 36(2000) no.1, S.3-36
  3. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.03
    
    Date
    22. 7.2006 18:43:54
    Source
    Journal of the American Society for Information Science. 41(1990) no.4, S.272-281
  4. Ellis, D.: Progress and problems in information retrieval (1996) 0.02
    
    Abstract
An introduction to the principal generic approaches to information retrieval research, with their associated concepts, models and systems, this text is designed to keep the information professional up to date with the major themes and developments that have preoccupied researchers in recent months in relation to textual and documentary retrieval systems.
    COMPASS
    Information retrieval
    Content
    First published 1991 as New horizons in information retrieval
    Date
    26. 7.2002 20:22:46
    Footnote
    Rez. in: Managing information 3(1996) no.10, S.49 (D. Bawden); Program 32(1998) no.2, S.190-192 (C. Revie)
    LCSH
    Information retrieval
    Subject
    Information retrieval
    Information retrieval
  5. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.02
    
    Pages
    S.22-25
    Series
    Proceedings of the American Society for Information Science; vol. 20
    Source
    Productivity in the information age : proceedings of the 46th ASIS annual meeting, 1983. Ed.: Raymond F Vondra
  6. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.02
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioral data that is compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question
    Source
    Information processing and management. 30(1994) no.2, S.205-221
  7. Sanderson, M.: ¬The Reuters test collection (1996) 0.02
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  8. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.02
    
    Abstract
Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information
  9. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.02
    
    Abstract
The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance, and can thus determine which of two similarly performing systems is superior. For both single-term and multiple-term query retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance-feedback-based retrieval and filtering. Simulations illustrate how the single-term model performs, and sample performance predictions are given for single-term and multiple-term problems
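Losee's analytic models predict measures such as the average search length without running an experiment. As a hedged sketch of the measure itself (the generic rank-based definition being predicted, not Losee's analytic derivation; the example relevance pattern is invented):

```python
def average_search_length(ranking_relevance):
    """Mean 1-based rank position of the relevant documents in a ranking.

    `ranking_relevance` is one boolean per ranked document,
    True where that document is relevant to the query.
    """
    positions = [i + 1 for i, rel in enumerate(ranking_relevance) if rel]
    return sum(positions) / len(positions) if positions else float("inf")

# Relevant documents at ranks 2 and 4 -> ASL = (2 + 4) / 2 = 3.0
asl = average_search_length([False, True, False, True])
```

An analytic approach estimates this quantity from database parameters (e.g. term frequencies and relevance probabilities) rather than from observed rankings.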
    Date
    22. 2.1996 13:14:10
    Source
    Information processing and management. 31(1995) no.4, S.555-572
  10. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.02
    
    Abstract
Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, how they attempt to accomplish it, and what difficulties they encounter. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  11. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.02
    
    Abstract
Explains briefly what constitutes the imaging process and how imaging can be used in information retrieval. Proposes an approach based on the concept 'a term is a possible world', which enables the exploitation of term-to-term relationships estimated using an information-theoretic measure. Reports results of an evaluation exercise comparing the performance of imaging retrieval, using possible-world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially, the performance of imaging retrieval was seen to be better, but statistical analysis showed that the difference was not significant. The problem with imaging retrieval lies in the amount of computation that needs to be performed at run time, and a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation
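Cranfield-style evaluation of this kind rests on set-based precision and recall. A minimal sketch of the two measures (the standard definitions, not the authors' experimental code; the document IDs are invented):

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall for a single query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant  # relevant documents actually retrieved
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# 4 retrieved, 2 of the 3 relevant documents found -> p = 0.5, r = 2/3
p, r = precision_recall(["d1", "d2", "d3", "d4"], ["d2", "d4", "d7"])
```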
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  12. Evans, J.E.: Some external and internal factors affecting users of interactive information systems (1996) 0.01
    
    Abstract
This contribution reports the results of continuing research in human-information system interactions. Following training and experience with an electronic information retrieval system, novice and experienced subject groups responded to questions ranking their value assessments of 7 attributes of information sources in relation to 15 factors describing the search process. In general, novice users were more heavily influenced by the process factors (negative influences) than by the positive attributes of information qualities. Experienced users, while still concerned with process factors, were more strongly influenced by the qualitative information attributes. The specific advantages and contributions of this research are several: higher dimensionality of measured factors and attributes (15 x 7); higher granularity of analysis, using a 7-value metric in a closed-end Likert scale; development of bi-directional, forced-choice influence vectors; and a larger sample size (N=186) than previously reported in the literature
    Source
    Herausforderungen an die Informationswirtschaft: Informationsverdichtung, Informationsbewertung und Datenvisualisierung. Proceedings des 5. Internationalen Symposiums für Informationswissenschaft (ISI'96), Humboldt-Universität zu Berlin, 17.-19. Oktober 1996. Hrsg.: J. Krause u.a
  13. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.01
    
    Abstract
Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text data bases of clinical medical journal articles (CCML (Comprehensive Core Medical Library) from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text data bases with the text of the articles of these 204 citations revealed that 2 reasons contributed to these failures: the searcher often constructed a restrictive strategy, which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
    Imprint
    Medford, New Jersey : Learned Information
    Source
    ASIS'89. Managing information and technology. Proceedings of the 52nd annual meeting of the American Society for Information Science, Washington D.C., 30.10.-2.11.1989. Vol.26. Ed.by J. Katzer and G.B. Newby
  14. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.01
    
    Abstract
    The test of retrieval effectiveness performed on IBM's STAIRS and reported in 'Communications of the ACM' 10 years ago, continues to be cited frequently in the information retrieval literature. The reasons for the study's continuing pertinence to today's research are discussed, and the political, legal, and commercial aspects of the study are presented. In addition, the method of calculating recall that was used in the STAIRS study is discussed in some detail, especially how it reduces the 5 major types of uncertainty in recall estimations. It is also suggested that this method of recall estimation may serve as the basis for recall estimations that might be truly comparable between systems
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.4-22
  15. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.01
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate as closely as possible actual searching conditions. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword in title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
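The retrieval rate Hodges measures — the share of relevant documents whose titles contain a query keyword — can be mimicked on a toy collection (an illustrative sketch only; the data and function names are invented):

```python
def title_search(query_terms, titles):
    """Return indices of titles containing at least one query term."""
    terms = {t.lower() for t in query_terms}
    return [i for i, title in enumerate(titles)
            if terms & set(title.lower().split())]

def title_recall(query_terms, titles, relevant):
    """Fraction of the relevant documents retrieved by title-word matching."""
    hits = set(title_search(query_terms, titles))
    return len(hits & set(relevant)) / len(relevant)

titles = [
    "Keyword indexing in libraries",
    "Subject access methods",
    "Retrieval effectiveness of keyword searches",
]
# All three documents are relevant, but only two titles contain "keyword".
print(title_recall(["keyword"], titles, {0, 1, 2}))
```

Document 1 illustrates the study's central finding: a relevant document whose title happens not to use the searcher's vocabulary is simply missed.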
  16. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.01
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project, where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed, the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by accumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluation as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Footnote
    Contribution to a thematic section: Evaluation of Interactive Information Retrieval Systems
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  17. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.01
    Abstract
    Undergraduates were tested to establish how they searched databases, the effectiveness of their searches and their satisfaction with them. The students' cognitive and learning styles were determined by the Lancaster Approaches to Studying Inventory and Riding's Cognitive Styles Analysis tests. There were significant differences in the searching behaviour and the effectiveness of the searches carried out by students with different learning and cognitive styles. Computer-assisted learning (CAL) packages were developed for three departments. The effectiveness of the packages was evaluated. Significant differences were found in the ways students with different learning styles used the packages. Based on the experience gained, guidelines for the teaching of information skills and the production and use of packages were prepared. About 2/3 of the searches had serious weaknesses, indicating a need for effective training. It appears that choice of searching strategies, search effectiveness and use of CAL packages are all affected by the cognitive and learning styles of the searcher. Therefore, students should be made aware of their own styles and, if appropriate, how to adopt more effective strategies.
    Source
    Journal of information science. 22(1996) no.2, S.79-92
  18. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.01
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
  19. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.01
    Abstract
    A study of the number of irrelevant documents retrieved in a co-ordinate indexing system that does not employ either roles or links. These tests were based on one hundred actual inquiries received in the library, and therefore an evaluation of recall efficiency is not included. Over half the enquiries produced no noise, but the mean percentage noise figure was approximately 33 per cent, based on an average retrieval figure of eighteen documents per search. Details of the size of the indexed collection, methods of indexing, and an analysis of the reasons for the retrieval of irrelevant documents are discussed, thereby providing information officers who are thinking of installing such a system with some evidence on which to base a decision as to whether or not to utilize these devices.
    Source
    Journal of documentation. 22(1966), S.203-209
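Blagden's noise figure is the share of retrieved documents that turn out to be irrelevant, averaged per search. A small sketch of that measure (hypothetical data; this is simply the complement of precision, expressed as a percentage):

```python
def noise_percentage(retrieved, relevant):
    """Per-search noise: percentage of retrieved documents not relevant."""
    if not retrieved:
        return 0.0
    irrelevant = [d for d in retrieved if d not in relevant]
    return 100.0 * len(irrelevant) / len(retrieved)

def mean_noise(searches):
    """Macro-average of per-search noise percentages.

    searches -- list of (retrieved_ids, relevant_id_set) pairs
    """
    values = [noise_percentage(r, rel) for r, rel in searches]
    return sum(values) / len(values)

# One search with 1 irrelevant hit out of 3, one with no noise at all.
searches = [([1, 2, 3], {1, 2}), ([4], {4})]
print(mean_noise(searches))
```

Averaging per search rather than pooling all retrieved documents matches the study's reporting, where over half the enquiries contributed a noise figure of zero.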
  20. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.01
    Date
    2.11.1996 13:08:22
    Source
    Library and information science research. 17(1995) no.4, S.347-385

Types

  • a 335
  • s 11
  • m 6
  • el 4
  • p 1
  • r 1