Search (174 results, page 1 of 9)

  • theme_ss:"Retrievalstudien"
  • year_i:[1990 TO 2000}
  1. Wilbur, W.J.: Human subjectivity and performance limits in document retrieval (1996) 0.02
    0.019949345 = product of:
      0.15959476 = sum of:
        0.14549044 = weight(_text_:515 in 6607) [ClassicSimilarity], result of:
          0.14549044 = score(doc=6607,freq=2.0), product of:
            0.22183119 = queryWeight, product of:
              7.4202213 = idf(docFreq=71, maxDocs=44218)
              0.029895496 = queryNorm
            0.6558611 = fieldWeight in 6607, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.4202213 = idf(docFreq=71, maxDocs=44218)
              0.0625 = fieldNorm(doc=6607)
        0.014104321 = weight(_text_:information in 6607) [ClassicSimilarity], result of:
          0.014104321 = score(doc=6607,freq=6.0), product of:
            0.052480884 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029895496 = queryNorm
            0.2687516 = fieldWeight in 6607, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6607)
      0.125 = coord(2/16)
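    The nested breakdown above is Lucene's "explain" output for its classic tf-idf similarity. A minimal sketch reproducing the arithmetic for this first hit, using the numbers from the explanation itself (the helper name `term_score` is illustrative, not a Lucene API):

    ```python
    import math

    def term_score(freq, idf, query_norm, field_norm):
        """One weight(_text_:...) node: queryWeight * fieldWeight."""
        query_weight = idf * query_norm                     # idf(...) * queryNorm
        field_weight = math.sqrt(freq) * idf * field_norm   # tf(freq) * idf * fieldNorm
        return query_weight * field_weight

    # Values copied from the explanation of doc 6607 above.
    QUERY_NORM = 0.029895496
    FIELD_NORM = 0.0625
    s_515  = term_score(freq=2.0, idf=7.4202213, query_norm=QUERY_NORM, field_norm=FIELD_NORM)
    s_info = term_score(freq=6.0, idf=1.7554779, query_norm=QUERY_NORM, field_norm=FIELD_NORM)

    # coord(2/16): only 2 of the 16 query terms matched this document.
    total = (2 / 16) * (s_515 + s_info)
    print(total)  # close to the displayed 0.019949345
    ```

    The rare term ("515", idf 7.42) dominates the common term ("information", idf 1.76) even though the latter occurs more often, which is the intended idf effect.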
    
    Abstract
    Test sets for the document retrieval task, composed of human relevance judgments, have been constructed that allow one to compare human performance directly with that of automatic methods and that place absolute limits on performance by any method. Current retrieval systems are found to generate only about half of the information allowed by these absolute limits. The data suggest that most of the improvement achievable within these limits can only be realized by incorporating specific subject information into retrieval systems.
    Source
    Information processing and management. 32(1996) no.5, S.515-527
  2. Hofmann, M.: TREC Konferenzbericht (7.10.93) (1995) 0.01
    
    Abstract
    The aim of the US TREC (Text REtrieval Conference) initiative is, on the one hand, to create standard collections of texts against which different retrieval approaches can be compared, and, on the other, to build collections of a size that comes close to realistic applications. This is intended to refute the old prejudice that newer IR methods are unsuitable (in terms of efficiency and effectiveness) for larger databases.
  3. Frei, H.P.; Meienberg, S.; Schäuble, P.: ¬The perils of interpreting recall and precision values (1991) 0.01
    
    Abstract
    The traditional recall and precision measures are inappropriate when retrieval algorithms that retrieve information from Wide Area Networks are evaluated. The principal reason is that information available in WANs is dynamic and its size is orders of magnitude greater than the size of the usual test collections. To overcome these problems, a new effectiveness measure has been developed, which we call the 'usefulness measure'.
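    For reference, recall and precision are simple set ratios; a minimal sketch (function name and sample data are illustrative, not from the paper):

    ```python
    def recall_precision(retrieved, relevant):
        """Recall = fraction of the relevant documents that were retrieved;
        precision = fraction of the retrieved documents that are relevant.
        In a dynamic WAN the full relevant set is unknowable, which is the
        core of the authors' objection to recall in that setting."""
        retrieved, relevant = set(retrieved), set(relevant)
        hits = retrieved & relevant
        recall = len(hits) / len(relevant) if relevant else 0.0
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        return recall, precision

    r, p = recall_precision(retrieved=[1, 2, 3, 4], relevant=[2, 4, 6])
    # r = 2/3, p = 2/4
    ```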
    Series
    Informatik-Fachberichte; 289
    Source
    Information retrieval: GI/GMD-Workshop, Darmstadt, 23.-24.6.1991: Proceedings. Ed.: N. Fuhr
  4. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    
    Date
    22. 7.2006 18:43:54
    Source
    Journal of the American Society for Information Science. 41(1990) no.4, S.272-281
  5. Ellis, D.: Progress and problems in information retrieval (1996) 0.01
    
    Abstract
    An introduction to the principal generic approaches to information retrieval research with their associated concepts, models and systems, this text is designed to keep the information professional up to date with the major themes and developments that have preoccupied researchers in recent months in relation to textual and documentary retrieval systems.
    COMPASS
    Information retrieval
    Content
    First published 1991 as New horizons in information retrieval
    Date
    26. 7.2002 20:22:46
    Footnote
    Reviews in: Managing information 3(1996) no.10, S.49 (D. Bawden); Program 32(1998) no.2, S.190-192 (C. Revie)
    LCSH
    Information retrieval
    Subject
    Information retrieval
    Information retrieval
  6. Chen, H.; Martinez, J.; Kirchhoff, A.; Ng, T.D.; Schatz, B.R.: Alleviating search uncertainty through concept associations : automatic indexing, co-occurrence analysis, and parallel computing (1998) 0.00
    
    Abstract
    In this article, we report research on an algorithmic approach to alleviating search uncertainty in a large information space. Grounded in object filtering, automatic indexing, and co-occurrence analysis, we performed a large-scale experiment using a parallel supercomputer (SGI Power Challenge) to analyze 400,000+ abstracts in an INSPEC computer engineering collection. Two system-generated thesauri, one based on a combined object filtering and automatic indexing method, and the other based on automatic indexing only, were compared with the human-generated INSPEC subject thesaurus. Our user evaluation revealed that the system-generated thesauri were better than the INSPEC thesaurus in 'concept recall', but in 'concept precision' the 3 thesauri were comparable. Our analysis also revealed that the terms suggested by the 3 thesauri were complementary and could be used to significantly increase 'variety' in search terms and thereby reduce search uncertainty.
    Source
    Journal of the American Society for Information Science. 49(1998) no.3, S.206-216
  7. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.00
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioural data that are compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question.
    Source
    Information processing and management. 30(1994) no.2, S.205-221
  8. Sanderson, M.: ¬The Reuters test collection (1996) 0.00
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  9. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.00
    
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for large full text information searching. The conference deals with evaluation and comparison techniques developed since 1992 by participants from the research and industrial fields. The work of the conference is intended for designers (rather than users) of systems which access full text information. Describes the context, objectives, organization, evaluation methods and limits of TREC.
    Date
    1. 8.1996 22:01:00
  10. ¬The Fifth Text Retrieval Conference (TREC-5) (1997) 0.00
    
    Abstract
    Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, 20-22 Nov 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups used different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, for information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
  11. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.00
    
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance and can thus determine which of 2 similarly performing systems is superior. For both single query term and multiple query term retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used in computing the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems.
    Date
    22. 2.1996 13:14:10
    Source
    Information processing and management. 31(1995) no.4, S.555-572
  12. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.00
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  13. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.00
    
    Abstract
    Explains briefly what constitutes the imaging process and how imaging can be used in information retrieval. Proposes an approach based on the concept 'a term is a possible world', which enables the exploitation of term-to-term relationships that are estimated using an information theoretic measure. Reports results of an evaluation exercise to compare the performance of imaging retrieval, using possible world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially, the performance of imaging retrieval was seen to be better, but statistical analysis proved that the difference was not significant. The problem with imaging retrieval lies in the amount of computation that needs to be performed at run time; a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  14. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.00
    
    Abstract
    The test of retrieval effectiveness performed on IBM's STAIRS and reported in 'Communications of the ACM' 10 years ago continues to be cited frequently in the information retrieval literature. The reasons for the study's continuing pertinence to today's research are discussed, and the political, legal, and commercial aspects of the study are presented. In addition, the method of calculating recall that was used in the STAIRS study is discussed in some detail, especially how it reduces the 5 major types of uncertainty in recall estimations. It is also suggested that this method of recall estimation may serve as the basis for recall estimations that might be truly comparable between systems.
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.4-22
  15. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.00
    
    Abstract
    Undergraduates were tested to establish how they searched databases, the effectiveness of their searches and their satisfaction with them. The students' cognitive and learning styles were determined by the Lancaster Approaches to Studying Inventory and Riding's Cognitive Styles Analysis tests. There were significant differences in the searching behaviour and the effectiveness of the searches carried out by students with different learning and cognitive styles. Computer-assisted learning (CAL) packages were developed for three departments, and their effectiveness was evaluated. Significant differences were found in the ways students with different learning styles used the packages. Based on the experience gained, guidelines for the teaching of information skills and the production and use of packages were prepared. About 2/3 of the searches had serious weaknesses, indicating a need for effective training. It appears that choice of searching strategies, search effectiveness and use of CAL packages are all affected by the cognitive and learning styles of the searcher. Therefore, students should be made aware of their own styles and, if appropriate, how to adopt more effective strategies.
    Source
    Journal of information science. 22(1996) no.2, S.79-92
  16. Knorz, G.: Testverfahren für intelligente Indexierungs- und Retrievalsysteme anhand deutsch-sprachiger sozialwissenschaftlicher Fachinformation (GIRT) : Bericht über einen Workshop am 12. September 1997 im IZ Sozialwissenschaften, Bonn (1998) 0.00
    
    Content
    A. The GIRT initiative 1. Presentations 2. Goals and perspectives of the GIRT project (Krause) 3. General results of the TREC studies, including TREC-5 (Womser-Hacker) 4. Results of the GIRT pretest (Kluck) 5. Multilingualism in TREC (Schäuble) B. Closing discussion and summary
    Source
    nfd Information - Wissenschaft und Praxis. 49(1998) H.2, S.111-116
  17. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.00
    
    Date
    2.11.1996 13:08:22
    Source
    Library and information science research. 17(1995) no.4, S.347-385
  18. Van der Walt, H.E.A.; Brakel, P.A. van: Method for the evaluation of the retrieval effectiveness of a CD-ROM bibliographic database (1991) 0.00
    
    Source
    African journal of library and information science. 59(1991) no.1, S.32-42
  19. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.00
    
    Abstract
    Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study in which 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations, between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the extent of the scope of the search concept. At level I, search terms were compared character by character. At level II, different search terms were accepted as the same search concept after a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV, different search terms were accepted as the same search concept under a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions
    Source
    Information processing and management. 31(1995) no.2, S.173-190
  20. Krause, J.; Womser-Hacker, C.: PADOK-II : Retrievaltests zur Bewertung von Volltextindexierungsvarianten für das deutsche Patentinformationssystem (1990) 0.00
    
    Abstract
    Presents the results of extensive retrieval tests of two variants of content indexing (free text and PASSAT) for the German patent information system, based on full texts. The tests were carried out from 1986 to 1989 by the Linguistic Information Science group at the Universität Regensburg in cooperation with the Deutsches Patentamt, the Fachinformationszentrum Karlsruhe, and several industrial partners. The report focuses on the general approach to evaluating the project's goals and on the presentation of the statistical evaluation results.
