Search (63 results, page 1 of 4)

  • theme_ss:"Retrievalstudien"
  1. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.09
    0.087716214 = product of:
      0.17543243 = sum of:
        0.17543243 = sum of:
          0.09183693 = weight(_text_:work in 3107) [ClassicSimilarity], result of:
            0.09183693 = score(doc=3107,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.40552467 = fieldWeight in 3107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.078125 = fieldNorm(doc=3107)
          0.0835955 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
            0.0835955 = score(doc=3107,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.38690117 = fieldWeight in 3107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3107)
      0.5 = coord(1/2)
    
    Date
    27. 2.1999 20:59:22
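    The explain tree above can be checked by hand: under Lucene's ClassicSimilarity, a clause scores tf × idf × fieldNorm × queryWeight, and the hit score is the clause sum scaled by coord(1/2). A minimal sketch reproducing the first clause of hit 1 from the values shown (the formulas are Lucene's classic TF-IDF; the variable names are ours):

```python
import math

# Inputs copied verbatim from the explain output for the
# "_text_:work" clause of doc 3107.
freq = 2.0
doc_freq, max_docs = 3060, 44218
query_norm = 0.061700378
field_norm = 0.078125

idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~3.6703904
tf = math.sqrt(freq)                             # ~1.4142135
query_weight = idf * query_norm                  # ~0.22646447
field_weight = tf * idf * field_norm             # ~0.40552467
work_clause = query_weight * field_weight        # ~0.09183693

# The "_text_:22" clause is computed the same way with its own idf;
# its value in the tree is 0.0835955. coord(1/2) halves the sum.
hit_score = 0.5 * (work_clause + 0.0835955)      # ~0.087716214
print(hit_score)
```

    The same arithmetic explains the other hits: e.g. hit 5 (Borlund 2016) owes its tf of 4.690416 to sqrt(22), since "work" occurs 22 times in that record.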
  2. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.07
    0.070172966 = product of:
      0.14034593 = sum of:
        0.14034593 = sum of:
          0.07346954 = weight(_text_:work in 744) [ClassicSimilarity], result of:
            0.07346954 = score(doc=744,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.32441974 = fieldWeight in 744, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0625 = fieldNorm(doc=744)
          0.0668764 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
            0.0668764 = score(doc=744,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.30952093 = fieldWeight in 744, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=744)
      0.5 = coord(1/2)
    
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for large full-text information searching. The conference deals with evaluation and comparison techniques developed since 1992 by participants from the research and industrial fields. The work of the conference is destined for designers (rather than users) of systems which access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC.
    Date
    1. 8.1996 22:01:00
  3. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.05
    0.053368133 = product of:
      0.106736265 = sum of:
        0.106736265 = sum of:
          0.064938515 = weight(_text_:work in 1184) [ClassicSimilarity], result of:
            0.064938515 = score(doc=1184,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.28674924 = fieldWeight in 1184, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
          0.04179775 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.04179775 = score(doc=1184,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.19345059 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
      0.5 = coord(1/2)
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach, and mine the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  4. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.04
    0.043858107 = product of:
      0.087716214 = sum of:
        0.087716214 = sum of:
          0.045918465 = weight(_text_:work in 2026) [ClassicSimilarity], result of:
            0.045918465 = score(doc=2026,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
          0.04179775 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
            0.04179775 = score(doc=2026,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.19345059 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
      0.5 = coord(1/2)
    
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project, where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed, the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  5. Borlund, P.: ¬A study of the use of simulated work task situations in interactive information retrieval evaluations : a meta-evaluation (2016) 0.03
    0.030458862 = product of:
      0.060917724 = sum of:
        0.060917724 = product of:
          0.12183545 = sum of:
            0.12183545 = weight(_text_:work in 2880) [ClassicSimilarity], result of:
              0.12183545 = score(doc=2880,freq=22.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.53798926 = fieldWeight in 2880, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2880)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to report a study of how the test instrument of a simulated work task situation is used in empirical evaluations of interactive information retrieval (IIR) and reported in the research literature. In particular, the author is interested to learn whether the requirements of how to employ simulated work task situations are followed, and whether these requirements call for further highlighting and refinement. Design/methodology/approach - In order to study how simulated work task situations are used, the research literature in question is identified. This is done partly via citation analysis by use of Web of Science®, and partly by systematic search of online repositories. On this basis, 67 individual publications were identified and they constitute the sample of analysis. Findings - The analysis reveals a need for clarification of how to use simulated work task situations in IIR evaluations, in particular with respect to the design and creation of realistic simulated work task situations. There is a lack of tailoring of the simulated work task situations to the test participants. Likewise, the requirement to include the test participants' personal information needs is neglected. Further, there is a need to add and emphasise a requirement to depict the used simulated work task situations when reporting the IIR studies. Research limitations/implications - Insight about the use of simulated work task situations has implications for the test design of IIR studies and hence for the knowledge base generated on the basis of such studies. Originality/value - Simulated work task situations are widely used in IIR studies, and the present study is the first comprehensive study of the intended and unintended use of this test instrument since its introduction in the late 1990s. The paper addresses the need to carefully design and tailor simulated work task situations to suit the test participants in order to obtain the intended authentic and realistic IIR under study.
  6. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.03
    0.029258423 = product of:
      0.058516845 = sum of:
        0.058516845 = product of:
          0.11703369 = sum of:
            0.11703369 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.11703369 = score(doc=262,freq=2.0), product of:
                0.21606421 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.061700378 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20.10.2000 12:22:23
  7. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.03
    0.029258423 = product of:
      0.058516845 = sum of:
        0.058516845 = product of:
          0.11703369 = sum of:
            0.11703369 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.11703369 = score(doc=6418,freq=2.0), product of:
                0.21606421 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.061700378 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Online. 22(1998) no.6, S.57-58
  8. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.03
    0.029258423 = product of:
      0.058516845 = sum of:
        0.058516845 = product of:
          0.11703369 = sum of:
            0.11703369 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.11703369 = score(doc=6438,freq=2.0), product of:
                0.21606421 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.061700378 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 8.2001 16:22:19
  9. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.03
    0.029258423 = product of:
      0.058516845 = sum of:
        0.058516845 = product of:
          0.11703369 = sum of:
            0.11703369 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.11703369 = score(doc=5089,freq=2.0), product of:
                0.21606421 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.061700378 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:43:54
  10. Borlund, P.: Experimental components for the evaluation of interactive information retrieval systems (2000) 0.03
    0.025669202 = product of:
      0.051338404 = sum of:
        0.051338404 = product of:
          0.10267681 = sum of:
            0.10267681 = weight(_text_:work in 4549) [ClassicSimilarity], result of:
              0.10267681 = score(doc=4549,freq=10.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.45339036 = fieldWeight in 4549, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4549)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a set of basic components which constitute the experimental setting intended for the evaluation of interactive information retrieval (IIR) systems, the aim of which is to facilitate evaluation of IIR systems in a way which is as close as possible to realistic IR processes. The experimental setting consists of three components: (1) the involvement of potential users as test persons; (2) the application of dynamic and individual information needs; and (3) the use of multidimensional and dynamic relevance judgements. Hidden under the information need component is the essential central sub-component, the simulated work task situation, the tool that triggers the (simulated) dynamic information need. This paper also reports on the empirical findings of the meta-evaluation of the application of this sub-component, the purpose of which is to discover whether the application of simulated work task situations to future evaluation of IIR systems can be recommended. Investigations are carried out to determine whether any search behavioural differences exist between test persons' treatment of their own real information needs versus simulated information needs. The hypothesis is that if no difference exists one can correctly substitute real information needs with simulated information needs through the application of simulated work task situations. The empirical results of the meta-evaluation provide positive evidence for the application of simulated work task situations to the evaluation of IIR systems. The results also indicate that tailoring work task situations to the group of test persons is important in motivating them. Furthermore, the results of the evaluation show that different versions of semantic openness of the simulated situations make no difference to the test persons' search treatment.
  11. Hansen, P.; Karlgren, J.: Effects of foreign language and task scenario on relevance assessment (2005) 0.02
    0.022959232 = product of:
      0.045918465 = sum of:
        0.045918465 = product of:
          0.09183693 = sum of:
            0.09183693 = weight(_text_:work in 4393) [ClassicSimilarity], result of:
              0.09183693 = score(doc=4393,freq=8.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.40552467 = fieldWeight in 4393, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4393)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - This paper aims to investigate how readers assess the relevance of retrieved documents in a foreign language they know well compared with their native language, and whether work-task scenario descriptions have an effect on the assessment process. Design/methodology/approach - Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first-language speakers, fluent in English, were given simulated information-seeking scenarios and presented with retrieval results in both languages. Twenty-eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two-level work-task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process. Findings - Relevance assessment takes longer in a foreign language than in the user's first language. The quality of assessments, by comparison with pre-assessed results, is inferior to that of assessments made in the users' first language. Work-task scenario descriptions had an effect on the assessment process, both by measured access time and by self-report by subjects. However, effects on results by traditional relevance ranking were detectable. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects. Originality/value - An extended two-level work-task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.
  12. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.02
    0.020898875 = product of:
      0.04179775 = sum of:
        0.04179775 = product of:
          0.0835955 = sum of:
            0.0835955 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.0835955 = score(doc=3103,freq=2.0), product of:
                0.21606421 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.061700378 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 2.1999 20:55:22
  13. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.02
    0.020898875 = product of:
      0.04179775 = sum of:
        0.04179775 = product of:
          0.0835955 = sum of:
            0.0835955 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.0835955 = score(doc=2417,freq=2.0), product of:
                0.21606421 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.061700378 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.22-25
  14. Buckley, C.; Allan, J.; Salton, G.: Automatic routing and retrieval using Smart : TREC-2 (1995) 0.02
    0.019481555 = product of:
      0.03896311 = sum of:
        0.03896311 = product of:
          0.07792622 = sum of:
            0.07792622 = weight(_text_:work in 5699) [ClassicSimilarity], result of:
              0.07792622 = score(doc=5699,freq=4.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.3440991 = fieldWeight in 5699, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5699)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. The work in the TREC-2 environment continues, performing both routing and ad hoc experiments. The ad hoc work extends investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document that matches the query. The performance of ad hoc runs is good, but it is clear that full advantage is not yet being taken of the available local information. The routing experiments use conventional relevance feedback approaches to routing, but with a much greater degree of query expansion than was previously done. The length of a query vector is increased by a factor of 5 to 10 by adding terms found in previously seen relevant documents. This approach improves effectiveness by 30-40% over the original query.
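    The expansion step described in this abstract, growing the query vector by adding terms from previously seen relevant documents, follows the general pattern of Rocchio-style relevance feedback. A hedged sketch of that pattern (illustrative only; Smart's actual term-weighting and selection criteria are not reproduced here, and the function name, weights, and sample data are our assumptions):

```python
from collections import Counter

def expand_query(query, relevant_docs, beta=0.5, max_new_terms=50):
    """Rocchio-style expansion sketch: keep the original query weights
    and add the most frequent terms from known-relevant documents,
    scaled by beta and averaged over the relevant set."""
    expanded = Counter(query)
    pooled = Counter()
    for doc in relevant_docs:          # doc = list of tokens
        pooled.update(doc)
    for term, freq in pooled.most_common(max_new_terms):
        expanded[term] += beta * freq / len(relevant_docs)
    return dict(expanded)

# Toy illustration: two relevant documents expand a two-term query.
original = {"routing": 1.0, "retrieval": 1.0}
rel_docs = [["routing", "smart", "feedback", "smart"],
            ["retrieval", "smart", "query"]]
print(expand_query(original, rel_docs))
```

    In the TREC-2 routing setting the relevant set is large enough that this kind of expansion multiplies the query length several times over, which is the effect the abstract reports.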
  15. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: ¬The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.02
    0.019481555 = product of:
      0.03896311 = sum of:
        0.03896311 = product of:
          0.07792622 = sum of:
            0.07792622 = weight(_text_:work in 2021) [ClassicSimilarity], result of:
              0.07792622 = score(doc=2021,freq=4.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.3440991 = fieldWeight in 2021, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2021)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  16. Munkelt, J.; Schaer, P.; Lepsky, K.: Towards an IR test collection for the German National Library (2018) 0.02
    0.019481555 = product of:
      0.03896311 = sum of:
        0.03896311 = product of:
          0.07792622 = sum of:
            0.07792622 = weight(_text_:work in 4311) [ClassicSimilarity], result of:
              0.07792622 = score(doc=4311,freq=4.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.3440991 = fieldWeight in 4311, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4311)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Automatic content indexing is one of the innovations that are increasingly changing the way libraries work. In theory, it promises a cataloguing service that would hardly be possible with humans in terms of speed, quantity and maybe quality. The German National Library (DNB) has also recognised this potential and is increasingly relying on the automatic indexing of its catalogue content. The DNB took a major step in this direction in 2017, which was announced in two papers. The announcement was rather restrained, but the content of the papers is all the more explosive for the library community: since September 2017, the DNB has discontinued the intellectual indexing of series B and H and has switched to an automatic process for these series. The subject indexing of online publications (series O) has been purely automatic since 2010; from September 2017, monographs and periodicals published outside the publishing industry and university publications are no longer indexed by people. This raises the question: what is the quality of the automatic indexing compared to the manual work, or, in other words, to what degree can automatic indexing replace people without a significant drop in quality?
  17. Harman, D.: Overview of the Second Text Retrieval Conference : TREC-2 (1995) 0.02
    0.018367385 = product of:
      0.03673477 = sum of:
        0.03673477 = product of:
          0.07346954 = sum of:
            0.07346954 = weight(_text_:work in 1915) [ClassicSimilarity], result of:
              0.07346954 = score(doc=1915,freq=2.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.32441974 = fieldWeight in 1915, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1915)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The conference was attended by about 150 people involved in 31 participating groups. Its goal was to bring research groups together to discuss their work on a new large test collection. There was a large variation of retrieval techniques reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. As results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques, and discuss how differences between the systems affected performance
  18. Gilchrist, A.: Research and consultancy (1998) 0.02
    0.018367385 = product of:
      0.03673477 = sum of:
        0.03673477 = product of:
          0.07346954 = sum of:
            0.07346954 = weight(_text_:work in 1394) [ClassicSimilarity], result of:
              0.07346954 = score(doc=1394,freq=2.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.32441974 = fieldWeight in 1394, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1394)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library and information work worldwide 1998. Ed.: M.B. Line et al
  19. Gillman, P.: Text retrieval (1998) 0.02
    0.018367385 = product of:
      0.03673477 = sum of:
        0.03673477 = product of:
          0.07346954 = sum of:
            0.07346954 = weight(_text_:work in 1502) [ClassicSimilarity], result of:
              0.07346954 = score(doc=1502,freq=2.0), product of:
                0.22646447 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.061700378 = queryNorm
                0.32441974 = fieldWeight in 1502, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1502)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Considers some of the papers given at the 1997 Text Retrieval conference (TR 97) in the context of the development of text retrieval software and research, from the Cranfield experiments of the early 1960s up to the recent TREC tests. Suggests that the primitive techniques currently employed for searching the WWW appear to ignore all the serious work done on information retrieval over the past 4 decades
  20. Rijsbergen, C.J. van: ¬A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.02
    0.0167191 = product of:
      0.0334382 = sum of:
        0.0334382 = product of:
          0.0668764 = sum of:
            0.0668764 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.0668764 = score(doc=5002,freq=2.0), product of:
                0.21606421 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.061700378 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    19. 3.1996 11:22:12

Languages

  • e 58
  • d 3
  • f 1

Types

  • a 58
  • s 3
  • m 2
  • el 1