Search (67 results, page 1 of 4)

  • theme_ss:"Retrievalstudien"
  1. Reichert, S.; Mayr, P.: Untersuchung von Relevanzeigenschaften in einem kontrollierten Eyetracking-Experiment (2012) 0.03
    0.027939359 = product of:
      0.111757435 = sum of:
        0.09600681 = weight(_text_:hochschule in 328) [ClassicSimilarity], result of:
          0.09600681 = score(doc=328,freq=2.0), product of:
            0.23689921 = queryWeight, product of:
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.03875087 = queryNorm
            0.40526438 = fieldWeight in 328, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.046875 = fieldNorm(doc=328)
        0.015750622 = product of:
          0.031501245 = sum of:
            0.031501245 = weight(_text_:22 in 328) [ClassicSimilarity], result of:
              0.031501245 = score(doc=328,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.23214069 = fieldWeight in 328, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=328)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
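    The indented breakdown under each hit is Lucene's explain() output for its classic TF-IDF similarity. As a reading aid, here is a minimal sketch that reproduces this first entry's score from the numbers shown; the formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm) are ClassicSimilarity's, while the function and variable names are our own:

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          term_idf = idf(doc_freq, max_docs)
          query_weight = term_idf * query_norm           # queryWeight
          field_weight = math.sqrt(freq) * term_idf * field_norm  # fieldWeight
          return query_weight * field_weight

      QUERY_NORM = 0.03875087  # queryNorm, shared by every clause above

      s_hochschule = term_score(2.0, 265, 44218, QUERY_NORM, 0.046875)
      s_22 = term_score(2.0, 3622, 44218, QUERY_NORM, 0.046875)

      # "22" matched 1 of 2 clauses in its subquery: coord(1/2) = 0.5;
      # the whole query matched 2 of its 8 clauses: coord(2/8) = 0.25.
      score = (s_hochschule + 0.5 * s_22) * 0.25
      print(score)  # ~0.027939359, the total shown for this entry

    The same recipe, with the respective docFreq, freq and fieldNorm values, reproduces every other score breakdown on this page.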
    
    Abstract
    This article describes an eyetracking experiment that investigated when, and on the basis of which information, relevance decisions are made during topical document assessment, and which factors influence those decisions. After a brief introduction, relevant studies are surveyed in which eyetracking was used as a method for studying interaction behaviour with result lists (information seeking behaviour). User behaviour in this setting is shaped above all by the type of task, the information displayed, and the rank of a result. Eyetracking studies also make it possible to sort users into distinct classes of assessment and reading types. Such information can be exploited as implicit feedback to personalise search and to improve the relevance of search results without any active effort on the user's part. In an exploratory eyetracking experiment with 12 students of the Hochschule Darmstadt, two typical assessment types are identified on the basis of total assessment time, number of fixations, number of metadata elements visited, and scan path length. The Abstract metadata field is reliably identified in the experiment as the most important document property for assigning relevance.
    Date
    22. 7.2012 19:25:54
  2. Belkin, N.J.: An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.02
    0.022252686 = product of:
      0.089010745 = sum of:
        0.07588523 = weight(_text_:cooperative in 2339) [ClassicSimilarity], result of:
          0.07588523 = score(doc=2339,freq=2.0), product of:
            0.23071818 = queryWeight, product of:
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.03875087 = queryNorm
            0.32890874 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.953884 = idf(docFreq=311, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.01312552 = product of:
          0.02625104 = sum of:
            0.02625104 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
              0.02625104 = score(doc=2339,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.19345059 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, why, and with what effect, but also what they would like to do, how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for the support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
  3. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.02
    0.020982286 = product of:
      0.083929144 = sum of:
        0.057678103 = weight(_text_:work in 3107) [ClassicSimilarity], result of:
          0.057678103 = score(doc=3107,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.40552467 = fieldWeight in 3107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.078125 = fieldNorm(doc=3107)
        0.02625104 = product of:
          0.05250208 = sum of:
            0.05250208 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.05250208 = score(doc=3107,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Date
    27. 2.1999 20:59:22
  4. Munkelt, J.: Erstellung einer DNB-Retrieval-Testkollektion (2018) 0.02
    0.019800395 = product of:
      0.15840316 = sum of:
        0.15840316 = weight(_text_:hochschule in 4310) [ClassicSimilarity], result of:
          0.15840316 = score(doc=4310,freq=4.0), product of:
            0.23689921 = queryWeight, product of:
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.03875087 = queryNorm
            0.6686521 = fieldWeight in 4310, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4310)
      0.125 = coord(1/8)
    
    Content
    Bachelor's thesis, Library Science, Faculty of Information and Communication Sciences, Technische Hochschule Köln
    Imprint
    Köln : Technische Hochschule, Fakultät für Informations- und Kommunikationswissenschaften
  5. TREC: experiment and evaluation in information retrieval (2005) 0.02
    0.017530624 = product of:
      0.070122495 = sum of:
        0.037541576 = weight(_text_:supported in 636) [ClassicSimilarity], result of:
          0.037541576 = score(doc=636,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.16358295 = fieldWeight in 636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.01953125 = fieldNorm(doc=636)
        0.032580916 = product of:
          0.06516183 = sum of:
            0.06516183 = weight(_text_:aufsatzsammlung in 636) [ClassicSimilarity], result of:
              0.06516183 = score(doc=636,freq=4.0), product of:
                0.25424787 = queryWeight, product of:
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.03875087 = queryNorm
                0.25629252 = fieldWeight in 636, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Footnote
    Rez. in: JASIST 58(2007) no.6, S.910-911 (J.L. Vicedo u. J. Gomez): "The Text REtrieval Conference (TREC) is a yearly workshop hosted by the U.S. government's National Institute of Standards and Technology (NIST) that fosters and supports research in information retrieval as well as speeding the transfer of technology between research labs and industry. Since 1992, TREC has provided the infrastructure necessary for large-scale evaluations of different text retrieval methodologies. TREC's impact has been very important and its success has been mainly supported by its continuous adaptation to emerging information retrieval needs. Not in vain, TREC has built evaluation benchmarks for more than 20 different retrieval problems such as Web retrieval, speech retrieval, or question-answering. The large and intense trajectory of annual TREC conferences has resulted in an immense bulk of documents reflecting the different evaluation and research efforts developed. This situation sometimes makes it difficult to observe clearly how research in information retrieval (IR) has evolved over the course of TREC. TREC: Experiment and Evaluation in Information Retrieval succeeds in organizing and condensing all this research into a manageable volume that describes TREC history and summarizes the main lessons learned. The book is organized into three parts. The first part is devoted to the description of TREC's origin and history, the test collections, and the evaluation methodology developed. The second part describes a selection of the major evaluation exercises (tracks), and the third part contains contributions from research groups that had a large and remarkable participation in TREC. Finally, Karen Spärck Jones, one of the main promoters of research in IR, closes the book with an epilogue that analyzes the impact of TREC on this research field.
    RSWK
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
    Subject
    Information Retrieval / Textverarbeitung / Aufsatzsammlung (BVB)
  6. Meyer, O.C.: Retrievalexperimente mit bibliothekarischen Daten : Historischer Überblick und aktueller Forschungsstand (2022) 0.02
    0.016971767 = product of:
      0.13577414 = sum of:
        0.13577414 = weight(_text_:hochschule in 655) [ClassicSimilarity], result of:
          0.13577414 = score(doc=655,freq=4.0), product of:
            0.23689921 = queryWeight, product of:
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.03875087 = queryNorm
            0.57313037 = fieldWeight in 655, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.046875 = fieldNorm(doc=655)
      0.125 = coord(1/8)
    
    Footnote
    Bachelor's thesis for the degree of Bachelor of Arts in Library Science, Faculty of Information Science, Technische Hochschule Köln.
    Imprint
    Köln : Technische Hochschule / Fakultät für Informationswissenschaft
  7. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.02
    0.016785828 = product of:
      0.06714331 = sum of:
        0.04614248 = weight(_text_:work in 744) [ClassicSimilarity], result of:
          0.04614248 = score(doc=744,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.32441974 = fieldWeight in 744, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0625 = fieldNorm(doc=744)
        0.021000832 = product of:
          0.042001665 = sum of:
            0.042001665 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
              0.042001665 = score(doc=744,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.30952093 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for large-scale full-text information searching. The conference deals with evaluation and comparison techniques developed since 1992 by participants from research and industry. The work of the conference is intended for designers (rather than users) of systems which access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC.
    Date
    1. 8.1996 22:01:00
  8. Keen, E.M.; Hartley, R.J.: Phrase processing in text retrieval (1994) 0.02
    0.01501663 = product of:
      0.12013304 = sum of:
        0.12013304 = weight(_text_:supported in 1316) [ClassicSimilarity], result of:
          0.12013304 = score(doc=1316,freq=2.0), product of:
            0.22949564 = queryWeight, product of:
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.03875087 = queryNorm
            0.52346545 = fieldWeight in 1316, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9223356 = idf(docFreq=321, maxDocs=44218)
              0.0625 = fieldNorm(doc=1316)
      0.125 = coord(1/8)
    
    Abstract
    After introducing types of records, queries and text processing options, the features needed in software for phrase processing are identified and different approaches in current text retrieval research in the Text Retrieval Conference (TREC) projects are enumerated. Then follow eight observations on issues in phrase searching relating both to practice and to research, giving the authors' selection of crucial and controversial issues, supported by 21 references
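    As a neutral illustration of the kind of phrase processing the paper enumerates (the standard positional-index approach, not the authors' own software): a phrase matches at start position p when its i-th word occurs in the same document at position p + i.

      from collections import defaultdict

      def build_positional_index(docs):
          # term -> {doc_id: positions of the term in that document}
          index = defaultdict(lambda: defaultdict(list))
          for doc_id, text in docs.items():
              for pos, term in enumerate(text.lower().split()):
                  index[term][doc_id].append(pos)
          return index

      def phrase_search(index, phrase):
          # match if every later word of the phrase sits i positions
          # after a start position of the first word
          terms = phrase.lower().split()
          hits = set()
          for doc_id, starts in index.get(terms[0], {}).items():
              for p in starts:
                  if all(p + i in index.get(t, {}).get(doc_id, [])
                         for i, t in enumerate(terms[1:], start=1)):
                      hits.add(doc_id)
                      break
          return hits

      docs = {1: "phrase processing in text retrieval",
              2: "retrieval of processing phrases"}
      index = build_positional_index(docs)
      print(phrase_search(index, "text retrieval"))  # {1}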
  9. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.01
    0.013477524 = product of:
      0.053910095 = sum of:
        0.040784575 = weight(_text_:work in 1184) [ClassicSimilarity], result of:
          0.040784575 = score(doc=1184,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28674924 = fieldWeight in 1184, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.01312552 = product of:
          0.02625104 = sum of:
            0.02625104 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.02625104 = score(doc=1184,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.19345059 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  10. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.01
    0.010491143 = product of:
      0.041964572 = sum of:
        0.028839052 = weight(_text_:work in 2026) [ClassicSimilarity], result of:
          0.028839052 = score(doc=2026,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.20276234 = fieldWeight in 2026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2026)
        0.01312552 = product of:
          0.02625104 = sum of:
            0.02625104 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
              0.02625104 = score(doc=2026,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.19345059 = fieldWeight in 2026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2026)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system, but it enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the result of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of results that can be acquired, this paper aims at stimulating researchers into considering user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  11. Günther, M.: Vermitteln Suchmaschinen vollständige Bilder aktueller Themen? : Untersuchung der Gewichtung inhaltlicher Aspekte von Suchmaschinenergebnissen in Deutschland und den USA (2016) 0.01
    0.01000071 = product of:
      0.08000568 = sum of:
        0.08000568 = weight(_text_:hochschule in 3068) [ClassicSimilarity], result of:
          0.08000568 = score(doc=3068,freq=2.0), product of:
            0.23689921 = queryWeight, product of:
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.03875087 = queryNorm
            0.33772033 = fieldWeight in 3068, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3068)
      0.125 = coord(1/8)
    
    Content
    Cf.: https://yis.univie.ac.at/index.php/yis/article/view/1355. This article is based on the following thesis: Günther, Markus: Welches Weltbild vermitteln Suchmaschinen? Untersuchung der Gewichtung inhaltlicher Aspekte von Google- und Bing-Ergebnissen in Deutschland und den USA zu aktuellen internationalen Themen. Master's thesis (M.A.), Hochschule für Angewandte Wissenschaften Hamburg, 2015. Full text: http://edoc.sub.uni-hamburg.de/haw/volltexte/2016/332.
  12. Borlund, P.: ¬A study of the use of simulated work task situations in interactive information retrieval evaluations : a meta-evaluation (2016) 0.01
    0.009564831 = product of:
      0.07651865 = sum of:
        0.07651865 = weight(_text_:work in 2880) [ClassicSimilarity], result of:
          0.07651865 = score(doc=2880,freq=22.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.53798926 = fieldWeight in 2880, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=2880)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - The purpose of this paper is to report a study of how the test instrument of a simulated work task situation is used in empirical evaluations of interactive information retrieval (IIR) and reported in the research literature. In particular, the author is interested to learn whether the requirements of how to employ simulated work task situations are followed, and whether these requirements call for further highlighting and refinement. Design/methodology/approach - In order to study how simulated work task situations are used, the research literature in question is identified. This is done partly via citation analysis by use of Web of Science®, and partly by systematic search of online repositories. On this basis, 67 individual publications were identified and they constitute the sample of analysis. Findings - The analysis reveals a need for clarification of how to use simulated work task situations in IIR evaluations, in particular with respect to the design and creation of realistic simulated work task situations. There is a lack of tailoring of the simulated work task situations to the test participants. Likewise, the requirement to include the test participants' personal information needs is neglected. Further, there is a need to add and emphasise a requirement to depict the simulated work task situations used when reporting the IIR studies. Research limitations/implications - Insight about the use of simulated work task situations has implications for the test design of IIR studies and hence the knowledge base generated on the basis of such studies. Originality/value - Simulated work task situations are widely used in IIR studies, and the present study is the first comprehensive study of the intended and unintended use of this test instrument since its introduction in the late 1990s. The paper addresses the need to carefully design and tailor simulated work task situations to suit the test participants in order to obtain the intended authentic and realistic IIR under study.
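    For readers unfamiliar with the instrument: a simulated work task situation is a short cover story that gives every test participant the same context, task and motivation while leaving the concrete search behaviour open. The structure below mirrors that role; the wording is invented for this note and is not taken from the paper.

      # Illustrative only: fields reflect the instrument's role (shared
      # cover story, indicative request, tailoring to the test group);
      # the concrete text is a hypothetical example.
      simulated_work_task_situation = {
          "situation": ("You are writing a term paper on the evaluation "
                        "of search systems and need background material."),
          "indicative_request": ("Find documents that help you decide which "
                                 "evaluation measures to apply in your study."),
          "tailoring": ("Scenario chosen to be realistic for the student "
                        "test group; each participant may interpret and "
                        "search it individually."),
      }
      for field, text in simulated_work_task_situation.items():
          print(f"{field}: {text}")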
  13. Borlund, P.: Experimental components for the evaluation of interactive information retrieval systems (2000) 0.01
    0.00806076 = product of:
      0.06448608 = sum of:
        0.06448608 = weight(_text_:work in 4549) [ClassicSimilarity], result of:
          0.06448608 = score(doc=4549,freq=10.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.45339036 = fieldWeight in 4549, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4549)
      0.125 = coord(1/8)
    
    Abstract
    This paper presents a set of basic components which constitute the experimental setting intended for the evaluation of interactive information retrieval (IIR) systems, the aim of which is to facilitate evaluation of IIR systems in a way that is as close as possible to realistic IR processes. The experimental setting consists of three components: (1) the involvement of potential users as test persons; (2) the application of dynamic and individual information needs; and (3) the use of multidimensional and dynamic relevance judgements. Hidden under the information need component is the essential central sub-component, the simulated work task situation, the tool that triggers the (simulated) dynamic information need. This paper also reports on the empirical findings of the meta-evaluation of the application of this sub-component, the purpose of which is to discover whether the application of simulated work task situations to future evaluation of IIR systems can be recommended. Investigations are carried out to determine whether any search behavioural differences exist between test persons' treatment of their own real information needs versus simulated information needs. The hypothesis is that if no difference exists, one can correctly substitute real information needs with simulated information needs through the application of simulated work task situations. The empirical results of the meta-evaluation provide positive evidence for the application of simulated work task situations to the evaluation of IIR systems. The results also indicate that tailoring work task situations to the group of test persons is important in motivating them. Furthermore, the results of the evaluation show that different versions of semantic openness of the simulated situations make no difference to the test persons' search treatment.
  14. Hansen, P.; Karlgren, J.: Effects of foreign language and task scenario on relevance assessment (2005) 0.01
    0.007209763 = product of:
      0.057678103 = sum of:
        0.057678103 = weight(_text_:work in 4393) [ClassicSimilarity], result of:
          0.057678103 = score(doc=4393,freq=8.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.40552467 = fieldWeight in 4393, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4393)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - This paper aims to investigate how readers assess the relevance of retrieved documents in a foreign language they know well, compared with their native language, and whether work-task scenario descriptions have an effect on the assessment process. Design/methodology/approach - Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first-language speakers, fluent in English, were given simulated information-seeking scenarios and presented with retrieval results in both languages. Twenty-eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two-level work-task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process. Findings - Relevance assessment takes longer in a foreign language than in the user's first language. The quality of assessments, by comparison with pre-assessed results, is inferior to those made in the users' first language. Work-task scenario descriptions had an effect on the assessment process, both as measured by access time and by subjects' self-reports; however, no effects on the results were detectable by traditional relevance ranking. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects. Originality/value - An extended two-level work-task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.
  15. Buckley, C.; Allan, J.; Salton, G.: Automatic routing and retrieval using Smart : TREC-2 (1995) 0.01
    0.0061176866 = product of:
      0.048941493 = sum of:
        0.048941493 = weight(_text_:work in 5699) [ClassicSimilarity], result of:
          0.048941493 = score(doc=5699,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.3440991 = fieldWeight in 5699, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=5699)
      0.125 = coord(1/8)
    
    Abstract
    The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. The work in the TREC-2 environment continues, performing both routing and ad hoc experiments. The ad hoc work extends investigations into combining global similarities, which give an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document that matches the query. The performance of the ad hoc runs is good, but it is clear that full advantage is not yet being taken of the available local information. The routing experiments use conventional relevance feedback approaches to routing, but with a much greater degree of query expansion than was previously done. The length of a query vector is increased by a factor of 5 to 10 by adding terms found in previously seen relevant documents. This approach improves effectiveness by 30-40% over the original query.
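    The routing runs described here are conventional relevance feedback with unusually aggressive expansion. As a hedged sketch of the general technique (a textbook Rocchio step without the non-relevant component, not the Smart system's actual code): the query vector is moved toward the centroid of the seen relevant documents and the strongest feedback terms are added to it.

      from collections import Counter

      def rocchio_expand(query, relevant_docs, alpha=1.0, beta=0.75, top_k=50):
          # Move the query toward the centroid of the seen relevant
          # documents; top_k controls how aggressively the vector grows
          # (the paper reports 5-10x growth in query length).
          expanded = Counter({t: alpha * w for t, w in query.items()})
          centroid = Counter()
          for doc in relevant_docs:
              for t, w in doc.items():
                  centroid[t] += w / len(relevant_docs)
          for t, w in centroid.most_common(top_k):
              expanded[t] += beta * w
          return dict(expanded)

      query = {"routing": 1.0, "retrieval": 1.0}
      relevant = [{"routing": 2.0, "feedback": 1.0},
                  {"retrieval": 1.0, "feedback": 3.0}]
      print(rocchio_expand(query, relevant, top_k=3))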
  16. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.01
    0.0061176866 = product of:
      0.048941493 = sum of:
        0.048941493 = weight(_text_:work in 2021) [ClassicSimilarity], result of:
          0.048941493 = score(doc=2021,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.3440991 = fieldWeight in 2021, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
      0.125 = coord(1/8)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  17. Munkelt, J.; Schaer, P.; Lepsky, K.: Towards an IR test collection for the German National Library (2018) 0.01
    0.0061176866 = product of:
      0.048941493 = sum of:
        0.048941493 = weight(_text_:work in 4311) [ClassicSimilarity], result of:
          0.048941493 = score(doc=4311,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.3440991 = fieldWeight in 4311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=4311)
      0.125 = coord(1/8)
    
    Abstract
    Automatic content indexing is one of the innovations that are increasingly changing the way libraries work. In theory, it promises a cataloguing service that would hardly be possible with humans in terms of speed, quantity and maybe quality. The German National Library (DNB) has also recognised this potential and is increasingly relying on the automatic indexing of their catalogue content. The DNB took a major step in this direction in 2017, which was announced in two papers. The announcement was rather restrained, but the content of the papers is all the more explosive for the library community: since September 2017, the DNB has discontinued the intellectual indexing of series B and H and has switched to an automatic process for these series. The subject indexing of online publications (series O) has been purely automatic since 2010; from September 2017, monographs and periodicals published outside the publishing industry and university publications are no longer indexed by people. This raises the question: what is the quality of the automatic indexing compared to the manual work, or, in other words, to what degree can automatic indexing replace people without a significant drop in quality?
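    The closing question is ultimately an evaluation exercise: score the automatically assigned terms against the intellectual indexing taken as a gold standard. A minimal sketch of that per-document comparison (our formulation; the paper builds a test collection rather than prescribing code):

      def indexing_quality(automatic, intellectual):
          # Precision/recall of automatically assigned subject terms,
          # with the human (intellectual) indexing as the gold standard.
          if not automatic or not intellectual:
              return 0.0, 0.0
          hits = len(set(automatic) & set(intellectual))
          return hits / len(automatic), hits / len(intellectual)

      auto = {"Information Retrieval", "Bibliothek", "Indexierung"}
      gold = {"Information Retrieval", "Inhaltserschließung", "Indexierung"}
      p, r = indexing_quality(auto, gold)
      print(f"precision={p:.2f}, recall={r:.2f}")  # 0.67 each here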
  18. Harman, D.: Overview of the Second Text Retrieval Conference : TREC-2 (1995) 0.01
    0.00576781 = product of:
      0.04614248 = sum of:
        0.04614248 = weight(_text_:work in 1915) [ClassicSimilarity], result of:
          0.04614248 = score(doc=1915,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.32441974 = fieldWeight in 1915, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0625 = fieldNorm(doc=1915)
      0.125 = coord(1/8)
    
    Abstract
    The conference was attended by about 150 people from 31 participating groups. Its goal was to bring research groups together to discuss their work on a new large test collection. A wide variety of retrieval techniques was reported on, including methods using automatic thesauri, sophisticated term weighting, natural language techniques, relevance feedback, and advanced pattern matching. As results had been run through a common evaluation package, groups were able to compare the effectiveness of different techniques, and to discuss how differences between the systems affected performance.
  19. Gilchrist, A.: Research and consultancy (1998) 0.01
    0.00576781 = product of:
      0.04614248 = sum of:
        0.04614248 = weight(_text_:work in 1394) [ClassicSimilarity], result of:
          0.04614248 = score(doc=1394,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.32441974 = fieldWeight in 1394, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0625 = fieldNorm(doc=1394)
      0.125 = coord(1/8)
    
    Source
    Library and information work worldwide 1998. Ed.: M.B. Line et al
  20. Gillman, P.: Text retrieval (1998) 0.01
    0.00576781 = product of:
      0.04614248 = sum of:
        0.04614248 = weight(_text_:work in 1502) [ClassicSimilarity], result of:
          0.04614248 = score(doc=1502,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.32441974 = fieldWeight in 1502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0625 = fieldNorm(doc=1502)
      0.125 = coord(1/8)
    
    Abstract
    Considers some of the papers given at the 1997 Text Retrieval conference (TR 97) in the context of the development of text retrieval software and research, from the Cranfield experiments of the early 1960s up to the recent TREC tests. Suggests that the primitive techniques currently employed for searching the WWW appear to ignore all the serious work done on information retrieval over the past 4 decades

Languages

  • e 60
  • d 5
  • f 1

Types

  • a 59
  • s 5
  • m 4
  • el 2
  • x 1