Search (396 results, page 1 of 20)

  • theme_ss:"Retrievalstudien"
  1. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.10
    0.09914476 = product of:
      0.19828951 = sum of:
        0.01834164 = weight(_text_:information in 1184) [ClassicSimilarity], result of:
          0.01834164 = score(doc=1184,freq=12.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.23754507 = fieldWeight in 1184, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.0764742 = weight(_text_:united in 1184) [ClassicSimilarity], result of:
          0.0764742 = score(doc=1184,freq=2.0), product of:
            0.24675635 = queryWeight, product of:
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.043984205 = queryNorm
            0.30991787 = fieldWeight in 1184, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.103473686 = sum of:
          0.07367742 = weight(_text_:states in 1184) [ClassicSimilarity], result of:
            0.07367742 = score(doc=1184,freq=2.0), product of:
              0.24220218 = queryWeight, product of:
                5.506572 = idf(docFreq=487, maxDocs=44218)
                0.043984205 = queryNorm
              0.304198 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.506572 = idf(docFreq=487, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
          0.029796265 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.029796265 = score(doc=1184,freq=2.0), product of:
              0.1540252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043984205 = queryNorm
              0.19345059 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
      0.5 = coord(3/6)
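Read bottom-up, each clause of the explain tree above is a plain tf-idf product. As a quick sanity check (a sketch only, using the constants printed in the tree), the first "information" clause and the entry's top-level score can be reproduced like this:

```python
import math

# Constants copied from the explain tree above (term "information", doc 1184).
freq = 12.0              # termFreq
idf = 1.7554779          # idf(docFreq=20772, maxDocs=44218)
query_norm = 0.043984205 # queryNorm
field_norm = 0.0390625   # fieldNorm(doc=1184)

tf = math.sqrt(freq)                  # ClassicSimilarity: tf = sqrt(freq) = 3.4641016
query_weight = idf * query_norm       # 0.0772133
field_weight = tf * idf * field_norm  # 0.23754507
score = query_weight * field_weight   # 0.01834164

# The entry's top-level score is the sum of its three clauses,
# scaled by coord(3/6) = 0.5 because 3 of 6 query clauses matched.
total = (score + 0.0764742 + 0.103473686) * 0.5  # 0.09914476
```

Multiplying the reconstructed clause by the other two clause scores and the coord factor recovers the 0.10 shown next to the hit.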
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach and mine, the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : University of Illinois at Urbana-Champaign, Graduate School of Library and Information Science
  2. Frei, H.P.; Meienberg, S.; Schäuble, P.: ¬The perils of interpreting recall and precision values (1991) 0.04
    0.03590906 = product of:
      0.10772718 = sum of:
        0.020751199 = weight(_text_:information in 786) [ClassicSimilarity], result of:
          0.020751199 = score(doc=786,freq=6.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.2687516 = fieldWeight in 786, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=786)
        0.08697598 = weight(_text_:networks in 786) [ClassicSimilarity], result of:
          0.08697598 = score(doc=786,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.4180698 = fieldWeight in 786, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.0625 = fieldNorm(doc=786)
      0.33333334 = coord(2/6)
    
    Abstract
    The traditional recall and precision measures are inappropriate when retrieval algorithms that retrieve information from Wide Area Networks (WANs) are evaluated. The principal reason is that information available in WANs is dynamic and its size is orders of magnitude greater than the size of the usual test collections. To overcome these problems, a new effectiveness measure has been developed, which we call the 'usefulness measure'.
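The fixed-collection assumption the authors criticize is easiest to see in the definitions themselves: recall divides by the full set of relevant documents, which is unknowable for a dynamic WAN-scale collection. A minimal sketch with hypothetical document IDs:

```python
# Hypothetical single-query example. "relevant" must enumerate every
# relevant document in the collection -- feasible for a static test
# collection, not for a dynamic Wide Area Network.
retrieved = {"d1", "d2", "d3", "d4"}
relevant = {"d2", "d4", "d7", "d9", "d11"}

hits = retrieved & relevant                 # relevant documents actually found
precision = len(hits) / len(retrieved)      # 2/4 = 0.5
recall = len(hits) / len(relevant)          # 2/5 = 0.4
```

Precision stays computable from the result list alone; it is the denominator of recall that breaks down at WAN scale, which is what motivates the authors' usefulness measure.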
    Source
    Information retrieval: GI/GMD-Workshop, Darmstadt, 23.-24.6.1991: Proceedings. Ed.: N. Fuhr
  3. Salampasis, M.; Tait, J.; Bloor, C.: Evaluation of information-seeking performance in hypermedia digital libraries (1998) 0.03
    0.033927426 = product of:
      0.10178228 = sum of:
        0.025678296 = weight(_text_:information in 3759) [ClassicSimilarity], result of:
          0.025678296 = score(doc=3759,freq=12.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.3325631 = fieldWeight in 3759, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3759)
        0.07610398 = weight(_text_:networks in 3759) [ClassicSimilarity], result of:
          0.07610398 = score(doc=3759,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.36581108 = fieldWeight in 3759, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3759)
      0.33333334 = coord(2/6)
    
    Abstract
    Discusses current information retrieval methods based on recall (R) and precision (P) for evaluating information retrieval and examines their suitability for evaluating the performance of hypermedia digital libraries. Proposes a new quantitative evaluation methodology, based on the structural analysis of hypermedia networks and the navigational and search-state patterns of information seekers. Although the proposed methodology retains some of the characteristics of R and P evaluation, it may be more suitable for measuring the performance of information-seeking environments where information seekers can utilize arbitrary mixtures of browsing and query-based searching strategies.
  4. MacCain, K.W.: Descriptor and citation retrieval in the medical behavioral sciences literature : retrieval overlaps and novelty distribution (1989) 0.02
    0.024739172 = product of:
      0.07421751 = sum of:
        0.0089855315 = weight(_text_:information in 2290) [ClassicSimilarity], result of:
          0.0089855315 = score(doc=2290,freq=2.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.116372846 = fieldWeight in 2290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
        0.06523198 = weight(_text_:networks in 2290) [ClassicSimilarity], result of:
          0.06523198 = score(doc=2290,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.31355235 = fieldWeight in 2290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
      0.33333334 = coord(2/6)
    
    Abstract
    Search results for nine topics in the medical behavioral sciences are reanalyzed to compare the overall performance of descriptor and citation search strategies in identifying relevant and novel documents. Overlap percentages between an aggregate "descriptor-based" database (MEDLINE, EXCERPTA MEDICA, PSYCINFO) and an aggregate "citation-based" database (SCISEARCH, SOCIAL SCISEARCH) ranged from 1% to 26%, with a median overlap of 8% relevant retrievals found using both search strategies. For seven topics in which both descriptor and citation strategies produced reasonably substantial retrievals, two patterns of search performance and novelty distribution were observed: (1) where descriptor and citation retrieval showed little overlap, novelty retrieval percentages differed by 17-23% between the two strategies; (2) topics with a relatively high percentage retrieval overlap showed little difference (1-4%) in descriptor and citation novelty retrieval percentages. These results reflect the varying partial congruence of two literature networks and represent two different types of subject relevance
    Source
    Journal of the American Society for Information Science. 40(1989), S.110-114
  5. Balog, K.; Schuth, A.; Dekker, P.; Tavakolpoursaleh, N.; Schaer, P.; Chuang, P.-Y.: Overview of the TREC 2016 Open Search track Academic Search Edition (2016) 0.02
    0.023640882 = product of:
      0.07092264 = sum of:
        0.011980709 = weight(_text_:information in 43) [ClassicSimilarity], result of:
          0.011980709 = score(doc=43,freq=2.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.1551638 = fieldWeight in 43, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=43)
        0.05894193 = product of:
          0.11788386 = sum of:
            0.11788386 = weight(_text_:states in 43) [ClassicSimilarity], result of:
              0.11788386 = score(doc=43,freq=2.0), product of:
                0.24220218 = queryWeight, product of:
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.043984205 = queryNorm
                0.48671678 = fieldWeight in 43, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.0625 = fieldNorm(doc=43)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    We present the TREC Open Search track, which represents a new evaluation paradigm for information retrieval. It offers the possibility for researchers to evaluate their approaches in a live setting, with real, unsuspecting users of an existing search engine. The first edition of the track focuses on the academic search domain and features the ad-hoc scientific literature search task. We report on experiments with three different academic search engines: CiteSeerX, SSOAR, and Microsoft Academic Search.
    Source
    TREC 2016, Gaithersburg, United States
  6. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.02089367 = product of:
      0.06268101 = sum of:
        0.020966241 = weight(_text_:information in 6438) [ClassicSimilarity], result of:
          0.020966241 = score(doc=6438,freq=2.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.27153665 = fieldWeight in 6438, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=6438)
        0.04171477 = product of:
          0.08342954 = sum of:
            0.08342954 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.08342954 = score(doc=6438,freq=2.0), product of:
                0.1540252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043984205 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    11. 8.2001 16:22:19
    Source
    Information processing and management. 36(2000) no.1, S.3-36
  7. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    0.02089367 = product of:
      0.06268101 = sum of:
        0.020966241 = weight(_text_:information in 5089) [ClassicSimilarity], result of:
          0.020966241 = score(doc=5089,freq=2.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.27153665 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=5089)
        0.04171477 = product of:
          0.08342954 = sum of:
            0.08342954 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.08342954 = score(doc=5089,freq=2.0), product of:
                0.1540252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043984205 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 7.2006 18:43:54
    Source
    Journal of the American Society for Information Science. 41(1990) no.4, S.272-281
  8. Ellis, D.: Progress and problems in information retrieval (1996) 0.02
    0.01992638 = product of:
      0.059779137 = sum of:
        0.035942126 = weight(_text_:information in 789) [ClassicSimilarity], result of:
          0.035942126 = score(doc=789,freq=18.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.46549135 = fieldWeight in 789, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=789)
        0.023837011 = product of:
          0.047674023 = sum of:
            0.047674023 = weight(_text_:22 in 789) [ClassicSimilarity], result of:
              0.047674023 = score(doc=789,freq=2.0), product of:
                0.1540252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043984205 = queryNorm
                0.30952093 = fieldWeight in 789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=789)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    An introduction to the principal generic approaches to information retrieval research with their associated concepts, models and systems, this text is designed to keep the information professional up to date with the major themes and developments that have preoccupied researchers in recent months in relation to textual and documentary retrieval systems.
    COMPASS
    Information retrieval
    Content
    First published 1991 as New horizons in information retrieval
    Date
    26. 7.2002 20:22:46
    Footnote
    Rez. in: Managing information 3(1996) no.10, S.49 (D. Bawden); Program 32(1998) no.2, S.190-192 (C. Revie)
    LCSH
    Information retrieval
    Subject
    Information retrieval
    Information retrieval
  9. Leiva-Mederos, A.; Senso, J.A.; Hidalgo-Delgado, Y.; Hipola, P.: Working framework of semantic interoperability for CRIS with heterogeneous data sources (2017) 0.02
    0.018960943 = product of:
      0.05688283 = sum of:
        0.01339484 = weight(_text_:information in 3706) [ClassicSimilarity], result of:
          0.01339484 = score(doc=3706,freq=10.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.1734784 = fieldWeight in 3706, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3706)
        0.04348799 = weight(_text_:networks in 3706) [ClassicSimilarity], result of:
          0.04348799 = score(doc=3706,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.2090349 = fieldWeight in 3706, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.03125 = fieldNorm(doc=3706)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose Information from Current Research Information Systems (CRIS) is stored in different formats, on platforms that are not compatible, or even in independent networks. It would be helpful to have a well-defined methodology that allows data processing to be managed from a single site, so as to take advantage of the capacity to link disperse data found in different systems, platforms, sources and/or formats. Based on functionalities and materials of the VLIR project, the purpose of this paper is to present a model that provides for interoperability by means of semantic alignment techniques and metadata crosswalks, and facilitates the fusion of information stored in diverse sources. Design/methodology/approach After reviewing the state of the art regarding the diverse mechanisms for achieving semantic interoperability, the paper analyzes the following: the specific coverage of the data sets (type of data, thematic coverage and geographic coverage); the technical specifications needed to retrieve and analyze a distribution of the data set (format, protocol, etc.); the conditions of re-utilization (copyright and licenses); and the "dimensions" included in the data set as well as the semantics of these dimensions (the syntax and the taxonomies of reference). The semantic interoperability framework here presented implements semantic alignment and metadata crosswalks to convert information from three different systems (ABCD, Moodle and DSpace) and integrate all the databases in a single RDF file. Findings The paper also includes an evaluation based on the comparison - by means of calculations of recall and precision - of the proposed model and identical consultations made on Open Archives Initiative and SQL, in order to estimate its efficiency. The results have been satisfactory, since semantic interoperability facilitates the exact retrieval of information. Originality/value The proposed model enhances management of the syntactic and semantic interoperability of the CRIS system designed. In a real setting of use it achieves very positive results.
  10. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.02
    0.018578421 = product of:
      0.05573526 = sum of:
        0.025938997 = weight(_text_:information in 2417) [ClassicSimilarity], result of:
          0.025938997 = score(doc=2417,freq=6.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.3359395 = fieldWeight in 2417, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=2417)
        0.029796265 = product of:
          0.05959253 = sum of:
            0.05959253 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.05959253 = score(doc=2417,freq=2.0), product of:
                0.1540252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043984205 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Pages
    S.22-25
    Series
    Proceedings of the American Society for Information Science; vol. 20
    Source
    Productivity in the information age : proceedings of the 46th ASIS annual meeting, 1983. Ed.: Raymond F Vondra
  11. Naderi, H.; Rumpler, B.: PERCIRS: a system to combine personalized and collaborative information retrieval (2010) 0.02
    0.01795453 = product of:
      0.05386359 = sum of:
        0.010375599 = weight(_text_:information in 3960) [ClassicSimilarity], result of:
          0.010375599 = score(doc=3960,freq=6.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.1343758 = fieldWeight in 3960, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3960)
        0.04348799 = weight(_text_:networks in 3960) [ClassicSimilarity], result of:
          0.04348799 = score(doc=3960,freq=2.0), product of:
            0.20804176 = queryWeight, product of:
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.043984205 = queryNorm
            0.2090349 = fieldWeight in 3960, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.72992 = idf(docFreq=1060, maxDocs=44218)
              0.03125 = fieldNorm(doc=3960)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose - This paper aims to discuss and test the claim that utilization of the personalization techniques can be valuable to improve the efficiency of collaborative information retrieval (CIR) systems. Design/methodology/approach - A new personalized CIR system, called PERCIRS, is presented based on the user profile similarity calculation (UPSC) formulas. To this aim, the paper proposes several UPSC formulas as well as two techniques to evaluate them. As the proposed CIR system is personalized, it could not be evaluated by Cranfield-like evaluation techniques (e.g. TREC). Hence, this paper proposes a new user-centric mechanism, which enables PERCIRS to be evaluated. This mechanism is generic and can be used to evaluate any other personalized IR system. Findings - The results show that among the proposed UPSC formulas in this paper, the (query-document)-graph based formula is the most effective. After integrating this formula into PERCIRS and comparing it with nine other IR systems, it is concluded that the results of the system are better than those of the other IR systems. In addition, the paper shows that the complexity of the system is less than the complexity of the other CIR systems. Research limitations/implications - This system asks the users to explicitly rank the returned documents, while explicit ranking is still not widespread enough. However, the paper argues that users should actively participate in the IR process in order to adequately satisfy their information needs. Originality/value - The value of this paper lies in combining collaborative and personalized IR, as well as introducing a mechanism which enables the personalized IR system to be evaluated. The proposed evaluation mechanism is very valuable for developers of personalized IR systems. The paper also introduces some significant user profile similarity calculation formulas, and two techniques to evaluate them. These formulas can also be used to find the user's community in the social networks.
  12. Jansen, B.J.; McNeese, M.D.: Evaluating the Effectiveness of and Patterns of Interactions With Automated Searching Assistance (2005) 0.02
    0.016602736 = product of:
      0.049808208 = sum of:
        0.0129694985 = weight(_text_:information in 4815) [ClassicSimilarity], result of:
          0.0129694985 = score(doc=4815,freq=6.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.16796975 = fieldWeight in 4815, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4815)
        0.03683871 = product of:
          0.07367742 = sum of:
            0.07367742 = weight(_text_:states in 4815) [ClassicSimilarity], result of:
              0.07367742 = score(doc=4815,freq=2.0), product of:
                0.24220218 = queryWeight, product of:
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.043984205 = queryNorm
                0.304198 = fieldWeight in 4815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.506572 = idf(docFreq=487, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4815)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    We report quantitative and qualitative results of an empirical evaluation to determine whether automated assistance improves searching performance and when searchers desire system intervention in the search process. Forty participants interacted with two fully functional information retrieval systems in a counterbalanced, within-participant study. The systems were identical in all respects except that one offered automated assistance and the other did not. The study used a client-side automated assistance application, an approximately 500,000-document Text REtrieval Conference content collection, and six topics. Results indicate that automated assistance can improve searching performance. However, the improvement is less dramatic than one might expect, with an approximately 20% performance increase, as measured by the number of user-selected relevant documents. Concerning patterns of interaction, we identified 1,879 occurrences of searcher-system interactions and classified them into 9 major categories and 27 subcategories or states. Results indicate that there are predictable patterns of times when searchers desire and implement searching assistance. The most common three-state pattern is Execute Query-View Results: With Scrolling-View Assistance. Searchers appear receptive to automated assistance; there is a 71% implementation rate. There does not seem to be a correlation between the use of assistance and previous searching performance. We discuss the implications for the design of information retrieval systems and future research directions.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.14, S.1480-1503
  13. ¬The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.01
    0.014862737 = product of:
      0.04458821 = sum of:
        0.020751199 = weight(_text_:information in 4049) [ClassicSimilarity], result of:
          0.020751199 = score(doc=4049,freq=6.0), product of:
            0.0772133 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.043984205 = queryNorm
            0.2687516 = fieldWeight in 4049, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4049)
        0.023837011 = product of:
          0.047674023 = sum of:
            0.047674023 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
              0.047674023 = score(doc=4049,freq=2.0), product of:
                0.1540252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043984205 = queryNorm
                0.30952093 = fieldWeight in 4049, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4049)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Proceedings of the 11th TREC conference held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was discussion of retrieval and related information-seeking tasks for large test collections. 93 research groups used different techniques for information retrieval from the same large database. This procedure makes it possible to compare the results. The tasks are: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
    Imprint
    Gaithersburg, MD : National Institute of Standards / Information Technology Laboratory
  14. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994)
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioural data that are compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question.
    Source
    Information processing and management. 30(1994) no.2, S.205-221
  15. Sanderson, M.: ¬The Reuters test collection (1996)
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  16. Lespinasse, K.: TREC: une conférence pour l'évaluation des systèmes de recherche d'information (1997)
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for large-scale full-text information searching. The conference deals with evaluation and comparison techniques developed since 1992 by participants from the research and industrial fields. The work of the conference is intended for designers (rather than users) of systems which access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC.
    Date
    1. 8.1996 22:01:00
  17. ¬The Fifth Text Retrieval Conference (TREC-5) (1997)
    Abstract
    Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was the discussion of retrieval techniques for large test collections. Different research groups applied different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, to information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
  18. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995)
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance and can thus determine which of two similarly performing systems is superior. For both single term and multiple term query retrieval models, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used to compute the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems.
    Date
    22. 2.1996 13:14:10
    Source
    Information processing and management. 31(1995) no.4, S.555-572
  19. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998)
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also what they would like to do, how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  20. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996)
    Abstract
    Briefly explains what constitutes the imaging process and how imaging can be used in information retrieval. Proposes an approach based on the concept of 'a term is a possible world', which enables the exploitation of term-to-term relationships that are estimated using an information-theoretic measure. Reports results of an evaluation exercise to compare the performance of imaging retrieval, using possible world semantics, with a benchmark, using the Cranfield 2 document collection to measure precision and recall. Initially, the performance of imaging retrieval was seen to be better, but statistical analysis showed that the difference was not significant. The problem with imaging retrieval lies in the amount of computation needed at run time, and a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon

Types

  • a 369
  • s 14
  • el 9
  • m 9
  • r 3
  • x 3
  • p 1