Search (116 results, page 1 of 6)

  • theme_ss:"Retrievalstudien"
  1. Croft, W.B.; Thompson, R.H.: Support for browsing in an intelligent text retrieval system (1989) 0.02
    0.023351924 = product of:
      0.0934077 = sum of:
        0.0934077 = product of:
          0.1868154 = sum of:
            0.1868154 = weight(_text_:intelligent in 5004) [ClassicSimilarity], result of:
              0.1868154 = score(doc=5004,freq=2.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.871326 = fieldWeight in 5004, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5004)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
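    The explain traces shown with each result follow Lucene's ClassicSimilarity (TF-IDF) formula. Below is a minimal Python sketch reproducing the arithmetic of the first trace from the values it reports; the idf reconstruction as 1 + ln(maxDocs/(docFreq+1)) is an assumption about the underlying formula, not part of the trace itself.

```python
import math

# Values copied from the explain trace for doc 5004, term "intelligent".
doc_freq, max_docs = 429, 44218
query_norm = 0.038061365
freq = 2.0
field_norm = 0.109375

idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~5.633102
tf = math.sqrt(freq)                              # 1.4142135 = tf(freq=2.0)

query_weight = idf * query_norm                   # 0.21440355 = queryWeight
field_weight = tf * idf * field_norm              # 0.871326   = fieldWeight
term_score = query_weight * field_weight          # 0.1868154  = weight(_text_:intelligent)

# coord(1/2) and coord(1/4): only one of two inner and one of four outer
# query clauses matched, so the term score is scaled by 0.5 and then 0.25.
final_score = term_score * 0.5 * 0.25             # ~0.023351924, displayed as 0.02
print(final_score)
```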
    
  2. Shafique, M.; Chaudhry, A.S.: Intelligent agent-based online information retrieval (1995) 0.02
    0.022378497 = product of:
      0.08951399 = sum of:
        0.08951399 = product of:
          0.17902797 = sum of:
            0.17902797 = weight(_text_:intelligent in 3851) [ClassicSimilarity], result of:
              0.17902797 = score(doc=3851,freq=10.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.8350047 = fieldWeight in 3851, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3851)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Describes an intelligent agent-based information retrieval model. The relevance matrix used by the intelligent agent consists of rows and columns; rows represent the documents and columns the keywords. Entries represent predetermined weights of keywords in documents. The search/query vector is constructed by the intelligent agent through explicit interaction with the user, using an interactive query refinement technique. By manipulating the relevance matrix against the search vector, the agent uses the manipulated information to filter the document representations and retrieve the most relevant documents, consequently improving retrieval performance. Work is in progress on an experiment to compare the retrieval results from a conventional retrieval model and an intelligent agent-based retrieval model. A test document collection on artificial intelligence has been selected as a sample. Retrieval tests are being carried out on a selected group of researchers using the 2 retrieval systems. Results will be compared to assess the retrieval performance using precision and recall measures
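    Read as a vector-space operation, the matrix manipulation described in this abstract amounts to scoring each document row against the query vector and ranking by the result. A minimal sketch under that reading follows; the matrix, weights, and query values are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical relevance matrix: rows = documents, columns = keywords,
# entries = predetermined weights of keywords in documents.
relevance_matrix = np.array([
    [0.8, 0.1, 0.0],   # doc 0
    [0.2, 0.7, 0.4],   # doc 1
    [0.0, 0.3, 0.9],   # doc 2
])

# Query vector; in the paper it is built through interactive query
# refinement with the user, here it is simply given.
query = np.array([0.9, 0.0, 0.5])

# Score each document against the query vector and rank by descending score.
scores = relevance_matrix @ query
ranking = np.argsort(-scores)
print(list(zip(ranking.tolist(), scores[ranking].tolist())))
```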
  3. Cole, C.: Intelligent information retrieval : Part IV: Testing the timing of two information retrieval devices in a naturalistic setting (2001) 0.02
    0.020015934 = product of:
      0.08006374 = sum of:
        0.08006374 = product of:
          0.16012748 = sum of:
            0.16012748 = weight(_text_:intelligent in 365) [ClassicSimilarity], result of:
              0.16012748 = score(doc=365,freq=2.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.74685085 = fieldWeight in 365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.09375 = fieldNorm(doc=365)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  4. Chen, H.; Dhar, V.: Cognitive process as a basis for intelligent retrieval system design (1991) 0.02
    0.018871205 = product of:
      0.07548482 = sum of:
        0.07548482 = product of:
          0.15096964 = sum of:
            0.15096964 = weight(_text_:intelligent in 3845) [ClassicSimilarity], result of:
              0.15096964 = score(doc=3845,freq=4.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.70413774 = fieldWeight in 3845, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3845)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    2 studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of 5 computerised models of online document retrieval. These models were incorporated into the design of an 'intelligent' document-based retrieval system. Following a discussion of this system, the broader implications of the research for the design of information retrieval systems are considered
  5. Lespinasse, K.: TREC: une conférence pour l'évaluation des systèmes de recherche d'information (1997) 0.02
    0.017458716 = product of:
      0.034917433 = sum of:
        0.014290273 = product of:
          0.04287082 = sum of:
            0.04287082 = weight(_text_:k in 744) [ClassicSimilarity], result of:
              0.04287082 = score(doc=744,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.31552678 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.33333334 = coord(1/3)
        0.02062716 = product of:
          0.04125432 = sum of:
            0.04125432 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
              0.04125432 = score(doc=744,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.30952093 = fieldWeight in 744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=744)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    1. 8.1996 22:01:00
  6. Feldman, S.: Testing natural language : comparing DIALOG, TARGET, and DR-LINK (1996) 0.01
    0.013343956 = product of:
      0.053375825 = sum of:
        0.053375825 = product of:
          0.10675165 = sum of:
            0.10675165 = weight(_text_:intelligent in 7463) [ClassicSimilarity], result of:
              0.10675165 = score(doc=7463,freq=2.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.49790058 = fieldWeight in 7463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7463)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Compares online searching of DIALOG (a traditional Boolean system), TARGET (a relevance ranking system) and DR-LINK (an advanced intelligent text processing system), in order to establish the differing strengths of traditional and natural language processing search systems. Details the example search queries used in the comparison and how each of the systems performed. Considers the implications of the findings for professional information searchers and end users. Natural language processing systems are useful because they develop a wider understanding of queries that traditional systems may not
  7. Kutlu, M.; Elsayed, T.; Lease, M.: Intelligent topic selection for low-cost information retrieval evaluation : a new perspective on deep vs. shallow judging (2018) 0.01
    0.013343956 = product of:
      0.053375825 = sum of:
        0.053375825 = product of:
          0.10675165 = sum of:
            0.10675165 = weight(_text_:intelligent in 5092) [ClassicSimilarity], result of:
              0.10675165 = score(doc=5092,freq=8.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.49790058 = fieldWeight in 5092, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5092)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    While test collections provide the cornerstone for Cranfield-based evaluation of information retrieval (IR) systems, it has become practically infeasible to rely on traditional pooling techniques to construct test collections at the scale of today's massive document collections (e.g., ClueWeb12's 700M+ Webpages). This has motivated a flurry of studies proposing more cost-effective yet reliable IR evaluation methods. In this paper, we propose a new intelligent topic selection method which reduces the number of search topics (and thereby costly human relevance judgments) needed for reliable IR evaluation. To rigorously assess our method, we integrate previously disparate lines of research on intelligent topic selection and deep vs. shallow judging (i.e., whether it is more cost-effective to collect many relevance judgments for a few topics or a few judgments for many topics). While prior work on intelligent topic selection has never been evaluated against shallow judging baselines, prior work on deep vs. shallow judging has largely argued for shallow judging, but assuming random topic selection. We argue that for evaluating any topic selection method, ultimately one must ask whether it is actually useful to select topics, or should one simply perform shallow judging over many topics? In seeking a rigorous answer to this over-arching question, we conduct a comprehensive investigation over a set of relevant factors never previously studied together: 1) method of topic selection; 2) the effect of topic familiarity on human judging speed; and 3) how different topic generation processes (requiring varying human effort) impact (i) budget utilization and (ii) the resultant quality of judgments. Experiments on NIST TREC Robust 2003 and Robust 2004 test collections show that not only can we reliably evaluate IR systems with fewer topics, but also that: 1) when topics are intelligently selected, deep judging is often more cost-effective than shallow judging in evaluation reliability; and 2) topic familiarity and topic generation costs greatly impact the evaluation cost vs. reliability trade-off. Our findings challenge conventional wisdom in showing that deep judging is often preferable to shallow judging when topics are selected intelligently.
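    The deep-vs-shallow trade-off discussed in this abstract is, at bottom, a budgeting question: with a fixed amount of assessor effort, deep pools buy fewer topics and shallow pools buy more, and topic generation cost eats into both. A rough illustration of that arithmetic follows; all numbers are made up for the example, not taken from the paper.

```python
# Toy comparison of budget utilization for deep vs. shallow judging under a
# fixed human-effort budget (all figures are illustrative).
budget_minutes = 5000        # total assessor time available
topic_generation_cost = 30   # minutes to create one topic
judgment_cost = 0.5          # minutes per relevance judgment

def topics_affordable(pool_depth):
    """Number of topics that fit in the budget at a given judging depth."""
    per_topic = topic_generation_cost + pool_depth * judgment_cost
    return int(budget_minutes // per_topic)

for depth in (200, 100, 20):  # deep ... shallow pools
    n = topics_affordable(depth)
    print(f"pool depth {depth:3d}: {n:3d} topics, {n * depth} judgments collected")
```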
  8. Leininger, K.: Interindexer consistency in PsychINFO (2000) 0.01
    0.013094037 = product of:
      0.026188074 = sum of:
        0.010717705 = product of:
          0.032153115 = sum of:
            0.032153115 = weight(_text_:k in 2552) [ClassicSimilarity], result of:
              0.032153115 = score(doc=2552,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.23664509 = fieldWeight in 2552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2552)
          0.33333334 = coord(1/3)
        0.015470369 = product of:
          0.030940738 = sum of:
            0.030940738 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
              0.030940738 = score(doc=2552,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.23214069 = fieldWeight in 2552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2552)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    9. 2.1997 18:44:22
  9. Sparck Jones, K.: Retrieval system tests 1958-1978 (1981) 0.01
    0.010104749 = product of:
      0.040418997 = sum of:
        0.040418997 = product of:
          0.121256985 = sum of:
            0.121256985 = weight(_text_:k in 3156) [ClassicSimilarity], result of:
              0.121256985 = score(doc=3156,freq=4.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.8924445 = fieldWeight in 3156, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.125 = fieldNorm(doc=3156)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Information retrieval experiment. Ed.: K. Sparck Jones
  10. Sparck Jones, K.: The Cranfield tests (1981) 0.01
    0.010104749 = product of:
      0.040418997 = sum of:
        0.040418997 = product of:
          0.121256985 = sum of:
            0.121256985 = weight(_text_:k in 3157) [ClassicSimilarity], result of:
              0.121256985 = score(doc=3157,freq=4.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.8924445 = fieldWeight in 3157, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.125 = fieldNorm(doc=3157)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Information retrieval experiment. Ed.: K. Sparck Jones
  11. Guglielmo, E.J.; Rowe, N.C.: Natural-language retrieval of images based on descriptive captions (1996) 0.01
    0.010007967 = product of:
      0.04003187 = sum of:
        0.04003187 = product of:
          0.08006374 = sum of:
            0.08006374 = weight(_text_:intelligent in 6624) [ClassicSimilarity], result of:
              0.08006374 = score(doc=6624,freq=2.0), product of:
                0.21440355 = queryWeight, product of:
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.038061365 = queryNorm
                0.37342542 = fieldWeight in 6624, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.633102 = idf(docFreq=429, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6624)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Describes a prototype intelligent information retrieval system that uses natural-language understanding to efficiently locate captioned data. Multimedia data generally requires captions to explain its features and significance. Such descriptive captions often rely on long nominal compounds (strings of consecutive nouns) which create problems of ambiguous word sense. Presents a system in which captions and user queries are parsed and interpreted to produce a logical form, using a detailed theory of the meaning of nominal compounds. A fine-grain match can then compare the logical form of the query to the logical forms for each caption. To improve system efficiency, the system performs a coarse-grain match with index files, using nouns and verbs extracted from the query. Experiments with randomly selected queries and captions from an existing image library show an increase of 30% in precision and 50% in recall over the keyphrase approach currently used. Processing times have a median of 7 seconds as compared to 8 minutes for the existing system
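    The coarse-grain/fine-grain pipeline described here can be read as two-stage retrieval: a cheap keyword index first narrows the candidate captions, and only those survivors receive the expensive fine-grain comparison. A minimal sketch of that control flow follows; the captions, tokenization, and matching criterion are stand-ins, not the paper's parser or logical-form matcher.

```python
# Hypothetical caption store: id -> caption text.
captions = {
    1: "An F-14 aircraft landing on a carrier deck",
    2: "Test firing of a missile from a launch pad",
    3: "Aerial view of the naval base at sunset",
}

# Coarse-grain stage: an inverted index over terms extracted from captions
# (plain word tokens stand in for the extracted nouns and verbs).
index = {}
for doc_id, text in captions.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(doc_id)

def coarse_match(query_terms):
    """Candidate captions sharing at least one query term."""
    hits = set()
    for term in query_terms:
        hits |= index.get(term, set())
    return hits

def fine_match(query, caption):
    """Stand-in for the fine-grain comparison of logical forms."""
    overlap = set(query.lower().split()) & set(caption.lower().split())
    return len(overlap) >= 2   # toy acceptance criterion

query = "aircraft landing on carrier"
candidates = coarse_match(query.split())
results = [doc for doc in candidates if fine_match(query, captions[doc])]
print(results)   # only captions passing both stages
```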
  12. Fuhr, N.; Niewelt, B.: Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    0.009024382 = product of:
      0.036097527 = sum of:
        0.036097527 = product of:
          0.07219505 = sum of:
            0.07219505 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.07219505 = score(doc=262,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20.10.2000 12:22:23
  13. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    0.009024382 = product of:
      0.036097527 = sum of:
        0.036097527 = product of:
          0.07219505 = sum of:
            0.07219505 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.07219505 = score(doc=6418,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Online. 22(1998) no.6, S.57-58
  14. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    0.009024382 = product of:
      0.036097527 = sum of:
        0.036097527 = product of:
          0.07219505 = sum of:
            0.07219505 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.07219505 = score(doc=6438,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    11. 8.2001 16:22:19
  15. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    0.009024382 = product of:
      0.036097527 = sum of:
        0.036097527 = product of:
          0.07219505 = sum of:
            0.07219505 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.07219505 = score(doc=5089,freq=2.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:43:54
  16. Kuriyama, K.; Kando, N.; Nozue, T.; Eguchi, K.: Pooling for a large-scale test collection : an analysis of the search results from the First NTCIR Workshop (2002) 0.01
    0.007578561 = product of:
      0.030314244 = sum of:
        0.030314244 = product of:
          0.09094273 = sum of:
            0.09094273 = weight(_text_:k in 3830) [ClassicSimilarity], result of:
              0.09094273 = score(doc=3830,freq=4.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.66933334 = fieldWeight in 3830, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3830)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
  17. Robertson, S.E.: The methodology of information retrieval experiment (1981) 0.01
    0.0071451366 = product of:
      0.028580546 = sum of:
        0.028580546 = product of:
          0.08574164 = sum of:
            0.08574164 = weight(_text_:k in 3146) [ClassicSimilarity], result of:
              0.08574164 = score(doc=3146,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.63105357 = fieldWeight in 3146, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.125 = fieldNorm(doc=3146)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Information retrieval experiment. Ed.: K. Sparck Jones
  18. Rijsbergen, C.J. van: Retrieval effectiveness (1981) 0.01
    0.0071451366 = product of:
      0.028580546 = sum of:
        0.028580546 = product of:
          0.08574164 = sum of:
            0.08574164 = weight(_text_:k in 3147) [ClassicSimilarity], result of:
              0.08574164 = score(doc=3147,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.63105357 = fieldWeight in 3147, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.125 = fieldNorm(doc=3147)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Information retrieval experiment. Ed.: K. Sparck Jones
  19. Belkin, N.J.: Ineffable concepts in information retrieval (1981) 0.01
    0.0071451366 = product of:
      0.028580546 = sum of:
        0.028580546 = product of:
          0.08574164 = sum of:
            0.08574164 = weight(_text_:k in 3148) [ClassicSimilarity], result of:
              0.08574164 = score(doc=3148,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.63105357 = fieldWeight in 3148, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.125 = fieldNorm(doc=3148)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Information retrieval experiment. Ed.: K. Sparck Jones
  20. Tague, J.M.: The pragmatics of information retrieval experimentation (1981) 0.01
    0.0071451366 = product of:
      0.028580546 = sum of:
        0.028580546 = product of:
          0.08574164 = sum of:
            0.08574164 = weight(_text_:k in 3149) [ClassicSimilarity], result of:
              0.08574164 = score(doc=3149,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.63105357 = fieldWeight in 3149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.125 = fieldNorm(doc=3149)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Source
    Information retrieval experiment. Ed.: K. Sparck Jones

Languages

  • e 104
  • d 9
  • f 1
  • m 1

Types

  • a 104
  • s 8
  • m 5
  • el 2
  • r 1