Search (22 results, page 2 of 2)

  • author_ss:"Kantor, P.B."
  • language_ss:"e"
  • type_ss:"a"
  1. Kantor, P.B.: Information retrieval techniques (1994)
    Abstract
    State-of-the-art review of information retrieval techniques, viewed in terms of the growing effort to implement concept-based retrieval in content-based algorithms. Identifies trends in the automation of indexing, retrieval, and the interaction between systems and users. Identifies 3 central issues: ways in which systems describe documents for purposes of information retrieval; ways in which systems compute the degree of match between a given document and the current state of the query; and what the systems do with the information that they obtain from the users. Looks at information retrieval techniques in terms of: location; navigation; indexing; documents; queries; structures; concepts; matching documents to queries; restoring query structure; algorithms and content versus concepts; formulation of concepts in terms of contents; formulation of concepts with the assistance of the users; complex system codes versus underlying principles; and system evaluation.
    Type
    a
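    The matching step named in the abstract, computing the degree of match between a document and the current state of the query, is classically realized as TF-IDF term weighting with cosine similarity. A minimal sketch of that idea follows; the toy corpus and function names are invented for illustration and are not Kantor's formulation:

    ```python
    import math
    from collections import Counter

    def tf_idf_vectors(docs):
        """Build sparse TF-IDF weight vectors (term -> weight) for a corpus."""
        n = len(docs)
        df = Counter(t for d in docs for t in set(d.split()))
        idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # smoothed IDF
        vecs = []
        for d in docs:
            tf = Counter(d.split())
            vecs.append({t: tf[t] * idf[t] for t in tf})
        return vecs, idf

    def cosine(u, v):
        """Cosine similarity between two sparse term-weight vectors."""
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    # Hypothetical three-document corpus.
    docs = ["information retrieval techniques",
            "concept based retrieval",
            "user interaction with systems"]
    vecs, idf = tf_idf_vectors(docs)

    # Weight the query terms with the same IDF values, then rank documents.
    query = {t: idf.get(t, 0.0) for t in "retrieval techniques".split()}
    scores = [cosine(query, v) for v in vecs]
    best = max(range(len(docs)), key=lambda i: scores[i])
    ```

    The query matches the first document most strongly because it shares both terms, the second only via "retrieval", and the third not at all.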
  2. Sun, Y.; Kantor, P.B.; Morse, E.L.: Using cross-evaluation to evaluate interactive QA systems (2011)
    Abstract
    In this article, we report on an experiment to assess the possibility of rigorous evaluation of interactive question-answering (QA) systems using the cross-evaluation method. This method takes into account the effects of tasks and context, and of the users of the systems. Statistical techniques are used to remove these effects, isolating the effect of the system itself. The results show that this approach yields meaningful measurements of the impact of systems on user task performance, using a surprisingly small number of subjects and without relying on predetermined judgments of the quality, or of the relevance of materials. We conclude that the method is indeed effective for comparing end-to-end QA systems, and for comparing interactive systems with high efficiency.
    Type
    a
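    The statistical idea in the abstract, removing the effects of users and tasks so that the effect of the system itself is isolated, can be sketched as a simple additive two-way decomposition. This is an illustrative simplification, not the authors' exact model, and the score matrix below is invented:

    ```python
    # Hypothetical task-performance scores: rows = users, columns = systems.
    scores = [
        [0.70, 0.80, 0.60],  # user 0
        [0.50, 0.65, 0.45],  # user 1
        [0.90, 0.95, 0.80],  # user 2
    ]

    n_users = len(scores)
    n_systems = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_users * n_systems)

    # Remove per-user effects (row mean minus grand mean), leaving
    # system effect plus residual in each adjusted cell.
    user_eff = [sum(row) / n_systems - grand for row in scores]
    adjusted = [[scores[u][s] - user_eff[u] for s in range(n_systems)]
                for u in range(n_users)]

    # Estimated system effect: column mean of adjusted scores minus grand mean.
    system_eff = [sum(adjusted[u][s] for u in range(n_users)) / n_users - grand
                  for s in range(n_systems)]
    best = max(range(n_systems), key=lambda s: system_eff[s])
    ```

    After subtracting each user's average level, what remains in a column reflects the system rather than who happened to use it; the effects sum to zero by construction, so systems are compared relative to the grand mean.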