Search (3 results, page 1 of 1)

  • author_ss:"Kantor, P."
  1. Saracevic, T.; Kantor, P.; Chamis, A.Y.: A study of information seeking and retrieving : pt.1: Background and methodology (1988) 0.00
    0.0021291035 = product of:
      0.01277462 = sum of:
        0.01277462 = weight(_text_:in in 1963) [ClassicSimilarity], result of:
          0.01277462 = score(doc=1963,freq=10.0), product of:
            0.06335582 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046576444 = queryNorm
            0.20163295 = fieldWeight in 1963, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1963)
      0.16666667 = coord(1/6)
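
    The nested breakdown above is Lucene's ClassicSimilarity "explain" output: score = queryWeight * fieldWeight * coord, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm. A minimal Python sketch reproducing the arithmetic for this entry (all constants are copied from the tree above; the helper names are mine, not Lucene's):

        import math

        # ClassicSimilarity components (Lucene's classic TF-IDF):
        def tf(freq):                 # term-frequency factor: sqrt(freq)
            return math.sqrt(freq)

        def idf(doc_freq, max_docs):  # inverse document frequency
            return 1 + math.log(max_docs / (doc_freq + 1))

        query_norm = 0.046576444      # queryNorm, copied from the tree
        field_norm = 0.046875         # fieldNorm(doc=1963)
        coord      = 1 / 6            # coord(1/6): 1 of 6 query clauses matched

        idf_val      = idf(30841, 44218)                # -> 1.3602545
        query_weight = idf_val * query_norm             # -> 0.06335582
        field_weight = tf(10.0) * idf_val * field_norm  # -> 0.20163295

        score = query_weight * field_weight * coord     # -> 0.0021291035
        print(f"{score:.10f}")

    The second and third entries below follow the same formula; only freq, fieldNorm, and the document id change.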
    
    Abstract
    The objectives of the study were to conduct a series of observations and experiments under conditions as close to real life as possible, related to: (i) the user context of questions in information retrieval; (ii) the structure and classification of questions; (iii) cognitive traits and decision making of searchers; and (iv) different searches of the same question. The study is presented in three parts: pt.1 presents the background of the study and describes the models, measures, methods, procedures, and statistical analyses used. Pt.2 is devoted to results related to users, questions, and effectiveness measures, and pt.3 to results related to searchers, searches, and overlap studies. A concluding summary of all results is presented in pt.3.
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. pp.175-190.
  2. Kantor, P.; Kim, M.H.; Ibraev, U.; Atasoy, K.: Estimating the number of relevant documents in enormous collections (1999) 0.00
    0.0017742527 = product of:
      0.010645516 = sum of:
        0.010645516 = weight(_text_:in in 6690) [ClassicSimilarity], result of:
          0.010645516 = score(doc=6690,freq=10.0), product of:
            0.06335582 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046576444 = queryNorm
            0.16802745 = fieldWeight in 6690, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6690)
      0.16666667 = coord(1/6)
    
    Abstract
    In assessing information retrieval systems, it is important not only to know the precision of the retrieved set, but also to compare the number of retrieved relevant items to the total number of relevant items. For large collections, such as the TREC test collections or the World Wide Web, it is not possible to enumerate the entire set of relevant documents. If the retrieved documents are evaluated, a variant of the statistical "capture-recapture" method can be used to estimate the total number of relevant documents, provided the several retrieval systems used are sufficiently independent. We show that the underlying signal detection model supporting such an analysis can be extended in two ways. First, assuming that there are two distinct performance characteristics (corresponding to the chance of retrieving a given relevant document and the chance of retrieving a given non-relevant document), we show that if there are three or more independent systems available, it is possible to estimate the number of relevant documents without actually having to decide whether each individual document is relevant. We report applications of this three-system method to the TREC data, leading to the conclusion that the independence assumptions are not satisfied. We then extend the model to a multi-system, multi-problem model, and show that it is possible to include statistical dependencies of all orders in the model and to determine the number of relevant documents for each of the problems in the set. Application to the TREC setting will be presented.
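
    The two-system version of the "capture-recapture" estimate described above is the classical Lincoln-Petersen estimator: if two sufficiently independent systems retrieve R_a and R_b relevant documents with an overlap of R_ab, the total is estimated as R ≈ R_a * R_b / R_ab. A minimal sketch; the document ids are invented purely for illustration:

        # Two-system "capture-recapture" (Lincoln-Petersen) estimate of the
        # total number of relevant documents in a collection.

        def estimate_total_relevant(rel_a, rel_b):
            """rel_a, rel_b: sets of judged-relevant ids retrieved by two
            sufficiently independent systems."""
            overlap = len(rel_a & rel_b)
            if overlap == 0:
                raise ValueError("no overlap: estimate is undefined")
            return len(rel_a) * len(rel_b) / overlap

        # Hypothetical judged-relevant retrievals from two systems:
        system_a = {"d01", "d02", "d03", "d04", "d05", "d06"}
        system_b = {"d04", "d05", "d06", "d07", "d08"}

        print(estimate_total_relevant(system_a, system_b))  # 6 * 5 / 3 = 10.0

    The paper's three-system extension removes the need for per-document relevance judgments; the sketch above shows only the basic two-system case that it generalizes.
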
  3. Wacholder, N.; Kelly, D.; Kantor, P.; Rittman, R.; Sun, Y.; Bai, B.; Small, S.; Yamrom, B.; Strzalkowski, T.: A model for quantitative evaluation of an end-to-end question-answering system (2007) 0.00
    9.521639E-4 = product of:
      0.005712983 = sum of:
        0.005712983 = weight(_text_:in in 435) [ClassicSimilarity], result of:
          0.005712983 = score(doc=435,freq=2.0), product of:
            0.06335582 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046576444 = queryNorm
            0.09017298 = fieldWeight in 435, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=435)
      0.16666667 = coord(1/6)
    
    Abstract
    We describe a procedure for quantitative evaluation of interactive question-answering systems and illustrate it with an application to the High-Quality Interactive Question Answering (HITIQA) system. Our objectives were (a) to design a method to realistically and reliably assess interactive question-answering systems by comparing the quality of reports produced using different systems, (b) to conduct a pilot test of this method, and (c) to perform a formative evaluation of the HITIQA system. Far more important than the specific information gathered from this pilot evaluation is the development of (a) a protocol for evaluating an emerging technology, (b) reusable assessment instruments, and (c) the knowledge gained in conducting the evaluation. We conclude that this method, which uses a surprisingly small number of subjects and does not rely on predetermined relevance judgments, measures the impact of system change on the work produced by users. Therefore, this method can be used to compare the products of interactive systems that use different underlying technologies.