Search (14 results, page 1 of 1)

  • Filter: author_ss:"Kantor, P.B."
  1. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.01
    0.009103368 = product of:
      0.031861786 = sum of:
        0.007323784 = product of:
          0.03661892 = sum of:
            0.03661892 = weight(_text_:retrieval in 3107) [ClassicSimilarity], result of:
              0.03661892 = score(doc=3107,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.33420905 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.2 = coord(1/5)
        0.024538001 = product of:
          0.049076002 = sum of:
            0.049076002 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.049076002 = score(doc=3107,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    27. 2.1999 20:59:22
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees and D.K. Harman
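The indented breakdowns attached to each result are Lucene "explain" traces for the ClassicSimilarity (TF-IDF) ranking function: each matching term contributes queryWeight × fieldWeight, clauses are damped by coord factors, and the products are summed. As a rough check, the Python sketch below reproduces the arithmetic of result 1 (doc 3107) from the printed tf, idf, queryNorm and fieldNorm values; the function and variable names are mine, not part of the search system.

```python
import math

def clause_score(raw_tf, idf, query_norm, field_norm):
    """One term clause under Lucene ClassicSimilarity:
    queryWeight = idf * queryNorm; fieldWeight = sqrt(tf) * idf * fieldNorm;
    clause score = queryWeight * fieldWeight."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(raw_tf) * idf * field_norm
    return query_weight * field_weight

query_norm = 0.03622214  # taken from the explain tree above

# "retrieval" and "22" clauses for doc 3107, each with freq 2.0 and fieldNorm 0.078125
retrieval = clause_score(2.0, 3.024915, query_norm, 0.078125)   # ~0.03661892
term_22   = clause_score(2.0, 3.5018296, query_norm, 0.078125)  # ~0.049076002

# Each clause is scaled by its inner coord factor, the document total by the outer coord(2/7).
doc_score = (retrieval * (1 / 5) + term_22 * (1 / 2)) * (2 / 7)
print(doc_score)  # ~0.0091034, matching the 0.009103368 shown for result 1
```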
  2. Kantor, P.B.: Information retrieval techniques (1994) 0.01
    0.005000235 = product of:
      0.035001643 = sum of:
        0.035001643 = product of:
          0.087504104 = sum of:
            0.053818595 = weight(_text_:retrieval in 1056) [ClassicSimilarity], result of:
              0.053818595 = score(doc=1056,freq=12.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.49118498 = fieldWeight in 1056, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1056)
            0.033685513 = weight(_text_:system in 1056) [ClassicSimilarity], result of:
              0.033685513 = score(doc=1056,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.29527056 = fieldWeight in 1056, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1056)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    State-of-the-art review of information retrieval techniques viewed in terms of the growing effort to implement concept-based retrieval in content-based algorithms. Identifies trends in the automation of indexing, retrieval, and the interaction between systems and users. Identifies 3 central issues: ways in which systems describe documents for purposes of information retrieval; ways in which systems compute the degree of match between a given document and the current state of the query; and what the systems do with the information that they obtain from the users. Looks at information retrieval techniques in terms of: location, navigation; indexing; documents; queries; structures; concepts; matching documents to queries; restoring query structure; algorithms and content versus concepts; formulation of concepts in terms of contents; formulation of concepts with the assistance of the users; complex system codes versus underlying principles; and system evaluation.
  3. Kantor, P.B.: Mathematical models in information science (2002) 0.00
    0.0049076 = product of:
      0.0343532 = sum of:
        0.0343532 = product of:
          0.0687064 = sum of:
            0.0687064 = weight(_text_:22 in 4112) [ClassicSimilarity], result of:
              0.0687064 = score(doc=4112,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.5416616 = fieldWeight in 4112, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4112)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Bulletin of the American Society for Information Science. 28(2002) no.6, pp.22-24
  4. Elovici, Y.; Shapira, Y.B.; Kantor, P.B.: A decision theoretic approach to combining information filters : an analytical and empirical evaluation (2006) 0.00
    0.0024538 = product of:
      0.0171766 = sum of:
        0.0171766 = product of:
          0.0343532 = sum of:
            0.0343532 = weight(_text_:22 in 5267) [ClassicSimilarity], result of:
              0.0343532 = score(doc=5267,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.2708308 = fieldWeight in 5267, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5267)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2006 15:05:39
  5. Kantor, P.B.; Voorhees, E.: Information retrieval with scanned texts (2000) 0.00
    0.0023674043 = product of:
      0.01657183 = sum of:
        0.01657183 = product of:
          0.08285914 = sum of:
            0.08285914 = weight(_text_:retrieval in 3901) [ClassicSimilarity], result of:
              0.08285914 = score(doc=3901,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.75622874 = fieldWeight in 3901, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.125 = fieldNorm(doc=3901)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Source
    Information retrieval. 2(2000), pp.165-176
  6. Ng, K.B.; Kantor, P.B.: Two experiments on retrieval with corrupted data and clean queries in the TREC4 adhoc task environment : data fusion and pattern scanning (1996) 0.00
    0.0017755532 = product of:
      0.012428872 = sum of:
        0.012428872 = product of:
          0.06214436 = sum of:
            0.06214436 = weight(_text_:retrieval in 7571) [ClassicSimilarity], result of:
              0.06214436 = score(doc=7571,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.5671716 = fieldWeight in 7571, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7571)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
  7. Sun, Y.; Kantor, P.B.: Cross-evaluation : a new model for information system evaluation (2006) 0.00
    0.0016040723 = product of:
      0.0112285055 = sum of:
        0.0112285055 = product of:
          0.056142528 = sum of:
            0.056142528 = weight(_text_:system in 5048) [ClassicSimilarity], result of:
              0.056142528 = score(doc=5048,freq=16.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.49211764 = fieldWeight in 5048, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5048)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article, we introduce a new information system evaluation method and report on its application to a collaborative information seeking system, AntWorld. The key innovation of the new method is to use precisely the same group of users who work with the system as judges, an approach we call Cross-Evaluation. In the new method, we also propose to assess the system at the level of task completion. The obvious potential limitation of this method is that individuals may be inclined to think more highly of the materials that they themselves have found and are almost certain to think more highly of their own work product than they do of the products built by others. The keys to neutralizing this problem are careful design and a corresponding analytical model based on analysis of variance. We model the several measures of task completion with a linear model of five effects, describing the users who interact with the system, the system used to finish the task, the task itself, the behavior of individuals as judges, and the self-judgment bias. Our analytical method successfully isolates the effect of each variable. This approach provides a successful model to make concrete the "three realities" paradigm, which calls for "real tasks," "real users," and "real systems."
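The abstract names a linear model of five effects (user, system, task, judge, and self-judgment bias) whose contributions are separated by analysis of variance. Purely as an illustration of that kind of decomposition, and not the authors' actual code or data, a fixed-effects fit might look like the sketch below; the column names, the synthetic data, and the statsmodels formula are assumptions.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical design: every (user, system, task) work product is scored by every judge.
users, systems, tasks = ["u1", "u2", "u3"], ["A", "B"], ["t1", "t2"]
rows = [{"user": u, "system": s, "task": t, "judge": j}
        for u, s, t in itertools.product(users, systems, tasks) for j in users]
df = pd.DataFrame(rows)

# Self-judgment indicator: 1 when participants score their own work product.
df["self_judgment"] = (df["user"] == df["judge"]).astype(int)

# Synthetic task-completion scores with a small built-in system effect and self-judgment bias.
df["score"] = (rng.normal(0.6, 0.1, len(df))
               + 0.1 * (df["system"] == "A")
               + 0.05 * df["self_judgment"])

# Additive model of the five effects named in the abstract; ANOVA-style decomposition.
fit = smf.ols("score ~ C(user) + C(system) + C(task) + C(judge) + self_judgment", data=df).fit()
print(fit.params.filter(like="system"))  # the system effect, isolated from the other factors
```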
  8. Shapira, B.; Kantor, P.B.; Melamed, B.: ¬The effect of extrinsic motivation on user behavior in a collaborative information finding system (2001) 0.00
    0.0015217565 = product of:
      0.010652295 = sum of:
        0.010652295 = product of:
          0.053261478 = sum of:
            0.053261478 = weight(_text_:system in 6525) [ClassicSimilarity], result of:
              0.053261478 = score(doc=6525,freq=10.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46686378 = fieldWeight in 6525, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6525)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    In collaborative information finding systems, evaluations provided by users assist other users with similar needs. This article examines the problem of getting users to provide evaluations, thus overcoming the so-called "free-riding" behavior of users. Free riders are those who use the information provided by others without contributing evaluations of their own. This article reports on an experiment conducted using the "AntWorld" system, a collaborative information finding system for the Internet, to explore the effect of added motivation on users' behavior. The findings suggest that for the system to be effective, users must be motivated either by the environment or by incentives within the system. The findings suggest that relatively inexpensive extrinsic motivators can produce modest but significant increases in cooperative behavior.
  9. Kantor, P.B.; Lee, J.J.: Testing the maximum entropy principle for information retrieval (1998) 0.00
    8.3700387E-4 = product of:
      0.0058590267 = sum of:
        0.0058590267 = product of:
          0.029295133 = sum of:
            0.029295133 = weight(_text_:retrieval in 3266) [ClassicSimilarity], result of:
              0.029295133 = score(doc=3266,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.26736724 = fieldWeight in 3266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3266)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
  10. Kantor, P.B.: The Adaptive Network Library Interface : a historical overview and interim report (1993) 0.00
    7.9397525E-4 = product of:
      0.0055578267 = sum of:
        0.0055578267 = product of:
          0.027789133 = sum of:
            0.027789133 = weight(_text_:system in 6976) [ClassicSimilarity], result of:
              0.027789133 = score(doc=6976,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.2435858 = fieldWeight in 6976, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6976)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Describes the evolution of the concept of an Adaptive Network Library Interface (ANLI) and explores several technical and research issues. The ANLI is a computer program that stands as a buffer between users of the library catalogue and the catalogue itself. This buffer unit maintains its own network of pointers from book to book, which it elicits from the users, interactively. It is hoped that such a buffer increases the value of the catalogue for users and provides librarians with new and useful information about the books in the collection. Explores the relationship between this system and hypertext and neural networks
  11. Ng, K.B.; Kantor, P.B.; Strzalkowski, T.; Wacholder, N.; Tang, R.; Bai, B.; Rittman; Song, P.; Sun, Y.: Automated judgment of document qualities (2006) 0.00
    6.8055023E-4 = product of:
      0.0047638514 = sum of:
        0.0047638514 = product of:
          0.023819257 = sum of:
            0.023819257 = weight(_text_:system in 182) [ClassicSimilarity], result of:
              0.023819257 = score(doc=182,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20878783 = fieldWeight in 182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=182)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    The authors report on a series of experiments to automate the assessment of document qualities such as depth and objectivity. The primary purpose is to develop a quality-sensitive functionality, orthogonal to relevance, to select documents for an interactive question-answering system. The study consisted of two stages. In the classifier construction stage, nine document qualities deemed important by information professionals were identified and classifiers were developed to predict their values. In the confirmative evaluation stage, the performance of the developed methods was checked using a different document collection. The quality prediction methods worked well in the second stage. The results strongly suggest that the best way to predict document qualities automatically is to construct classifiers on a person-by-person basis.
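The conclusion that document qualities are best predicted person-by-person can be pictured as one text classifier trained per judge. The scikit-learn sketch below only illustrates that setup; the feature representation, the binary quality label, and all names are assumptions rather than the authors' actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each judge has labeled documents on one quality (e.g. "depth").
ratings_by_judge = {
    "judge_1": (["deep analysis ...", "shallow note ..."], [1, 0]),
    "judge_2": (["thorough report ...", "brief memo ..."], [1, 0]),
}

# One classifier per judge, mirroring the person-by-person construction described above.
models = {}
for judge, (docs, labels) in ratings_by_judge.items():
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(docs, labels)
    models[judge] = clf

print(models["judge_1"].predict(["a detailed and objective analysis"]))
```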
  12. Sun, Y.; Kantor, P.B.; Morse, E.L.: Using cross-evaluation to evaluate interactive QA systems (2011) 0.00
    6.8055023E-4 = product of:
      0.0047638514 = sum of:
        0.0047638514 = product of:
          0.023819257 = sum of:
            0.023819257 = weight(_text_:system in 4744) [ClassicSimilarity], result of:
              0.023819257 = score(doc=4744,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20878783 = fieldWeight in 4744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4744)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article, we report on an experiment to assess the possibility of rigorous evaluation of interactive question-answering (QA) systems using the cross-evaluation method. This method takes into account the effects of tasks and context, and of the users of the systems. Statistical techniques are used to remove these effects, isolating the effect of the system itself. The results show that this approach yields meaningful measurements of the impact of systems on user task performance, using a surprisingly small number of subjects and without relying on predetermined judgments of the quality, or of the relevance of materials. We conclude that the method is indeed effective for comparing end-to-end QA systems, and for comparing interactive systems with high efficiency.
  13. Menkov, V.; Ginsparg, P.; Kantor, P.B.: Recommendations and privacy in the arXiv system : a simulation experiment using historical data (2020) 0.00
    6.8055023E-4 = product of:
      0.0047638514 = sum of:
        0.0047638514 = product of:
          0.023819257 = sum of:
            0.023819257 = weight(_text_:system in 5671) [ClassicSimilarity], result of:
              0.023819257 = score(doc=5671,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20878783 = fieldWeight in 5671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5671)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
  14. Boros, E.; Kantor, P.B.; Neu, D.J.: Pheromonic representation of user quests by digital structures (1999) 0.00
    5.2312744E-4 = product of:
      0.003661892 = sum of:
        0.003661892 = product of:
          0.01830946 = sum of:
            0.01830946 = weight(_text_:retrieval in 6684) [ClassicSimilarity], result of:
              0.01830946 = score(doc=6684,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.16710453 = fieldWeight in 6684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6684)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    In a novel approach to information finding in networked environments, each user's specific purpose or "quest" can be represented in numerous ways. The most familiar is a list of keywords, or a natural language sentence or paragraph. More effective is an extended text that has been judged as to relevance. This forms the basis of relevance feedback, as it is used in information retrieval. In the "Ant World" project (Ant World, 1999; Kantor et al., 1999b; Kantor et al., 1999a), the items to be retrieved are not documents, but rather quests, represented by entire collections of judged documents. In order to save space and time we have developed methods for representing these complex entities in a short string of about 1,000 bytes, which we call a "Digital Information Pheromone" (DIP). The principles for determining the DIP for a given quest, and for matching DIPs to each other, are presented. The effectiveness of this scheme is explored with some applications to the large judged collections of TREC documents.
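The abstract describes folding an entire judged collection (a "quest") into a signature of roughly 1,000 bytes so that quests can be matched against one another. The published DIP construction is not reproduced here; the sketch below merely illustrates the general idea with a hashed term signature and cosine matching, under assumptions of my own.

```python
import hashlib
import math
from collections import Counter

DIP_SIZE = 1000  # bytes, per the abstract's rough figure

def dip(judged_docs):
    """Fold the terms of a quest's judged documents into a fixed-size byte signature.
    This hashed-signature scheme is illustrative only, not the published DIP method."""
    counts = Counter()
    for text in judged_docs:
        for term in text.lower().split():
            bucket = int(hashlib.md5(term.encode()).hexdigest(), 16) % DIP_SIZE
            counts[bucket] += 1
    top = max(counts.values()) if counts else 1
    # Scale bucket counts into single bytes so the whole quest fits in DIP_SIZE bytes.
    return bytes(min(255, round(255 * counts[i] / top)) for i in range(DIP_SIZE))

def match(a, b):
    """Cosine similarity between two signatures, used to compare quests to each other."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

q1 = dip(["data fusion for routing", "judged relevant documents about retrieval"])
q2 = dip(["retrieval of judged documents", "routing and data fusion"])
print(round(match(q1, q2), 3))  # overlapping quests yield signatures that match closely
```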