Search (8 results, page 1 of 1)

  • author_ss:"Kantor, P.B."
  1. Elovici, Y.; Shapira, Y.B.; Kantor, P.B.: A decision theoretic approach to combining information filters : an analytical and empirical evaluation (2006) 0.02
    0.024896696 = product of:
      0.099586785 = sum of:
        0.099586785 = sum of:
          0.054437865 = weight(_text_:model in 5267) [ClassicSimilarity], result of:
            0.054437865 = score(doc=5267,freq=2.0), product of:
              0.1830527 = queryWeight, product of:
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.047605187 = queryNorm
              0.29738903 = fieldWeight in 5267, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.845226 = idf(docFreq=2569, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
          0.045148917 = weight(_text_:22 in 5267) [ClassicSimilarity], result of:
            0.045148917 = score(doc=5267,freq=2.0), product of:
              0.16670525 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047605187 = queryNorm
              0.2708308 = fieldWeight in 5267, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
      0.25 = coord(1/4)
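    As a cross-check, the explanation trees in these entries follow Lucene's ClassicSimilarity, and the numbers can be re-derived from the standard TF-IDF formulas. Below is a minimal sketch in Python, assuming tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), with queryNorm and fieldNorm taken verbatim from the tree above:

```python
import math

# Assumed ClassicSimilarity formulas (Lucene TFIDFSimilarity):
#   tf          = sqrt(freq)
#   idf         = 1 + ln(maxDocs / (docFreq + 1))
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm
#   term score  = queryWeight * fieldWeight
#   doc score   = coord * sum(term scores)

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    return (idf * query_norm) * (tf * idf * field_norm)

QUERY_NORM = 0.047605187                     # value shown in the tree
w_model = term_score(2.0, 2569, 44218, QUERY_NORM, 0.0546875)
w_22 = term_score(2.0, 3622, 44218, QUERY_NORM, 0.0546875)
doc_score = 0.25 * (w_model + w_22)          # coord(1/4): 1 of 4 clauses matched

print(f"{w_model:.9f} {w_22:.9f} {doc_score:.9f}")
# prints approx. 0.054437865 0.045148917 0.024896696, matching entry 1
```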
    
    Abstract
    The outputs of several information filtering (IF) systems can be combined to improve filtering performance. In this article the authors propose and explore a framework based on the so-called information structure (IS) model, which is frequently used in Information Economics, for combining the output of multiple IF systems according to each user's preferences (profile). The combination seeks to maximize the expected payoff to that user. The authors show analytically that the proposed framework increases users' expected payoff from the combined filtering output for any user preferences. An experiment using the TREC-6 test collection confirms the theoretical findings.
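    The combination rule lends itself to a small worked example. The following is a hypothetical sketch, with invented payoffs and relevance probabilities (the paper's IS-model machinery is not reproduced): for each combination of the filters' binary outputs, deliver the document only when that action has the higher expected payoff under the user's profile.

```python
from itertools import product

# Hypothetical user profile: payoff of each action given true relevance.
payoff = {("deliver", True): 1.0, ("deliver", False): -0.5,
          ("discard", True): -1.0, ("discard", False): 0.0}

# Hypothetical P(relevant | outputs of filter 1 and filter 2).
p_rel = {(0, 0): 0.05, (0, 1): 0.35, (1, 0): 0.40, (1, 1): 0.90}

def best_action(outputs):
    """Pick the action maximizing expected payoff for this user."""
    def expected(action):
        p = p_rel[outputs]
        return p * payoff[(action, True)] + (1 - p) * payoff[(action, False)]
    return max(("deliver", "discard"), key=expected)

for o in product((0, 1), repeat=2):
    print(o, "->", best_action(o))   # e.g. (1, 1) -> deliver
```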
    Date
    22. 7.2006 15:05:39
  2. Boros, E.; Kantor, P.B.; Neu, D.J.: Pheromonic representation of user quests by digital structures (1999) 0.02
    0.017903598 = product of:
      0.07161439 = sum of:
        0.07161439 = weight(_text_:space in 6684) [ClassicSimilarity], result of:
          0.07161439 = score(doc=6684,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.28827736 = fieldWeight in 6684, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6684)
      0.25 = coord(1/4)
    
    Abstract
    In a novel approach to information finding in networked environments, each user's specific purpose or "quest" can be represented in numerous ways. The most familiar is a list of keywords, or a natural language sentence or paragraph. More effective is an extended text that has been judged as to relevance. This forms the basis of relevance feedback, as it is used in information retrieval. In the "Ant World" project (Ant World, 1999; Kantor et al., 1999b; Kantor et al., 1999a), the items to be retrieved are not documents, but rather quests, represented by entire collections of judged documents. In order to save space and time, we have developed methods for representing these complex entities in a short string of about 1,000 bytes, which we call a "Digital Information Pheromone" (DIP). The principles for determining the DIP for a given quest, and for matching DIPs to each other, are presented. The effectiveness of this scheme is explored with some applications to the large judged collections of TREC documents.
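    The abstract does not say how a DIP is actually constructed, so the following is only one plausible sketch of the idea: hash the terms of the judged documents, signed by their relevance judgments, into a fixed number of buckets totalling roughly 1,000 bytes, and match two signatures by cosine similarity. Every name and parameter here is invented for illustration.

```python
import hashlib
import math

DIP_DIMS = 250   # 250 four-byte buckets, roughly 1,000 bytes per signature

def dip(judged_docs):
    """judged_docs: iterable of (text, relevance) with relevance in {-1, +1}."""
    sig = [0.0] * DIP_DIMS
    for text, rel in judged_docs:
        for term in text.lower().split():
            bucket = int(hashlib.md5(term.encode()).hexdigest(), 16) % DIP_DIMS
            sig[bucket] += rel                 # signed, hashed term counts
    norm = math.sqrt(sum(x * x for x in sig)) or 1.0
    return [x / norm for x in sig]

def match(a, b):
    return sum(x * y for x, y in zip(a, b))    # cosine similarity in [-1, 1]

q1 = dip([("data fusion for routing", 1), ("sports scores", -1)])
q2 = dip([("fusion methods for routing tasks", 1)])
print(round(match(q1, q2), 3))                 # overlapping quests score higher
```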
  3. Kantor, P.B.: A model for stopping behavior of the users of on-line systems (1987) 0.01
    0.013609466 = product of:
      0.054437865 = sum of:
        0.054437865 = product of:
          0.10887573 = sum of:
            0.10887573 = weight(_text_:model in 3945) [ClassicSimilarity], result of:
              0.10887573 = score(doc=3945,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.59477806 = fieldWeight in 3945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3945)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  4. Kantor, P.B.; Nordlie, R.: Models of the behavior of people searching the Internet : a Petri net approach (1999) 0.01
    0.011905802 = product of:
      0.04762321 = sum of:
        0.04762321 = product of:
          0.09524642 = sum of:
            0.09524642 = weight(_text_:model in 6712) [ClassicSimilarity], result of:
              0.09524642 = score(doc=6712,freq=12.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.5203224 = fieldWeight in 6712, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6712)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Previous models of searching behavior have taken as their foundation the Markov model of random processes. In this model, the next action that a user takes is determined by a probabilistic rule conditioned on the most recent experiences of the user. This model, which has achieved very limited success in describing real data, is at odds with the evidence of introspection in a crucial way. Introspection reveals that when we search we are, more or less, in a state of expectancy, which can be satisfied in a number of ways. In addition, the state can be modified by the accumulated evidence of our searches. The Markov model approach cannot readily accommodate such persistence of intention and behavior. The Petri net model, which was developed to analyze the interdependencies among events in a communications network, can be adapted to this situation. In this adaptation, the so-called "transitions" of the Petri net occur only when their necessary pre-conditions have been met. We are able to show that various key abstractions of information finding, such as "document relevance", "a desired number of relevant documents", "discouragement", "exhaustion" and "satisfaction" can all be modeled using the Petri net framework. Further, we show that this model leads naturally to a new approach to the collection of user data, and to the analysis of transaction logs, by providing a far richer description of the user's present state, without inducing a combinatorial explosion.
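    A toy rendering of the Petri-net idea, assuming nothing beyond what the abstract states: transitions such as "satisfaction" and "discouragement" fire only when their pre-conditions are met, and the marking persists across steps, unlike a memoryless Markov state. The places and thresholds below are invented.

```python
# Marking of the net: token counts on each place.
marking = {"searching": 1, "relevant_found": 0, "failures": 0,
           "satisfied": 0, "discouraged": 0}
DESIRED, PATIENCE = 3, 5   # invented thresholds

def judge(doc_is_relevant):
    """One judgment step; fires whichever transition is enabled."""
    if not marking["searching"]:
        return
    if doc_is_relevant:
        marking["relevant_found"] += 1
    else:
        marking["failures"] += 1
    # "satisfaction" is enabled once enough relevant-document tokens accrue.
    if marking["relevant_found"] >= DESIRED:
        marking["searching"] = 0
        marking["satisfied"] = 1
    # "discouragement" is enabled by accumulated failure tokens.
    elif marking["failures"] >= PATIENCE:
        marking["searching"] = 0
        marking["discouraged"] = 1

for rel in (False, True, False, True, True):
    judge(rel)
print(marking)   # ends satisfied after the third relevant document
```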
  5. Kantor, P.B.: Mathematical models in information science (2002) 0.01
    0.011287229 = product of:
      0.045148917 = sum of:
        0.045148917 = product of:
          0.09029783 = sum of:
            0.09029783 = weight(_text_:22 in 4112) [ClassicSimilarity], result of:
              0.09029783 = score(doc=4112,freq=2.0), product of:
                0.16670525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047605187 = queryNorm
                0.5416616 = fieldWeight in 4112, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4112)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Bulletin of the American Society for Information Science. 28(2002) no.6, S.22-24
  6. Sun, Y.; Kantor, P.B.: Cross-evaluation : a new model for information system evaluation (2006) 0.01
    0.010868462 = product of:
      0.043473847 = sum of:
        0.043473847 = product of:
          0.086947694 = sum of:
            0.086947694 = weight(_text_:model in 5048) [ClassicSimilarity], result of:
              0.086947694 = score(doc=5048,freq=10.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.4749872 = fieldWeight in 5048, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5048)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In this article, we introduce a new information system evaluation method and report on its application to a collaborative information seeking system, AntWorld. The key innovation of the new method is to use precisely the same group of users who work with the system as judges, an approach we call cross-evaluation. In the new method, we also propose to assess the system at the level of task completion. The obvious potential limitation of this method is that individuals may be inclined to think more highly of the materials that they themselves have found, and are almost certain to think more highly of their own work product than they do of the products built by others. The keys to neutralizing this problem are careful design and a corresponding analytical model based on analysis of variance. We model the several measures of task completion with a linear model of five effects, describing the users who interact with the system, the system used to finish the task, the task itself, the behavior of individuals as judges, and the self-judgment bias. Our analytical method successfully isolates the effect of each variable. This approach provides a successful model to make concrete the "three realities" paradigm, which calls for "real tasks," "real users," and "real systems."
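    The five-effect linear model can be sketched as an ordinary least-squares fit with dummy-coded effects; the data, coding, and effect sizes below are invented, and only the self-judgment bias term is recovered as a check.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_systems, n_tasks = 4, 2, 3
rows, y = [], []
for u in range(n_users):                      # user who produced the work
    for s in range(n_systems):                # system used for the task
        for t in range(n_tasks):              # the task itself
            for j in range(n_users):          # every user also acts as judge
                x = np.zeros(n_users + n_systems + n_tasks + n_users + 1)
                x[u] = 1
                x[n_users + s] = 1
                x[n_users + n_systems + t] = 1
                x[n_users + n_systems + n_tasks + j] = 1
                x[-1] = 1.0 if u == j else 0.0        # self-judgment indicator
                rows.append(x)
                # Invented ground truth: system and task effects plus a
                # self-judgment bias of 0.8, with small noise.
                y.append(3 + 0.5 * s - 0.2 * t + 0.8 * x[-1]
                         + rng.normal(0, 0.1))
coef, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
print("estimated self-judgment bias:", round(coef[-1], 2))   # approx. 0.8
```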
  7. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.01
    0.008062307 = product of:
      0.032249227 = sum of:
        0.032249227 = product of:
          0.064498454 = sum of:
            0.064498454 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.064498454 = score(doc=3107,freq=2.0), product of:
                0.16670525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047605187 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    27. 2.1999 20:59:22
  8. Kantor, P.B.; Saracevic, T.: Quantitative study of the value of research libraries : a foundation for the evaluation of digital libraries (1999) 0.00
    0.0048605236 = product of:
      0.019442094 = sum of:
        0.019442094 = product of:
          0.03888419 = sum of:
            0.03888419 = weight(_text_:model in 6711) [ClassicSimilarity], result of:
              0.03888419 = score(doc=6711,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.21242073 = fieldWeight in 6711, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6711)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In anticipation of the explosive growth of digital libraries, a complex study was undertaken seeking to evaluate 21 diverse services at 5 major academic research libraries. This work stands as a model for evaluation of digital libraries, through its focus on both the costs of operations and the impacts of the services that those operations provide. The data have been analyzed using both statistical methods and methods of Data Envelopment Analysis. The results of the study, which are presented in detail, serve to demonstrate that a cross-functional approach to library services is feasible. They also highlight a new measure of impact, which is a weighted logarithmic combination of the amount of time that users spend interacting with the service, combined with a Likert-scale indication of the value of that service in relation to the time expended. The measure derived, incorporating simple information obtainable from the user, together with information which is readily available in server/client logs, provides an excellent foundation for transferring these measurement principles to the Digital Library environment.
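    One hypothetical reading of the impact measure: a weighted combination of the logarithm of time spent with the Likert value rating. The abstract does not give the exact formula, so the functional form and weights below are placeholders.

```python
import math

def impact(minutes_spent, likert_value, w_time=1.0, w_value=1.0):
    """Placeholder impact measure: weighted log of time plus weighted rating."""
    return w_time * math.log(1 + minutes_spent) + w_value * likert_value

print(round(impact(30, 4), 2))   # one session: 30 minutes, rated 4 of 5
```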