Search (15 results, page 1 of 1)

  • author_ss:"Kantor, P.B."
  1. Elovici, Y.; Shapira, Y.B.; Kantor, P.B.: ¬A decision theoretic approach to combining information filters : an analytical and empirical evaluation. (2006) 0.03
    0.02822417 = product of:
      0.042336255 = sum of:
        0.01867095 = weight(_text_:on in 5267) [ClassicSimilarity], result of:
          0.01867095 = score(doc=5267,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 5267, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5267)
        0.023665305 = product of:
          0.04733061 = sum of:
            0.04733061 = weight(_text_:22 in 5267) [ClassicSimilarity], result of:
              0.04733061 = score(doc=5267,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2708308 = fieldWeight in 5267, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5267)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
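     The indented tree above is Lucene's explain() output for the classic TF-IDF similarity: each term's weight is queryWeight (idf × queryNorm) times fieldWeight (sqrt(termFreq) × idf × fieldNorm), and coord() scales by the fraction of query clauses matched. A minimal Python sketch (illustrative only, not the retrieval engine itself) reproduces the score of result 1 from the factors printed above:

```python
# Minimal sketch: recompute the 0.02822417 score of result 1 from the
# ClassicSimilarity factors printed in the explain() tree above.
import math

def term_score(freq, idf, query_norm, field_norm):
    """weight = queryWeight * fieldWeight for one query term in one field."""
    query_weight = idf * query_norm                    # e.g. 2.199415 * 0.04990557
    field_weight = math.sqrt(freq) * idf * field_norm  # tf = sqrt(termFreq)
    return query_weight * field_weight

query_norm = 0.04990557
field_norm = 0.0546875  # fieldNorm(doc=5267)

w_on = term_score(freq=2.0, idf=2.199415,  query_norm=query_norm, field_norm=field_norm)
w_22 = term_score(freq=2.0, idf=3.5018296, query_norm=query_norm, field_norm=field_norm)

# The "22" clause sits one level deeper, so coord(1/2) applies to it first;
# the outer coord(2/3) rewards matching 2 of the 3 query clauses.
score = (w_on + w_22 * 0.5) * (2.0 / 3.0)
print(w_on, w_22 * 0.5, score)  # ~0.01867095  ~0.023665305  ~0.02822417
```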
    
    Abstract
     The outputs of several information filtering (IF) systems can be combined to improve filtering performance. In this article, the authors propose and explore a framework based on the so-called information structure (IS) model, which is frequently used in Information Economics, for combining the output of multiple IF systems according to each user's preferences (profile). The combination seeks to maximize the expected payoff to that user. The authors show analytically that the proposed framework increases users' expected payoff from the combined filtering output for any user preferences. An experiment using the TREC-6 test collection confirms the theoretical findings.
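     A minimal, hypothetical sketch of the decision-theoretic idea in this abstract follows; the per-filter relevance estimates, the simple averaging rule, and the payoff values are illustrative assumptions, not the authors' IS-model construction: given a user-profile payoff table, a document is delivered only when delivery has the higher expected payoff.

```python
# Hypothetical illustration (values and the naive averaging rule are assumed,
# not taken from the paper): pick the action with the larger expected payoff
# for this user, given each filter's estimate that a document is relevant.
filter_estimates = [0.8, 0.55, 0.3]                     # P(relevant) from three IF systems
p_rel = sum(filter_estimates) / len(filter_estimates)   # naive combination

# User-profile payoffs: action -> (payoff if relevant, payoff if not relevant)
payoff = {
    "deliver": (1.0, -0.4),   # value of a good hit vs. cost of reading junk
    "discard": (-0.6, 0.0),   # cost of missing a relevant document
}

expected = {a: p_rel * v_rel + (1 - p_rel) * v_irr
            for a, (v_rel, v_irr) in payoff.items()}
best = max(expected, key=expected.get)
print(expected, "->", best)   # here: deliver (0.37) beats discard (-0.33)
```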
    Date
    22. 7.2006 15:05:39
  2. Kantor, P.B.: Mathematical models in information science (2002) 0.02
    0.01577687 = product of:
      0.04733061 = sum of:
        0.04733061 = product of:
          0.09466122 = sum of:
            0.09466122 = weight(_text_:22 in 4112) [ClassicSimilarity], result of:
              0.09466122 = score(doc=4112,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.5416616 = fieldWeight in 4112, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4112)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Bulletin of the American Society for Information Science. 28(2002) no.6, S.22-24
  3. Kantor, P.B.: ¬The logic of weighted queries (1981) 0.01
    0.014225486 = product of:
      0.042676456 = sum of:
        0.042676456 = weight(_text_:on in 7622) [ClassicSimilarity], result of:
          0.042676456 = score(doc=7622,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3888053 = fieldWeight in 7622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.125 = fieldNorm(doc=7622)
      0.33333334 = coord(1/3)
    
    Source
    IEEE transactions on systems, man and cybernetics. 7(1981) S.816-821
  4. Kantor, P.B.: ¬A model for stopping behavior of the users of on-line systems (1987) 0.01
    0.0124473 = product of:
      0.0373419 = sum of:
        0.0373419 = weight(_text_:on in 3945) [ClassicSimilarity], result of:
          0.0373419 = score(doc=3945,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.34020463 = fieldWeight in 3945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.109375 = fieldNorm(doc=3945)
      0.33333334 = coord(1/3)
    
  5. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.01
    0.011269193 = product of:
      0.03380758 = sum of:
        0.03380758 = product of:
          0.06761516 = sum of:
            0.06761516 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.06761516 = score(doc=3107,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:59:22
  6. Ng, K.B.; Kantor, P.B.: Two experiments on retrieval with corrupted data and clean queries in the TREC4 adhoc task environment : data fusion and pattern scanning (1996) 0.01
    0.010669115 = product of:
      0.032007344 = sum of:
        0.032007344 = weight(_text_:on in 7571) [ClassicSimilarity], result of:
          0.032007344 = score(doc=7571,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.29160398 = fieldWeight in 7571, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.09375 = fieldNorm(doc=7571)
      0.33333334 = coord(1/3)
    
  7. Menkov, V.; Ginsparg, P.; Kantor, P.B.: Recommendations and privacy in the arXiv system : a simulation experiment using historical data (2020) 0.01
    0.010669115 = product of:
      0.032007344 = sum of:
        0.032007344 = weight(_text_:on in 5671) [ClassicSimilarity], result of:
          0.032007344 = score(doc=5671,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.29160398 = fieldWeight in 5671, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5671)
      0.33333334 = coord(1/3)
    
    Abstract
     Recommender systems may accelerate knowledge discovery in many fields. However, their users may be competitors guarding their ideas before publication or for other reasons. We describe a simulation experiment to assess user privacy against targeted attacks, modeling recommendations based on co-access data. The analysis uses an unusually long (14 years) set of anonymized historical data on user-item accesses. We introduce the notions of "visibility" and "discoverability." We find, based on historical data, that the majority of the actions of arXiv users would be potentially "visible" under targeted attack. However, "discoverability," which incorporates the difficulty of actually seeing a "visible" effect, is very much lower for nearly all users. We consider the effect of changes to the settings of the recommender algorithm on the visibility and discoverability of user actions and propose mitigation strategies that reduce both measures of risk.
  8. Shapira, B.; Kantor, P.B.; Melamed, B.: ¬The effect of extrinsic motivation on user behavior in a collaborative information finding system (2001) 0.01
    0.009239726 = product of:
      0.027719175 = sum of:
        0.027719175 = weight(_text_:on in 6525) [ClassicSimilarity], result of:
          0.027719175 = score(doc=6525,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.25253648 = fieldWeight in 6525, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=6525)
      0.33333334 = coord(1/3)
    
    Abstract
     In collaborative information finding systems, evaluations provided by users assist other users with similar needs. This article examines the problem of getting users to provide evaluations, thus overcoming the so-called "free-riding" behavior of users. Free riders are those who use the information provided by others without contributing evaluations of their own. This article reports on an experiment conducted using the "AntWorld" system, a collaborative information finding system for the Internet, to explore the effect of added motivation on users' behavior. The findings suggest that for the system to be effective, users must be motivated either by the environment or by incentives within the system. The findings suggest that relatively inexpensive extrinsic motivators can produce modest but significant increases in cooperative behavior.
  9. Sun, Y.; Kantor, P.B.; Morse, E.L.: Using cross-evaluation to evaluate interactive QA systems (2011) 0.01
    0.009239726 = product of:
      0.027719175 = sum of:
        0.027719175 = weight(_text_:on in 4744) [ClassicSimilarity], result of:
          0.027719175 = score(doc=4744,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.25253648 = fieldWeight in 4744, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=4744)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, we report on an experiment to assess the possibility of rigorous evaluation of interactive question-answering (QA) systems using the cross-evaluation method. This method takes into account the effects of tasks and context, and of the users of the systems. Statistical techniques are used to remove these effects, isolating the effect of the system itself. The results show that this approach yields meaningful measurements of the impact of systems on user task performance, using a surprisingly small number of subjects and without relying on predetermined judgments of the quality, or of the relevance of materials. We conclude that the method is indeed effective for comparing end-to-end QA systems, and for comparing interactive systems with high efficiency.
  10. Saracevic, T.; Kantor, P.B.: Studying the value of library and information services : Part II: Methodology and taxonomy (1997) 0.01
    0.0075442037 = product of:
      0.02263261 = sum of:
        0.02263261 = weight(_text_:on in 353) [ClassicSimilarity], result of:
          0.02263261 = score(doc=353,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 353, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=353)
      0.33333334 = coord(1/3)
    
    Abstract
     Details the specifics of the study: importance of taxonomy; the method used for gathering data on user assessments of value in 5 research libraries, involving 18 services and 528 interviews with users; development and presentation of the taxonomy; and statistics and tests of the taxonomy. A novel aspect is the division of value of information services into 3 general classes or facets: reasons for use of a service in the given instance; quality of interaction (use) related to that service; and worth, benefits, or implications of subsequent results from use.
    Footnote
     2nd part of a study to develop a taxonomy of value-in-use of library and information services based on users' assessments and to propose methods and instruments for similar studies of library and information services in general.
  11. Ng, K.B.; Kantor, P.B.; Strzalkowski, T.; Wacholder, N.; Tang, R.; Bai, B.; Rittman; Song, P.; Sun, Y.: Automated judgment of document qualities (2006) 0.01
    0.0075442037 = product of:
      0.02263261 = sum of:
        0.02263261 = weight(_text_:on in 182) [ClassicSimilarity], result of:
          0.02263261 = score(doc=182,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=182)
      0.33333334 = coord(1/3)
    
    Abstract
    The authors report on a series of experiments to automate the assessment of document qualities such as depth and objectivity. The primary purpose is to develop a quality-sensitive functionality, orthogonal to relevance, to select documents for an interactive question-answering system. The study consisted of two stages. In the classifier construction stage, nine document qualities deemed important by information professionals were identified and classifiers were developed to predict their values. In the confirmative evaluation stage, the performance of the developed methods was checked using a different document collection. The quality prediction methods worked well in the second stage. The results strongly suggest that the best way to predict document qualities automatically is to construct classifiers on a person-by-person basis.
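     A minimal scikit-learn sketch of "classifiers on a person-by-person basis" follows; the feature representation, the model choice, and the toy labels are assumptions for illustration, not the authors' actual setup: one text classifier is trained per assessor, each predicting that assessor's label for a single quality such as depth.

```python
# Illustrative sketch (assumed data layout and model, not the authors' system):
# train one quality classifier per assessor on that assessor's own judgments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# judgments[assessor] = list of (document_text, label) pairs, e.g. for "depth"
judgments = {
    "assessor_1": [("short shallow note", 0), ("long detailed analysis", 1)],
    "assessor_2": [("overview paragraph", 0), ("thorough technical report", 1)],
}

per_person_models = {}
for assessor, pairs in judgments.items():
    texts, labels = zip(*pairs)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(list(texts), list(labels))
    per_person_models[assessor] = model

print(per_person_models["assessor_1"].predict(["a detailed analysis"]))
```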
  12. Kantor, P.B.: Information theory (2009) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 3815) [ClassicSimilarity], result of:
          0.021338228 = score(doc=3815,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 3815, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=3815)
      0.33333334 = coord(1/3)
    
    Abstract
     Information theory "measures quantity of information" and is that branch of applied mathematics that deals with the efficient transmission of messages in an encoded language. It is fundamental to modern methods of telecommunication, image compression, and security. Its relation to library and information science is less direct. More relevant to the LIS conception of "quantity of information" are economic concepts related to the expected value of a decision, and the influence of imperfect information on that expected value.
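     As a concrete illustration of "quantity of information" in the Shannon sense (a standard textbook computation, not drawn from this entry), the entropy of a source gives the average number of bits per symbol needed to encode its messages:

```python
# Standard Shannon entropy in bits per symbol: the lower bound on the
# average code length for messages drawn from this source.
import math

def entropy_bits(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # 4 equally likely symbols -> 2.0 bits
skewed  = [0.7, 0.1, 0.1, 0.1]       # more predictable source -> ~1.357 bits

print(entropy_bits(uniform))
print(entropy_bits(skewed))
```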
  13. Sun, Y.; Kantor, P.B.: Cross-evaluation : a new model for information system evaluation (2006) 0.01
    0.0062868367 = product of:
      0.01886051 = sum of:
        0.01886051 = weight(_text_:on in 5048) [ClassicSimilarity], result of:
          0.01886051 = score(doc=5048,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 5048, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5048)
      0.33333334 = coord(1/3)
    
    Abstract
     In this article, we introduce a new information system evaluation method and report on its application to a collaborative information seeking system, AntWorld. The key innovation of the new method is to use precisely the same group of users who work with the system as judges, an approach we call Cross-Evaluation. In the new method, we also propose to assess the system at the level of task completion. The obvious potential limitation of this method is that individuals may be inclined to think more highly of the materials that they themselves have found and are almost certain to think more highly of their own work product than they do of the products built by others. The keys to neutralizing this problem are careful design and a corresponding analytical model based on analysis of variance. We model the several measures of task completion with a linear model of five effects, describing the users who interact with the system, the system used to finish the task, the task itself, the behavior of individuals as judges, and the self-judgment bias. Our analytical method successfully isolates the effect of each variable. This approach provides a successful model to make concrete the "three realities" paradigm, which calls for "real tasks," "real users," and "real systems."
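     One plausible way to write such a five-effect linear model is sketched below; the symbols are chosen here for illustration and are not the paper's notation: a task-completion score is decomposed into user, system, task, and judge effects plus a self-judgment term.

```latex
% Illustrative form only (assumed notation, not the paper's):
% score for user u on system s and task t, as rated by judge j,
% with a bias term that switches on when the judge rates their own work.
\[
  y_{ustj} \;=\; \mu + \alpha_u + \beta_s + \gamma_t + \delta_j
           \;+\; \lambda\,[\,j = u\,] \;+\; \varepsilon_{ustj}
\]
```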
  14. Saracevic, T.; Kantor, P.B.: Studying the value of library and information services : Part I: Establishing a theoretical framework (1997) 0.01
    0.0053345575 = product of:
      0.016003672 = sum of:
        0.016003672 = weight(_text_:on in 352) [ClassicSimilarity], result of:
          0.016003672 = score(doc=352,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 352, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=352)
      0.33333334 = coord(1/3)
    
    Footnote
     1st part of a study to develop a taxonomy of value-in-use of library and information services based on users' assessments and to propose methods and instruments for similar studies of library and information services in general.
  15. Kantor, P.B.; Saracevic, T.: Quantitative study of the value of research libraries : a foundation for the evaluation of digital libraries (1999) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 6711) [ClassicSimilarity], result of:
          0.013336393 = score(doc=6711,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 6711, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6711)
      0.33333334 = coord(1/3)
    
    Abstract
     In anticipation of the explosive growth of digital libraries, a complex study was undertaken seeking to evaluate 21 diverse services at 5 major academic research libraries. This work stands as a model for evaluation of digital libraries through its focus on both the costs of operations and the impacts of the services that those operations provide. The data have been analyzed using both statistical methods and methods of Data Envelopment Analysis. The results of the study, which are presented in detail, serve to demonstrate that a cross-functional approach to library services is feasible. They also highlight a new measure of impact, which is a weighted logarithmic combination of the amount of time that users spend interacting with the service, combined with a Likert-scale indication of the value of that service in relation to the time expended. The measure derived, incorporating simple information obtainable from the user, together with information which is readily available in server/client logs, provides an excellent foundation for transferring these measurement principles to the Digital Library environment.