Search (11 results, page 1 of 1)

  • × author_ss:"Kantor, P.B."
  • × year_i:[1990 TO 2000}
  1. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.07
    0.07240096 = product of:
      0.14480191 = sum of:
        0.1106488 = weight(_text_:standards in 3107) [ClassicSimilarity], result of:
          0.1106488 = score(doc=3107,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.49242854 = fieldWeight in 3107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.078125 = fieldNorm(doc=3107)
        0.03415312 = product of:
          0.06830624 = sum of:
            0.06830624 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.06830624 = score(doc=3107,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    27.02.1999 20:59:22
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology
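    The indented breakdown under each entry is Lucene's "explain" output for the ClassicSimilarity ranking. As a quick check, here is a minimal sketch (not part of the record) that recomputes the first entry's score from the factors listed above; the classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)) are assumed:

    ```python
    import math

    # Recompute the "standards" term weight shown in the first explain tree
    # (values copied from the listing above).
    freq = 2.0
    tf = math.sqrt(freq)                     # 1.4142135 = tf(freq=2.0)
    idf = 1 + math.log(44218 / (1393 + 1))   # 4.4569545 = idf(docFreq=1393, maxDocs=44218)
    query_norm = 0.050415643                 # queryNorm
    field_norm = 0.078125                    # fieldNorm(doc=3107)

    query_weight = idf * query_norm          # 0.22470023
    field_weight = tf * idf * field_norm     # 0.49242854
    standards = query_weight * field_weight  # 0.1106488

    # The "22" term contributes 0.06830624 * coord(1/2); only two of the four
    # query clauses match, so the total is scaled by coord(2/4) = 0.5.
    score = (standards + 0.06830624 * 0.5) * 0.5
    print(round(standards, 7), round(score, 8))  # ~0.1106488 ~0.07240096
    ```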
  2. Ng, K.B.; Kantor, P.B.: Two experiments on retrieval with corrupted data and clean queries in the TREC4 adhoc task environment : data fusion and pattern scanning (1996) 0.03
    0.033194643 = product of:
      0.13277857 = sum of:
        0.13277857 = weight(_text_:standards in 7571) [ClassicSimilarity], result of:
          0.13277857 = score(doc=7571,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.59091425 = fieldWeight in 7571, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.09375 = fieldNorm(doc=7571)
      0.25 = coord(1/4)
    
    Imprint
    Gaithersburg, MD : National Institute of Standards and Technology
  3. Boros, E.; Kantor, P.B.; Neu, D.J.: Pheromonic representation of user quests by digital structures (1999) 0.02
    0.019362666 = product of:
      0.03872533 = sum of:
        0.02102358 = weight(_text_:information in 6684) [ClassicSimilarity], result of:
          0.02102358 = score(doc=6684,freq=12.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23754507 = fieldWeight in 6684, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6684)
        0.017701752 = product of:
          0.035403505 = sum of:
            0.035403505 = weight(_text_:organization in 6684) [ClassicSimilarity], result of:
              0.035403505 = score(doc=6684,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19695997 = fieldWeight in 6684, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6684)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In a novel approach to information finding in networked environments, each user's specific purpose or "quest" can be represented in numerous ways. The most familiar is a list of keywords, or a natural language sentence or paragraph. More effective is an extended text that has been judged as to relevance. This forms the basis of relevance feedback, as it is used in information retrieval. In the "Ant World" project (Ant World, 1999; Kantor et al., 1999b; Kantor et al., 1999a), the items to be retrieved are not documents, but rather quests, represented by entire collections of judged documents. In order to save space and time we have developed methods for representing these complex entities in a short string of about 1,000 bytes, which we call a "Digital Information Pheromone" (DIP). The principles for determining the DIP for a given quest, and for matching DIPs to each other are presented. The effectiveness of this scheme is explored with some applications to the large judged collections of TREC documents
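    The abstract describes compressing an entire judged collection into a roughly 1,000-byte signature and matching such signatures against each other, but the record does not reproduce the actual DIP construction. The following is only an illustrative sketch under assumed choices (hashed term counts folded into 1,000 byte-sized slots, cosine matching):

    ```python
    import hashlib
    import math

    DIP_BYTES = 1000  # target signature size mentioned in the abstract

    def make_dip(judged_docs):
        """Fold the terms of a quest's judged documents into a fixed
        1,000-byte signature (hashed term counts, capped at 255)."""
        slots = [0] * DIP_BYTES
        for doc in judged_docs:
            for term in doc.lower().split():
                h = int(hashlib.md5(term.encode()).hexdigest(), 16) % DIP_BYTES
                slots[h] = min(slots[h] + 1, 255)
        return bytes(slots)

    def dip_similarity(a, b):
        """Cosine similarity between two DIP signatures."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    # Toy usage with two overlapping quests
    q1 = make_dip(["data fusion for trec routing", "judged relevant documents"])
    q2 = make_dip(["fusion of judged documents for routing"])
    print(round(dip_similarity(q1, q2), 3))
    ```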
    Imprint
    Medford, NJ : Information Today
    Series
    Proceedings of the American Society for Information Science; vol.36
    Source
    Knowledge: creation, organization and use. Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 31.10.-4.11.1999. Ed.: L. Woods
  4. Kantor, P.B.; Saracevic, T.: Quantitative study of the value of research libraries : a foundation for the evaluation of digital libraries (1999) 0.02
    0.018446784 = product of:
      0.03689357 = sum of:
        0.019191816 = weight(_text_:information in 6711) [ClassicSimilarity], result of:
          0.019191816 = score(doc=6711,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.21684799 = fieldWeight in 6711, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6711)
        0.017701752 = product of:
          0.035403505 = sum of:
            0.035403505 = weight(_text_:organization in 6711) [ClassicSimilarity], result of:
              0.035403505 = score(doc=6711,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19695997 = fieldWeight in 6711, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6711)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In anticipation of the explosive growth of digital libraries, a complex study was undertaken seeking to evaluate 21 diverse services at 5 major academic research libraries. This work stands as a model for evaluation of digital libraries, through its focus on both the costs of operations and the impacts of the services that those operations provide. The data have been analyzed using both statistical methods and methods of Data Envelopment Analysis. The results of the study, which are presented in detail, serve to demonstrate that a cross-functional approach to library services is feasible. They also highlight a new measure of impact, which is a weighted logarithmic combination of the amount of time that users spend interacting with the service, combined with a Likert-scale indication of the value of that service in relation to the time expended. The measure derived, incorporating simple information obtainable from the user, together with information which is readily available in server/client logs, provides an excellent foundation for transferring these measurement principles to the Digital Library environment
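    The impact measure is described here only qualitatively: a weighted logarithmic combination of time spent with a service and a Likert-scale rating of its value. A toy sketch of one such form, with the weighting and the exact functional form assumed rather than taken from the study:

    ```python
    import math

    def impact(minutes_spent, likert_value, weight=1.0):
        """Illustrative impact score: Likert rating scaled by the log of
        time spent with the service. The study's actual weights are not
        given in this record, so the form here is an assumption."""
        return weight * likert_value * math.log(1.0 + minutes_spent)

    # Toy usage: 30 minutes with a service the user rates 4 out of 5
    print(round(impact(30, 4), 2))
    ```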
    Imprint
    Medford, NJ : Information Today
    Series
    Proceedings of the American Society for Information Science; vol.36
    Source
    Knowledge: creation, organization and use. Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 31.10.-4.11.1999. Ed.: L. Woods
  5. Kantor, P.B.; Nordlie, R.: Models of the behavior of people searching the Internet : a Petri net approach (1999) 0.02
    0.017433718 = product of:
      0.034867436 = sum of:
        0.017165681 = weight(_text_:information in 6712) [ClassicSimilarity], result of:
          0.017165681 = score(doc=6712,freq=8.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.19395474 = fieldWeight in 6712, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6712)
        0.017701752 = product of:
          0.035403505 = sum of:
            0.035403505 = weight(_text_:organization in 6712) [ClassicSimilarity], result of:
              0.035403505 = score(doc=6712,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19695997 = fieldWeight in 6712, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6712)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Previous models of searching behavior have taken as their foundation the Markov model of random processes. In this model, the next action that a user takes is determined by a probabilistic rule which is conditioned by the most recent experiences of the user. This model, which has achieved very limited success in describing real data, is at odds with the evidence of introspection in a crucial way. Introspection reveals that when we search we are, more or less, in a state of expectancy, which can be satisfied in a number of ways. In addition, the state can be modified by the accumulated evidence of our searches. The Markov model approach can not readily accommodate such persistence of intention and behavior. The Petri Net model, which has been developed to analyze the interdependencies among events in a communications network, can be adapted to this situation. In this adaptation, the so-called "transitions" of the Petri Net occur only when their necessary pre-conditions have been met. We are able to show that various key abstractions of information finding, such as "document relevance", "a desired number of relevant documents", "discouragement", "exhaustion" and "satisfaction" can all be modeled using the Petri Net framework. Further, we show that this model leads naturally to a new approach to the collection of user data, and to the analysis of transaction logs, by providing a far richer description of the user's present state, without inducing a combinatorial explosion
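    Since the record gives only this verbal description, the following is a minimal, hypothetical place/transition sketch of the central idea that a transition fires only when its pre-conditions hold. The place and transition names echo the abstractions listed in the abstract but are not taken from the paper:

    ```python
    class PetriNet:
        """Tiny place/transition net: a transition may fire only when every
        one of its input places holds a token (its pre-conditions are met)."""

        def __init__(self):
            self.marking = {}      # place -> number of tokens
            self.transitions = {}  # name -> (input places, output places)

        def add_tokens(self, place, n=1):
            self.marking[place] = self.marking.get(place, 0) + n

        def add_transition(self, name, inputs, outputs):
            self.transitions[name] = (list(inputs), list(outputs))

        def fire(self, name):
            inputs, outputs = self.transitions[name]
            if all(self.marking.get(p, 0) > 0 for p in inputs):
                for p in inputs:
                    self.marking[p] -= 1
                for p in outputs:
                    self.add_tokens(p)
                return True
            return False

    # Hypothetical search-session net using the abstractions named above.
    net = PetriNet()
    net.add_transition("judge_relevant", ["doc_viewed"], ["relevant_found"])
    net.add_transition("satisfied", ["relevant_found", "enough_found"], ["session_over"])
    net.add_transition("give_up", ["discouragement"], ["session_over"])

    net.add_tokens("doc_viewed", 2)
    print(net.fire("judge_relevant"))  # True: a viewed document is judged relevant
    print(net.fire("satisfied"))       # False: desired number of relevant documents not yet reached
    print(net.fire("give_up"))         # False: no discouragement has accumulated
    ```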
    Imprint
    Medford, NJ : Information Today
    Series
    Proceedings of the American Society for Information Science; vol.36
    Source
    Knowledge: creation, organization and use. Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 31.10.-4.11.1999. Ed.: L. Woods
  6. Shim, W.; Kantor, P.B.: Evaluation of digital libraries : a DEA approach (1999) 0.02
    0.016283836 = product of:
      0.032567672 = sum of:
        0.014865918 = weight(_text_:information in 6701) [ClassicSimilarity], result of:
          0.014865918 = score(doc=6701,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16796975 = fieldWeight in 6701, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6701)
        0.017701752 = product of:
          0.035403505 = sum of:
            0.035403505 = weight(_text_:organization in 6701) [ClassicSimilarity], result of:
              0.035403505 = score(doc=6701,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19695997 = fieldWeight in 6701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6701)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Imprint
    Medford, NJ : Information Today
    Series
    Proceedings of the American Society for Information Science; vol.36
    Source
    Knowledge: creation, organization and use. Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 31.10.-4.11.1999. Ed.: L. Woods
  7. Saracevic, T.; Kantor, P.B.: Studying the value of library and information services : Part I: Establishing a theoretical framework (1997) 0.01
    0.008142399 = product of:
      0.032569595 = sum of:
        0.032569595 = weight(_text_:information in 352) [ClassicSimilarity], result of:
          0.032569595 = score(doc=352,freq=20.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.36800325 = fieldWeight in 352, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=352)
      0.25 = coord(1/4)
    
    Abstract
    Discusses underlying concepts related to value that must be clarified in order to proceed with any pragmatic study of value, and establishes a theory of use-oriented value of information and information services. Examines the notion of value in philosophy and economics and in relation to library and information services as well as the connection between value and relevance. Develops 2 models: one related to use of information and the other to use of library and information services. They are a theoretical framework for pragmatic study of value and a guide for the development of a Derived Taxonomy of Value in Using Library and Information Services
    Footnote
    1st part of a study to develop a taxonomy of value-in-use of library and information services based on users' assessments and to propose methods and instruments for similar studies of library and information services in general
    Source
    Journal of the American Society for Information Science. 48(1997) no.6, S.527-542
  8. Kantor, P.B.: Information retrieval techniques (1994) 0.01
    0.006812419 = product of:
      0.027249675 = sum of:
        0.027249675 = weight(_text_:information in 1056) [ClassicSimilarity], result of:
          0.027249675 = score(doc=1056,freq=14.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3078936 = fieldWeight in 1056, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1056)
      0.25 = coord(1/4)
    
    Abstract
    State-of-the-art review of information retrieval techniques viewed in terms of the growing effort to implement concept-based retrieval in content-based algorithms. Identifies trends in the automation of indexing, retrieval, and the interaction between systems and users. Identifies 3 central issues: ways in which systems describe documents for purposes of information retrieval; ways in which systems compute the degree of match between a given document and the current state of the query; and what the systems do with the information that they obtain from the users. Looks at information retrieval techniques in terms of: location, navigation; indexing; documents; queries; structures; concepts; matching documents to queries; restoring query structure; algorithms and content versus concepts; formulation of concepts in terms of contents; formulation of concepts with the assistance of the users; complex system codes versus underlying principles; and system evaluation
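    To make the second of the three issues concrete, here is a small illustrative sketch (not from the review itself) of computing a degree of match after expanding query terms through a hypothetical concept map, i.e. concept-based retrieval realized over content terms:

    ```python
    # Hypothetical concept map: each concept names a set of content terms.
    CONCEPTS = {"retrieval": {"retrieval", "search", "finding"},
                "indexing": {"indexing", "index", "cataloguing"}}

    def expand(query_terms):
        """Grow the query with all terms belonging to a matched concept."""
        expanded = set(query_terms)
        for term in query_terms:
            for members in CONCEPTS.values():
                if term in members:
                    expanded |= members
        return expanded

    def degree_of_match(query_terms, doc_text):
        """Fraction of expanded query terms present in the document."""
        doc_terms = set(doc_text.lower().split())
        q = expand({t.lower() for t in query_terms})
        return len(q & doc_terms) / len(q) if q else 0.0

    print(degree_of_match(["retrieval"], "automated search and indexing of documents"))
    ```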
    Imprint
    Medford, NJ : Learned Information Inc.
    Source
    Annual review of information science and technology. 29(1994), S.53-90
  9. Saracevic, T.; Kantor, P.B.: Studying the value of library and information services : Part II: Methodology and taxonomy (1997) 0.01
    0.005757545 = product of:
      0.02303018 = sum of:
        0.02303018 = weight(_text_:information in 353) [ClassicSimilarity], result of:
          0.02303018 = score(doc=353,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.2602176 = fieldWeight in 353, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=353)
      0.25 = coord(1/4)
    
    Abstract
    Details the specifics of the study: importance of taxonomy; the method used for gathering data on user assessments of value in 5 research libraries, involving 18 services and 528 interviews with users; development and presentation of the taxonomy; and statistics and tests of the taxonomy. A novel aspect is the division of value of information services into 3 general classes or facets: reasons for use of a service in the given instance; quality of interaction (use) related to that service; and worth, benefits, or implications of subsequent results from use
    Footnote
    2nd part of a study to develop a taxonomy of value-in-use of library and information services based on users' assessments and to propose methods and instruments for similar studies of library and information services in general
    Source
    Journal of the American Society for Information Science. 48(1997) no.6, S.543-563
  10. Kantor, P.B.; Lee, J.J.: Testing the maximum entropy principle for information retrieval (1998) 0.00
    0.004855188 = product of:
      0.019420752 = sum of:
        0.019420752 = weight(_text_:information in 3266) [ClassicSimilarity], result of:
          0.019420752 = score(doc=3266,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.21943474 = fieldWeight in 3266, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3266)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 49(1998) no.6, S.557-566
  11. Kantor, P.B.: ¬The Adaptive Network Library Interface : a historical overview and interim report (1993) 0.00
    0.0030039945 = product of:
      0.012015978 = sum of:
        0.012015978 = weight(_text_:information in 6976) [ClassicSimilarity], result of:
          0.012015978 = score(doc=6976,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.13576832 = fieldWeight in 6976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6976)
      0.25 = coord(1/4)
    
    Abstract
    Describes the evolution of the concept of an Adaptive Network Library Interface (ANLI) and explores several technical and research issues. The ANLI is a computer program that stands as a buffer between users of the library catalogue and the catalogue itself. This buffer unit maintains its own network of pointers from book to book, which it elicits from the users, interactively. It is hoped that such a buffer increases the value of the catalogue for users and provides librarians with new and useful information about the books in the collection. Explores the relationship between this system and hypertext and neural networks
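    A minimal sketch of the buffer idea described above, with assumed names and structure: a book-to-book pointer network built from user-elicited links and queried for related books. This is illustrative only, not the actual ANLI implementation:

    ```python
    from collections import defaultdict

    class ANLIBuffer:
        """Sits between the user and the catalogue, records book-to-book
        pointers volunteered by users, and suggests related books from
        that network (structure and names are assumptions)."""

        def __init__(self):
            self.links = defaultdict(lambda: defaultdict(int))  # book -> related book -> votes

        def elicit_link(self, from_book, to_book):
            """Record one user's pointer from one book to another."""
            self.links[from_book][to_book] += 1

        def related(self, book, top_n=3):
            """Return the books most often linked from the given one."""
            ranked = sorted(self.links[book].items(), key=lambda kv: -kv[1])
            return [b for b, _ in ranked[:top_n]]

    # Toy usage with hypothetical titles
    anli = ANLIBuffer()
    anli.elicit_link("Modern Information Retrieval", "Managing Gigabytes")
    anli.elicit_link("Modern Information Retrieval", "Managing Gigabytes")
    anli.elicit_link("Modern Information Retrieval", "Automatic Text Processing")
    print(anli.related("Modern Information Retrieval"))
    ```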