Search (9 results, page 1 of 1)

  • author_ss:"Kantor, P.B."
  1. Kantor, P.B.: The Adaptive Network Library Interface : a historical overview and interim report (1993) 0.06
    0.055580065 = product of:
      0.1667402 = sum of:
        0.046679016 = weight(_text_:computer in 6976) [ClassicSimilarity], result of:
          0.046679016 = score(doc=6976,freq=2.0), product of:
            0.16515417 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.045191888 = queryNorm
            0.28263903 = fieldWeight in 6976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6976)
        0.12006118 = weight(_text_:network in 6976) [ClassicSimilarity], result of:
          0.12006118 = score(doc=6976,freq=6.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.59655833 = fieldWeight in 6976, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6976)
      0.33333334 = coord(2/6)
    
    Abstract
    Describes the evolution of the concept of an Adaptive Network Library Interface (ANLI) and explores several technical and research issues. The ANLI is a computer program that stands as a buffer between users of the library catalogue and the catalogue itself. This buffer unit maintains its own network of pointers from book to book, which it elicits interactively from users. It is hoped that such a buffer increases the value of the catalogue for users and provides librarians with new and useful information about the books in the collection. Explores the relationship of this system to hypertext and to neural networks.
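    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm; the per-term contributions are summed and multiplied by the coordination factor coord(matching terms / query terms). A minimal sketch that reproduces the figures shown for this record (the constants are copied from the explanation tree; the helper function is illustrative, not a Lucene API):

```python
from math import sqrt

def term_weight(freq, idf, query_norm, field_norm):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
    query_weight = idf * query_norm                 # idf(t) * queryNorm
    field_weight = sqrt(freq) * idf * field_norm    # tf(freq) * idf(t) * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.045191888   # queryNorm from the explanation tree
FIELD_NORM = 0.0546875     # fieldNorm(doc=6976)

computer = term_weight(freq=2.0, idf=3.6545093,
                       query_norm=QUERY_NORM, field_norm=FIELD_NORM)
network = term_weight(freq=6.0, idf=4.4533744,
                      query_norm=QUERY_NORM, field_norm=FIELD_NORM)

coord = 2 / 6  # 2 of 6 query terms matched in this document
score = (computer + network) * coord

print(computer)  # ~0.046679016
print(network)   # ~0.12006118
print(score)     # ~0.055580065
```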
  2. Saracevic, T.; Kantor, P.B.: Studying the value of library and information services : Part I: Establishing a theoretical framework (1997) 0.02
    0.017806191 = product of:
      0.106837146 = sum of:
        0.106837146 = weight(_text_:services in 352) [ClassicSimilarity], result of:
          0.106837146 = score(doc=352,freq=14.0), product of:
            0.16591617 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.045191888 = queryNorm
            0.64392245 = fieldWeight in 352, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=352)
      0.16666667 = coord(1/6)
    
    Abstract
    Discusses underlying concepts related to value that must be clarified in order to proceed with any pragmatic study of value, and establishes a theory of use-oriented value of information and information services. Examines the notion of value in philosophy and economics and in relation to library and information services, as well as the connection between value and relevance. Develops 2 models: one related to the use of information and the other to the use of library and information services. Together they provide a theoretical framework for the pragmatic study of value and a guide for the development of a Derived Taxonomy of Value in Using Library and Information Services.
    Footnote
    1st part of a study to develop a taxonomy of value-in-use of library and information services based on users' assessments and to propose methods and instruments for similar studies of library and information services in general
  3. Saracevic, T.; Kantor, P.B.: Studying the value of library and information services : Part II: Methodology and taxonomy (1997) 0.02
    0.015048979 = product of:
      0.09029387 = sum of:
        0.09029387 = weight(_text_:services in 353) [ClassicSimilarity], result of:
          0.09029387 = score(doc=353,freq=10.0), product of:
            0.16591617 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.045191888 = queryNorm
            0.5442138 = fieldWeight in 353, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.046875 = fieldNorm(doc=353)
      0.16666667 = coord(1/6)
    
    Abstract
    Deals with the specifics of the study: the importance of the taxonomy; the method used for gathering data on user assessments of value in 5 research libraries, involving 18 services and 528 interviews with users; the development and presentation of the taxonomy; and statistics and tests of the taxonomy. A novel aspect is the division of the value of information services into 3 general classes or facets: reasons for use of a service in the given instance; quality of interaction (use) related to that service; and the worth, benefits, or implications of the subsequent results of use.
    Footnote
    2nd part of a study to develop a taxonomy of value-in-use of library and information services based on users' assessments and to propose methods and instruments for similar studies of library and information services in general
  4. Kantor, P.B.; Saracevic, T.: Quantitative study of the value of research libraries : a foundation for the evaluation of digital libraries (1999) 0.01
    0.009714074 = product of:
      0.058284443 = sum of:
        0.058284443 = weight(_text_:services in 6711) [ClassicSimilarity], result of:
          0.058284443 = score(doc=6711,freq=6.0), product of:
            0.16591617 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.045191888 = queryNorm
            0.3512885 = fieldWeight in 6711, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6711)
      0.16666667 = coord(1/6)
    
    Abstract
    In anticipation of the explosive growth of digital libraries, a complex study was undertaken seeking to evaluate 21 diverse services at 5 major academic research libraries. This work stands as a model for the evaluation of digital libraries through its focus on both the costs of operations and the impacts of the services that those operations provide. The data have been analyzed using both statistical methods and methods of Data Envelopment Analysis. The results of the study, which are presented in detail, demonstrate that a cross-functional approach to library services is feasible. They also highlight a new measure of impact: a weighted logarithmic combination of the amount of time that users spend interacting with the service and a Likert-scale indication of the value of that service in relation to the time expended. The measure derived, incorporating simple information obtainable from the user together with information that is readily available in server/client logs, provides an excellent foundation for transferring these measurement principles to the digital library environment.
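    The abstract does not spell out the form of the "weighted logarithmic combination"; purely as an illustration, one plausible reading is a Likert rating applied as a weight to the logarithm of interaction time. A hypothetical sketch (the functional form, weight, and names are assumptions, not taken from the paper):

```python
from math import log

def impact(interaction_minutes: float, likert_value: int, weight: float = 1.0) -> float:
    """Hypothetical impact measure: Likert-weighted log of time spent.

    The paper only describes "a weighted logarithmic combination" of
    interaction time and a Likert-scale value rating; this exact
    formula is an assumption made for illustration.
    """
    return weight * likert_value * log(1.0 + interaction_minutes)

# Example: a user spends 20 minutes with a service and rates it 4 of 5.
print(impact(20.0, 4))
```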
  5. Kantor, P.B.; Nordlie, R.: Models of the behavior of people searching the Internet : a Petri net approach (1999) 0.01
    0.008252066 = product of:
      0.049512394 = sum of:
        0.049512394 = weight(_text_:network in 6712) [ClassicSimilarity], result of:
          0.049512394 = score(doc=6712,freq=2.0), product of:
            0.2012564 = queryWeight, product of:
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.045191888 = queryNorm
            0.2460165 = fieldWeight in 6712, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4533744 = idf(docFreq=1398, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6712)
      0.16666667 = coord(1/6)
    
    Abstract
    Previous models of searching behavior have taken as their foundation the Markov model of random processes. In this model, the next action that a user takes is determined by a probabilistic rule which is conditioned by the most recent experiences of the user. This model, which has achieved very limited success in describing real data, is at odds with the evidence of introspection in a crucial way. Introspection reveals that when we search we are, more or less, in a state of expectancy, which can be satisfied in a number of ways. In addition, the state can be modified by the accumulated evidence of our searches. The Markov model approach cannot readily accommodate such persistence of intention and behavior. The Petri net model, which was developed to analyze the interdependencies among events in a communications network, can be adapted to this situation. In this adaptation, the so-called "transitions" of the Petri net occur only when their necessary preconditions have been met. We are able to show that various key abstractions of information finding, such as "document relevance", "a desired number of relevant documents", "discouragement", "exhaustion", and "satisfaction", can all be modeled using the Petri net framework. Further, we show that this model leads naturally to a new approach to the collection of user data and to the analysis of transaction logs, by providing a far richer description of the user's present state without inducing a combinatorial explosion.
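    As an illustration of the general mechanism rather than the authors' specific model, a Petri net can be coded as places holding tokens and transitions that fire only when every input place is marked; abstractions such as "discouragement" or "satisfaction" then become places whose marking records the searcher's state. A minimal sketch with hypothetical place and transition names:

```python
class PetriNet:
    def __init__(self, places):
        # marking: number of tokens currently in each place
        self.marking = dict(places)
        self.transitions = {}

    def add_transition(self, name, inputs, outputs):
        # a transition consumes one token from each input place and
        # produces one token in each output place
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Hypothetical searcher model: an expectancy token plus evidence of a
# relevant document allows the "finish" transition to fire.
net = PetriNet({"expectancy": 1, "relevant_doc": 0,
                "discouraged": 0, "satisfied": 0})
net.add_transition("find_relevant", inputs=["expectancy"],
                   outputs=["expectancy", "relevant_doc"])
net.add_transition("give_up", inputs=["expectancy"], outputs=["discouraged"])
net.add_transition("finish", inputs=["expectancy", "relevant_doc"],
                   outputs=["satisfied"])

net.fire("find_relevant")
net.fire("finish")
print(net.marking)  # satisfied ends with 1 token; expectancy is consumed
```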
  6. Kantor, P.B.: Mathematical models in information science (2002) 0.01
    0.007143357 = product of:
      0.04286014 = sum of:
        0.04286014 = product of:
          0.08572028 = sum of:
            0.08572028 = weight(_text_:22 in 4112) [ClassicSimilarity], result of:
              0.08572028 = score(doc=4112,freq=2.0), product of:
                0.1582543 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191888 = queryNorm
                0.5416616 = fieldWeight in 4112, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4112)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Bulletin of the American Society for Information Science. 28(2002) no.6, pp.22-24
  7. Shim, W.; Kantor, P.B.: Evaluation of digital libraries : a DEA approach (1999) 0.01
    0.0056084236 = product of:
      0.03365054 = sum of:
        0.03365054 = weight(_text_:services in 6701) [ClassicSimilarity], result of:
          0.03365054 = score(doc=6701,freq=2.0), product of:
            0.16591617 = queryWeight, product of:
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.045191888 = queryNorm
            0.2028165 = fieldWeight in 6701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6713707 = idf(docFreq=3057, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6701)
      0.16666667 = coord(1/6)
    
    Abstract
    As libraries evolve from paper-based to digitized collections, traditional measurement activities must change. To demonstrate the growth in library value during this transition period, libraries must be able to describe how library inputs are transformed into the services libraries render. We apply a complex tool, data envelopment analysis (DEA), to evaluate the relative efficiency of major academic research libraries that are members of the Association of Research Libraries (ARL). An efficient library is defined as one which produces the same output with less input or, for a given input, produces more output. We report the results of a two-year baseline study using traditional measures taken from 1995-1996 ARL statistics. We observe the patterns of efficiency scores of both individual libraries and libraries in peer groups (private vs. public). In particular, we study the consistency over the years of specific DEA measures. This consistency provides justification for extending DEA as libraries undergo revolutionary digital transformation. The results are also corroborated using standard statistical measures. DEA application in the new digital library environment is discussed.
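    Data envelopment analysis scores each unit by solving a small linear program against all units. A minimal sketch of the standard input-oriented CCR envelopment model (the study's exact DEA variant, inputs, and outputs are not given here; the library data below are made up for illustration), using scipy:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency of one unit.

    inputs:  (n_units, n_inputs) array, outputs: (n_units, n_outputs) array.
    Returns theta in (0, 1]; theta == 1 means the unit lies on the frontier.
    Decision variables: [theta, lambda_1, ..., lambda_n].
    """
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                 # sum_j lambda_j * x_ij <= theta * x_i,unit
        A_ub.append(np.r_[-X[unit, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(Y.shape[1]):                 # sum_j lambda_j * y_rj >= y_r,unit
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[unit, r])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0]

# Made-up example: 4 libraries, inputs = (staff, budget in $M),
# outputs = (circulation, reference questions), both in thousands.
X = [[30, 2.0], [45, 3.5], [25, 1.8], [60, 5.0]]
Y = [[120, 40], [150, 55], [110, 38], [160, 50]]
for j in range(4):
    print(f"library {j}: efficiency = {dea_efficiency(X, Y, j):.3f}")
```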
  8. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.01
    0.005102398 = product of:
      0.030614385 = sum of:
        0.030614385 = product of:
          0.06122877 = sum of:
            0.06122877 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.06122877 = score(doc=3107,freq=2.0), product of:
                0.1582543 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191888 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    27. 2.1999 20:59:22
  9. Elovici, Y.; Shapira, Y.B.; Kantor, P.B.: A decision theoretic approach to combining information filters : an analytical and empirical evaluation (2006) 0.00
    0.0035716784 = product of:
      0.02143007 = sum of:
        0.02143007 = product of:
          0.04286014 = sum of:
            0.04286014 = weight(_text_:22 in 5267) [ClassicSimilarity], result of:
              0.04286014 = score(doc=5267,freq=2.0), product of:
                0.1582543 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045191888 = queryNorm
                0.2708308 = fieldWeight in 5267, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5267)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    22. 7.2006 15:05:39