Search (6 results, page 1 of 1)

  • author_ss:"Kantor, P.B."
  1. Kantor, P.B.: Mathematical models in information science (2002) 0.01
    0.011605277 = product of:
      0.03481583 = sum of:
        0.03481583 = product of:
          0.06963166 = sum of:
            0.06963166 = weight(_text_:22 in 4112) [ClassicSimilarity], result of:
              0.06963166 = score(doc=4112,freq=2.0), product of:
                0.12855195 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03670994 = queryNorm
                0.5416616 = fieldWeight in 4112, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4112)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
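    The explanation tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown. As a minimal sketch, the numbers for hit 1 can be reproduced from the standard formula, where idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq); the coord factors scale the score by the fraction of query clauses matched:

    ```python
    import math

    # Reproduce the ClassicSimilarity explain tree for term "22" in doc 4112.
    doc_freq, max_docs = 3622, 44218
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ≈ 3.5018296
    tf = math.sqrt(2.0)                              # tf(freq=2.0) = sqrt(freq)
    query_norm = 0.03670994
    field_norm = 0.109375                            # fieldNorm(doc=4112)

    query_weight = idf * query_norm            # ≈ 0.12855195
    field_weight = tf * idf * field_norm       # ≈ 0.5416616
    term_score = query_weight * field_weight   # ≈ 0.06963166

    # coord(1/2) and coord(1/3): 1 of 2, then 1 of 3, query clauses matched
    final_score = term_score * (1 / 2) * (1 / 3)  # ≈ 0.011605
    ```

    The same arithmetic, with different freq, idf, and fieldNorm values, accounts for every score tree in this result list.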
    
    Source
    Bulletin of the American Society for Information Science. 28(2002) no.6, pp.22-24
  2. Kantor, P.B.; Saracevic, T.: Quantitative study of the value of research libraries : a foundation for the evaluation of digital libraries (1999) 0.01
    0.01051799 = product of:
      0.03155397 = sum of:
        0.03155397 = product of:
          0.06310794 = sum of:
            0.06310794 = weight(_text_:digital in 6711) [ClassicSimilarity], result of:
              0.06310794 = score(doc=6711,freq=8.0), product of:
                0.14480425 = queryWeight, product of:
                  3.944552 = idf(docFreq=2326, maxDocs=44218)
                  0.03670994 = queryNorm
                0.4358155 = fieldWeight in 6711, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.944552 = idf(docFreq=2326, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6711)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In anticipation of the explosive growth of digital libraries, a complex study was undertaken to evaluate 21 diverse services at 5 major academic research libraries. This work stands as a model for the evaluation of digital libraries through its focus on both the costs of operations and the impacts of the services that those operations provide. The data were analyzed using both statistical methods and methods of Data Envelopment Analysis. The results of the study, which are presented in detail, demonstrate that a cross-functional approach to library services is feasible. They also highlight a new measure of impact: a weighted logarithmic combination of the amount of time users spend interacting with a service and a Likert-scale indication of that service's value in relation to the time expended. The derived measure, which incorporates simple information obtainable from the user together with information readily available in server/client logs, provides an excellent foundation for transferring these measurement principles to the digital library environment.
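    The abstract describes the impact measure only qualitatively. A hypothetical rendering of "a weighted logarithmic combination of time spent and a Likert-scale value rating" might look like the following; the function name, weights, and functional form are assumptions for illustration, not the paper's actual definition:

    ```python
    import math

    def impact(minutes_spent, likert_value, w_time=1.0, w_value=1.0):
        """Hypothetical impact score combining the log of interaction
        time with a 1-5 Likert rating of the service's value.
        The actual weighting in Kantor & Saracevic (1999) may differ."""
        return w_time * math.log(1.0 + minutes_spent) + w_value * likert_value
    ```

    The logarithm damps the influence of very long sessions, so the Likert rating is not swamped by raw time-on-task.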
  3. Shim, W.; Kantor, P.B.: Evaluation of digital libraries : a DEA approach (1999) 0.01
    0.009108848 = product of:
      0.027326543 = sum of:
        0.027326543 = product of:
          0.054653086 = sum of:
            0.054653086 = weight(_text_:digital in 6701) [ClassicSimilarity], result of:
              0.054653086 = score(doc=6701,freq=6.0), product of:
                0.14480425 = queryWeight, product of:
                  3.944552 = idf(docFreq=2326, maxDocs=44218)
                  0.03670994 = queryNorm
                0.37742734 = fieldWeight in 6701, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.944552 = idf(docFreq=2326, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6701)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    As libraries evolve from paper-based to digitized collections, traditional measurement activities must change. To demonstrate the growth in library value during this transition period, libraries must be able to describe how library inputs are transformed into the services libraries render. We apply a complex tool, data envelopment analysis (DEA), to evaluate the relative efficiency of major academic research libraries that are members of the Association of Research Libraries (ARL). An efficient library is defined as one that produces the same output with less input or, for a given input, produces more output. We report the results of a two-year baseline study using traditional measures taken from 1995-1996 ARL statistics. We observe the patterns of efficiency scores of both individual libraries and libraries in peer groups (private vs. public). In particular, we study the year-to-year consistency of specific DEA measures. This consistency provides justification for extending DEA as libraries undergo a revolutionary digital transformation. The results are also corroborated using standard statistical measures. DEA application in the new digital library environment is discussed.
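    The efficiency notion in the abstract can be illustrated in the simplest possible setting. With a single input and a single output, DEA (CCR) efficiency reduces to each library's output/input ratio divided by the best ratio in the peer group; the full multi-input, multi-output model in the paper requires linear programming. The library names and figures below are hypothetical:

    ```python
    # Single-input, single-output DEA efficiency: ratio to the best performer.
    libraries = {
        "A": {"input": 100.0, "output": 80.0},
        "B": {"input": 120.0, "output": 120.0},
        "C": {"input": 90.0, "output": 45.0},
    }

    best_ratio = max(v["output"] / v["input"] for v in libraries.values())

    efficiency = {
        name: (v["output"] / v["input"]) / best_ratio
        for name, v in libraries.items()
    }
    # "B" attains the best ratio and scores 1.0 (efficient);
    # "A" and "C" score below 1.0 relative to that frontier.
    ```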
  4. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.01
    0.008289483 = product of:
      0.02486845 = sum of:
        0.02486845 = product of:
          0.0497369 = sum of:
            0.0497369 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.0497369 = score(doc=3107,freq=2.0), product of:
                0.12855195 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03670994 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 2.1999 20:59:22
  5. Boros, E.; Kantor, P.B.; Neu, D.J.: Pheromonic representation of user quests by digital structures (1999) 0.01
    0.0074373423 = product of:
      0.022312026 = sum of:
        0.022312026 = product of:
          0.044624053 = sum of:
            0.044624053 = weight(_text_:digital in 6684) [ClassicSimilarity], result of:
              0.044624053 = score(doc=6684,freq=4.0), product of:
                0.14480425 = queryWeight, product of:
                  3.944552 = idf(docFreq=2326, maxDocs=44218)
                  0.03670994 = queryNorm
                0.3081681 = fieldWeight in 6684, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.944552 = idf(docFreq=2326, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6684)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In a novel approach to information finding in networked environments, each user's specific purpose or "quest" can be represented in numerous ways. The most familiar is a list of keywords, or a natural-language sentence or paragraph. More effective is an extended text that has been judged as to relevance; this forms the basis of relevance feedback as it is used in information retrieval. In the "Ant World" project (Ant World, 1999; Kantor et al., 1999b; Kantor et al., 1999a), the items to be retrieved are not documents but rather quests, represented by entire collections of judged documents. To save space and time, we have developed methods for representing these complex entities in a short string of about 1,000 bytes, which we call a "Digital Information Pheromone" (DIP). The principles for determining the DIP for a given quest, and for matching DIPs to each other, are presented. The effectiveness of this scheme is explored with some applications to the large judged collections of TREC documents.
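    The paper's DIP construction is not reproduced here. A minimal stand-in that captures the core idea — compressing an entire judged collection into a fixed-length, comparable signature of about 1,000 bytes — might hash the collection's terms into a byte-bucket histogram and compare quests by dot product; all function names and details below are illustrative assumptions:

    ```python
    import hashlib

    SIG_BYTES = 1000  # the ~1,000-byte budget mentioned in the abstract

    def signature(judged_docs):
        """Hash the terms of a judged collection into a fixed-length
        byte-bucket histogram (an illustrative stand-in for a DIP)."""
        sig = [0] * SIG_BYTES
        for doc in judged_docs:
            for term in doc.lower().split():
                bucket = int.from_bytes(
                    hashlib.md5(term.encode()).digest()[:4], "big") % SIG_BYTES
                sig[bucket] = min(sig[bucket] + 1, 255)  # cap at one byte
        return bytes(sig)

    def match(sig_a, sig_b):
        """Dot-product similarity between two fixed-length signatures."""
        return sum(a * b for a, b in zip(sig_a, sig_b))
    ```

    Quests whose judged collections share vocabulary land in the same buckets and score higher against each other than against unrelated quests, while each signature stays exactly SIG_BYTES long regardless of collection size.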
  6. Elovici, Y.; Shapira, Y.B.; Kantor, P.B.: A decision-theoretic approach to combining information filters : an analytical and empirical evaluation (2006) 0.01
    0.0058026384 = product of:
      0.017407915 = sum of:
        0.017407915 = product of:
          0.03481583 = sum of:
            0.03481583 = weight(_text_:22 in 5267) [ClassicSimilarity], result of:
              0.03481583 = score(doc=5267,freq=2.0), product of:
                0.12855195 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03670994 = queryNorm
                0.2708308 = fieldWeight in 5267, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5267)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 15:05:39