Search (26 results, page 2 of 2)

  • author_ss:"Losee, R.M."
  1. Losee, R.M.: A discipline independent definition of information (1997) 0.00
    0.0031204377 = product of:
      0.02808394 = sum of:
        0.02808394 = weight(_text_:of in 380) [ClassicSimilarity], result of:
          0.02808394 = score(doc=380,freq=22.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.458417 = fieldWeight in 380, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=380)
      0.11111111 = coord(1/9)
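    The indented trace above is Lucene "explain" output for the ClassicSimilarity (TF-IDF) scorer; the same structure repeats under every hit below. As a minimal sketch, assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)) (the variable names below are ours, not part of the catalog output), the score of hit 1 can be reproduced in Python:

      import math

      # Values copied from the explain trace of hit 1 (doc 380).
      freq, doc_freq, max_docs = 22.0, 25162, 44218
      query_norm, field_norm = 0.03917671, 0.0625
      coord = 1.0 / 9.0                              # 1 of 9 query clauses matched

      tf = math.sqrt(freq)                           # 4.690416
      idf = 1 + math.log(max_docs / (doc_freq + 1))  # 1.5637573
      query_weight = idf * query_norm                # 0.061262865
      field_weight = tf * idf * field_norm           # 0.458417
      score = coord * query_weight * field_weight

      print(f"{score:.10f}")                         # ~0.0031204377, as shown above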
    
    Abstract
    Information may be defined as the characteristics of the output of a process, these being informative about the process and the input. This discipline-independent definition may be applied to all domains, from physics to epistemology. Hierarchies of processes linked together provide a communication channel between each of the corresponding functions and layers in the hierarchies. Models of communication, perception, observation, belief, and knowledge are suggested that are consistent with this conceptual framework of information as the value of the output of any process in a hierarchy of processes. Misinformation and errors are considered.
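    As an illustration of this framework (a toy sketch of ours, not from the paper; all names are hypothetical), each process maps an input to an output whose characteristics are informative about both the process and its input, and chained processes form a layered channel:

      # Toy sketch: information as characteristics of each process's output,
      # with chained processes forming a layered channel.
      from typing import Callable, List

      Process = Callable[[str], str]

      def channel(layers: List[Process], message: str) -> List[str]:
          """Run a message through a hierarchy of processes, keeping each
          layer's output; each output carries information about its
          process and its input."""
          outputs = []
          for process in layers:
              message = process(message)
              outputs.append(message)
          return outputs

      # Hypothetical layers: encoding, a noisy medium, decoding.
      layers = [str.upper, lambda s: s.replace("E", "3"), str.lower]
      print(channel(layers, "message"))  # ['MESSAGE', 'M3SSAG3', 'm3ssag3']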
    Source
    Journal of the American Society for Information Science. 48(1997) no.3, S.254-269
  2. Spink, A.; Losee, R.M.: Feedback in information retrieval (1996) 0.00
    0.0028225419 = product of:
      0.025402876 = sum of:
        0.025402876 = weight(_text_:of in 7441) [ClassicSimilarity], result of:
          0.025402876 = score(doc=7441,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.41465375 = fieldWeight in 7441, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7441)
      0.11111111 = coord(1/9)
    
    Abstract
    A state-of-the-art review of the mechanisms of feedback in information retrieval (IR) in terms of feedback concepts and models in cybernetics and the social sciences. Critically evaluates feedback research based on the traditional IR models, comparing the different approaches to automatic relevance feedback techniques, and reviews feedback research within the framework of interactive IR models. Calls for an extension of the concept of feedback beyond relevance feedback to interactive feedback. Cites specific examples of feedback models used within IR research and presents six challenges for future research.
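    As one concrete example of the automatic relevance feedback techniques such a review compares, here is a minimal Rocchio-style update (a classic technique, sketched by us; the weights and vectors are conventional defaults and hypothetical data, not values from the article):

      import numpy as np

      def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
          """Minimal Rocchio relevance-feedback sketch: move the query vector
          toward relevant documents and away from non-relevant ones."""
          q = alpha * np.asarray(query, dtype=float)
          if len(relevant):
              q += beta * np.mean(relevant, axis=0)
          if len(nonrelevant):
              q -= gamma * np.mean(nonrelevant, axis=0)
          return np.clip(q, 0.0, None)  # negative term weights are usually dropped

      # Hypothetical 4-term vocabulary.
      q0 = [1.0, 0.0, 0.0, 0.0]
      rel = np.array([[0.0, 1.0, 1.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
      nonrel = np.array([[0.0, 0.0, 0.0, 1.0]])
      print(rocchio(q0, rel, nonrel))  # [1.    0.75  0.375 0.   ]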
    Source
    Annual review of information science and technology. 31(1996), S.33-78
  3. Losee, R.M.: Evaluating retrieval performance given database and query characteristics : analytic determination of performance surfaces (1996) 0.00
    0.0024443932 = product of:
      0.021999538 = sum of:
        0.021999538 = weight(_text_:of in 4162) [ClassicSimilarity], result of:
          0.021999538 = score(doc=4162,freq=24.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.3591007 = fieldWeight in 4162, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4162)
      0.11111111 = coord(1/9)
    
    Abstract
    An analytic method of information retrieval and filtering evaluation can quantitatively predict the expected number of documents examined in retrieving a relevant document. It also allows researchers and practitioners to understand qualitatively how varying different estimates of query parameter values affects retrieval performance. The incorporation of relevance feedback to increase our knowledge about the parameters of relevant documents and the robustness of parameter estimates is modeled. Single-term and two-term independence models, as well as a complete term-dependence model, are developed. An economic model of retrieval performance may be used to study the effects of database size and to provide analytic answers to questions comparing retrieval from small and large databases, as well as questions about the number of terms in a query. Results are presented as a performance surface, a three-dimensional graph showing the effects of two independent variables on performance.
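    A hedged sketch of the kind of analytic prediction described (our simplification, not the paper's model): under random ordering of N documents of which R are relevant, the expected number of documents examined before finding the first relevant one is (N + 1) / (R + 1), and tabulating that over two variables gives a simple performance surface:

      def expected_examined(n_docs: int, n_relevant: int) -> float:
          """Expected number of documents examined to reach the first
          relevant one when documents are in random order."""
          return (n_docs + 1) / (n_relevant + 1)

      # Surface over database size and generality (fraction relevant).
      for n in (1_000, 10_000, 100_000):
          row = [expected_examined(n, round(g * n)) for g in (0.001, 0.01, 0.1)]
          print(n, [f"{v:.1f}" for v in row])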
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.95-105
  4. Willis, C.; Losee, R.M.: A random walk on an ontology : using thesaurus structure for automatic subject indexing (2013) 0.00
    0.0016961367 = product of:
      0.01526523 = sum of:
        0.01526523 = weight(_text_:of in 1016) [ClassicSimilarity], result of:
          0.01526523 = score(doc=1016,freq=26.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2491759 = fieldWeight in 1016, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=1016)
      0.11111111 = coord(1/9)
    
    Abstract
    Relationships between terms and features are an essential component of thesauri, ontologies, and a range of controlled vocabularies. In this article, we describe ways to identify important concepts in documents using the relationships in a thesaurus or other vocabulary structures. We introduce a methodology for the analysis and modeling of the indexing process based on a weighted random walk algorithm. The primary goal of this research is the analysis of the contribution of thesaurus structure to the indexing process. The resulting models are evaluated in the context of automatic subject indexing using four collections of documents pre-indexed with four different thesauri (AGROVOC [UN Food and Agriculture Organization], high-energy physics taxonomy [HEP], National Agricultural Library Thesaurus [NALT], and medical subject headings [MeSH]). We also introduce a thesaurus-centric matching algorithm intended to improve the quality of candidate concepts. In all cases, the weighted random walk improves automatic indexing performance over matching alone, with an increase in average precision (AP) of 9% for HEP, 11% for MeSH, 35% for NALT, and 37% for AGROVOC. The results of the analysis support our hypothesis that subject indexing is in part a browsing process, and that using the vocabulary and its structure in a thesaurus contributes to the indexing process. The amount that the vocabulary structure contributes was found to differ among the four thesauri, possibly due to the vocabulary used in the corresponding thesauri and the structural relationships between the terms. Each of the thesauri and the manual indexing associated with it is characterized using the methods developed here.
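    A minimal sketch of a weighted random walk over thesaurus structure (our illustration of the general technique, not the authors' algorithm or data; the concepts and weights are hypothetical): probability mass starts at a term matched in the document, spreads along weighted thesaurus relationships, and candidate concepts are ranked by the resulting mass:

      import numpy as np

      # Rows/columns are thesaurus concepts; entries are relationship weights
      # (broader/narrower/related edges could carry different weights).
      W = np.array([
          [0.0, 1.0, 0.5, 0.0],  # "maize"   -> "cereals", "crops"
          [1.0, 0.0, 1.0, 0.0],  # "cereals" -> "maize", "crops"
          [0.5, 1.0, 0.0, 1.0],  # "crops"   -> all others
          [0.0, 0.0, 1.0, 0.0],  # "farming" -> "crops"
      ])
      P = W / W.sum(axis=1, keepdims=True)       # row-normalize to probabilities

      restart = np.array([1.0, 0.0, 0.0, 0.0])   # walk restarts at the matched term
      rank = restart.copy()
      for _ in range(100):                       # power iteration to convergence
          rank = 0.15 * restart + 0.85 * rank @ P

      for term, r in sorted(zip(["maize", "cereals", "crops", "farming"], rank),
                            key=lambda x: -x[1]):
          print(f"{term:8s} {r:.3f}")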
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.7, S.1330-1344
  5. Losee, R.M.: Upper bounds for retrieval performance and their use in measuring performance and generating optimal queries : can it get any better than this? (1994) 0.00
    0.0014112709 = product of:
      0.012701439 = sum of:
        0.012701439 = weight(_text_:of in 7418) [ClassicSimilarity], result of:
          0.012701439 = score(doc=7418,freq=8.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 7418, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=7418)
      0.11111111 = coord(1/9)
    
    Abstract
    The best-case, random, and worst-case document rankings and retrieval performance may be determined using a method discussed here. Knowledge of the best-case performance allows users and system designers to determine how close to the optimum condition their search is and to select queries and matching functions that will produce the best results. Suggests a method for deriving the optimal Boolean query for a given level of recall and a method for determining the quality of a Boolean query. Measures are proposed that modify conventional text retrieval measures such as precision, E, and average search length, so that the values for these measures are 1 when retrieval is optimal, 0 when retrieval is random, and -1 in the worst case. Tests using one of these measures show that many retrievals are optimal. Consequences for retrieval research are examined.
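    A hedged sketch of the normalization described (our reading of the abstract, not the paper's exact formula): rescale a raw measure such as average search length so that the optimal value maps to 1, the random-case value to 0, and the worst-case value to -1, piecewise-linearly on either side of the random case:

      def normalized_performance(actual: float, best: float,
                                 random_case: float, worst: float) -> float:
          """Map optimal -> 1, random -> 0, worst-case -> -1 for a measure
          where lower raw values are better (e.g. average search length)."""
          if actual <= random_case:  # better than random
              return (random_case - actual) / (random_case - best)
          return -(actual - random_case) / (worst - random_case)

      # Hypothetical average-search-length values for one query.
      best, rand, worst = 2.0, 50.5, 99.0
      for asl in (2.0, 50.5, 99.0):
          print(asl, normalized_performance(asl, best, rand, worst))  # 1.0, 0.0, -1.0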
  6. Losee, R.M.; Paris, L.A.H.: Measuring search-engine quality and query difficulty : ranking with Target and Freestyle (1999) 0.00
    0.0014112709 = product of:
      0.012701439 = sum of:
        0.012701439 = weight(_text_:of in 4310) [ClassicSimilarity], result of:
          0.012701439 = score(doc=4310,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 4310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=4310)
      0.11111111 = coord(1/9)
    
    Source
    Journal of the American Society for Information Science. 50(1999) no.10, S.882-889