Search (17 results, page 1 of 1)

  • × author_ss:"Losee, R.M."
  1. Spink, A.; Losee, R.M.: Feedback in information retrieval (1996) 0.05
    0.051779844 = product of:
      0.10355969 = sum of:
        0.030948812 = weight(_text_:science in 7441) [ClassicSimilarity], result of:
          0.030948812 = score(doc=7441,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.23282544 = fieldWeight in 7441, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0625 = fieldNorm(doc=7441)
        0.07261088 = weight(_text_:research in 7441) [ClassicSimilarity], result of:
          0.07261088 = score(doc=7441,freq=8.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.504341 = fieldWeight in 7441, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.0625 = fieldNorm(doc=7441)
      0.5 = coord(2/4)
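    The breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. A minimal sketch that reproduces the figures for this entry, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1, queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm) and the coord factor shown, i.e. 2 of 4 query clauses matched:
```python
import math

MAX_DOCS = 44218
QUERY_NORM = 0.050463587

def idf(doc_freq):
    # ClassicSimilarity idf: ln(maxDocs / (docFreq + 1)) + 1
    return math.log(MAX_DOCS / (doc_freq + 1)) + 1

def term_score(freq, doc_freq, field_norm):
    tf = math.sqrt(freq)                            # tf(freq) = sqrt(freq)
    query_weight = idf(doc_freq) * QUERY_NORM       # queryWeight
    field_weight = tf * idf(doc_freq) * field_norm  # fieldWeight
    return query_weight * field_weight

science = term_score(freq=2.0, doc_freq=8627, field_norm=0.0625)   # ~0.0309, weight of "science"
research = term_score(freq=8.0, doc_freq=6931, field_norm=0.0625)  # ~0.0726, weight of "research"
coord = 2 / 4  # 2 of the 4 query clauses matched this document
print((science + research) * coord)  # ~0.0518, the 0.051779844 shown above
```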
    
    Abstract
    State-of-the-art review of the mechanisms of feedback in information retrieval (IR) in terms of feedback concepts and models in cybernetics and the social sciences. Critically evaluates feedback research based on the traditional IR models, compares the different approaches to automatic relevance feedback techniques, and examines feedback research within the framework of interactive IR models. Calls for an extension of the concept of feedback beyond relevance feedback to interactive feedback. Cites specific examples of feedback models used within IR research and presents 6 challenges for future research.
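    Automatic relevance feedback, one of the approaches the review compares, is classically implemented as Rocchio-style query modification. The sketch below is a generic illustration of that technique, not a model taken from the review; the alpha/beta/gamma weights and the toy term-weight vectors are invented for the example:
```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward the centroid of documents judged relevant
    and away from the centroid of documents judged non-relevant."""
    q = alpha * np.asarray(query, dtype=float)
    if relevant:
        q += beta * np.mean(relevant, axis=0)
    if nonrelevant:
        q -= gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0.0, None)  # keep term weights non-negative

# Toy term-weight vectors over a 4-term vocabulary
query = [1.0, 0.0, 0.5, 0.0]
relevant = [[0.9, 0.1, 0.7, 0.0], [0.8, 0.0, 0.6, 0.2]]
nonrelevant = [[0.0, 0.9, 0.1, 0.8]]
print(rocchio(query, relevant, nonrelevant))
```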
    Source
    Annual review of information science and technology. 31(1996), S.33-78
  2. Losee, R.M.: When information retrieval measures agree about the relative quality of document rankings (2000) 0.03
    0.03289094 = product of:
      0.06578188 = sum of:
        0.027080212 = weight(_text_:science in 4860) [ClassicSimilarity], result of:
          0.027080212 = score(doc=4860,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.20372227 = fieldWeight in 4860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4860)
        0.03870166 = product of:
          0.07740332 = sum of:
            0.07740332 = weight(_text_:network in 4860) [ClassicSimilarity], result of:
              0.07740332 = score(doc=4860,freq=2.0), product of:
                0.22473325 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.050463587 = queryNorm
                0.3444231 = fieldWeight in 4860, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4860)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
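    This entry adds one level of nesting: the "network" clause sits inside an inner sum that is scaled by its own coord(1/2) before the outer coord(2/4) is applied. A quick check of that arithmetic, using the leaf weights shown above:
```python
science = 0.027080212   # weight(_text_:science in 4860)
network = 0.077403320   # weight(_text_:network in 4860)

inner = network * (1 / 2)            # nested sum scaled by coord(1/2)
total = (science + inner) * (2 / 4)  # top-level sum scaled by coord(2/4)
print(round(total, 8))               # 0.03289094
```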
    
    Abstract
    The variety of performance measures available for information retrieval systems, search engines, and network filtering agents can be confusing to both practitioners and scholars. Most discussions about these measures address their theoretical foundations and the characteristics of a measure that make it desirable for a particular application. In this work, we consider how measures of performance at a point in a search may be formally compared. Criteria are developed that allow one to determine the percent of time or conditions under which 2 different performance measures suggest that one document ordering is superior to another ordering, or when the 2 measures disagree about the relative value of document orderings. As an example, graphs provide illustrations of the relationships between precision and F
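    The kind of comparison the abstract describes can be made concrete: for two orderings of the same documents, compute two measures at each cut-off and count how often they agree about which ordering is better. The orderings, relevance judgements, and the choice of precision and balanced F below are illustrative assumptions, not data from the paper:
```python
def precision_at(ranking, k):
    return sum(ranking[:k]) / k

def f_at(ranking, k, total_relevant):
    p = precision_at(ranking, k)
    r = sum(ranking[:k]) / total_relevant
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# 1 = relevant, 0 = not relevant: two hypothetical orderings of the same 6 documents
order_a = [1, 0, 1, 1, 0, 0]
order_b = [1, 1, 0, 0, 1, 0]
TOTAL_RELEVANT = 3

agree = 0
for k in range(1, len(order_a) + 1):
    dp = precision_at(order_a, k) - precision_at(order_b, k)
    df = f_at(order_a, k, TOTAL_RELEVANT) - f_at(order_b, k, TOTAL_RELEVANT)
    # the measures agree when both prefer the same ordering (or both see a tie)
    agree += (dp > 0) == (df > 0) and (dp < 0) == (df < 0)
print(f"agreement at {agree} of {len(order_a)} cut-offs")
```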
    Source
    Journal of the American Society for Information Science. 51(2000) no.9, S.834-840
  3. Losee, R.M.: The science of information : measurement and applications (1990) 0.02
    0.023211608 = product of:
      0.09284643 = sum of:
        0.09284643 = weight(_text_:science in 813) [ClassicSimilarity], result of:
          0.09284643 = score(doc=813,freq=8.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.6984763 = fieldWeight in 813, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.09375 = fieldNorm(doc=813)
      0.25 = coord(1/4)
    
    COMPASS
    Information science
    Series
    Library and information science
    Subject
    Information science
  4. Willis, C.; Losee, R.M.: A random walk on an ontology : using thesaurus structure for automatic subject indexing (2013) 0.02
    0.016813563 = product of:
      0.033627126 = sum of:
        0.015474406 = weight(_text_:science in 1016) [ClassicSimilarity], result of:
          0.015474406 = score(doc=1016,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.11641272 = fieldWeight in 1016, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=1016)
        0.01815272 = weight(_text_:research in 1016) [ClassicSimilarity], result of:
          0.01815272 = score(doc=1016,freq=2.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.12608525 = fieldWeight in 1016, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.03125 = fieldNorm(doc=1016)
      0.5 = coord(2/4)
    
    Abstract
    Relationships between terms and features are an essential component of thesauri, ontologies, and a range of controlled vocabularies. In this article, we describe ways to identify important concepts in documents using the relationships in a thesaurus or other vocabulary structures. We introduce a methodology for the analysis and modeling of the indexing process based on a weighted random walk algorithm. The primary goal of this research is the analysis of the contribution of thesaurus structure to the indexing process. The resulting models are evaluated in the context of automatic subject indexing using four collections of documents pre-indexed with 4 different thesauri (AGROVOC [UN Food and Agriculture Organization], high-energy physics taxonomy [HEP], National Agricultural Library Thesaurus [NALT], and medical subject headings [MeSH]). We also introduce a thesaurus-centric matching algorithm intended to improve the quality of candidate concepts. In all cases, the weighted random walk improves automatic indexing performance over matching alone with an increase in average precision (AP) of 9% for HEP, 11% for MeSH, 35% for NALT, and 37% for AGROVOC. The results of the analysis support our hypothesis that subject indexing is in part a browsing process, and that using the vocabulary and its structure in a thesaurus contributes to the indexing process. The amount that the vocabulary structure contributes was found to differ among the 4 thesauri, possibly due to the vocabulary used in the corresponding thesauri and the structural relationships between the terms. Each of the thesauri and the manual indexing associated with it is characterized using the methods developed here.
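    A weighted random walk over a term graph can be sketched roughly as follows; the tiny thesaurus fragment, the edge weights, and the restart rule are invented for illustration and are not the authors' actual data or parameters:
```python
import random

# Hypothetical thesaurus fragment: term -> [(related term, edge weight), ...]
graph = {
    "indexing": [("subject indexing", 2.0), ("thesaurus", 1.0)],
    "thesaurus": [("controlled vocabulary", 1.5), ("indexing", 1.0)],
    "subject indexing": [("indexing", 1.0)],
    "controlled vocabulary": [("thesaurus", 1.0)],
}

def weighted_random_walk(start, steps, seed=0):
    """Count visits during a walk that moves to a neighbour with probability
    proportional to edge weight; visit counts act as concept importance."""
    rng = random.Random(seed)
    visits = {}
    node = start
    for _ in range(steps):
        visits[node] = visits.get(node, 0) + 1
        neighbours = graph.get(node)
        if not neighbours:
            node = start  # restart when a term has no outgoing links
            continue
        terms, weights = zip(*neighbours)
        node = rng.choices(terms, weights=weights, k=1)[0]
    return sorted(visits.items(), key=lambda kv: -kv[1])

print(weighted_random_walk("indexing", steps=1000))
```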
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.7, S.1330-1344
  5. Losee, R.M.; Paris, L.A.H.: Measuring search-engine quality and query difficulty : ranking with Target and Freestyle (1999) 0.01
    0.011605804 = product of:
      0.046423215 = sum of:
        0.046423215 = weight(_text_:science in 4310) [ClassicSimilarity], result of:
          0.046423215 = score(doc=4310,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.34923816 = fieldWeight in 4310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.09375 = fieldNorm(doc=4310)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 50(1999) no.10, S.882-889
  6. Losee, R.M.: Seven fundamental questions for the science of library classification (1993) 0.01
    0.010942058 = product of:
      0.04376823 = sum of:
        0.04376823 = weight(_text_:science in 4508) [ClassicSimilarity], result of:
          0.04376823 = score(doc=4508,freq=4.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.3292649 = fieldWeight in 4508, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0625 = fieldNorm(doc=4508)
      0.25 = coord(1/4)
    
    Abstract
    For classification to advance to the point where optimal systems may be developed for manual or automated use, it will be necessary for a science of document or library classification to be developed. Seven questions are posed which the author feels must be answered before such optimal systems can be developed. Suggestions are made as to the forms that answers to these questions might take
  7. Losee, R.M.; Haas, S.W.: Sublanguage terms : dictionaries, usage, and automatic classification (1995) 0.01
    0.010942058 = product of:
      0.04376823 = sum of:
        0.04376823 = weight(_text_:science in 2650) [ClassicSimilarity], result of:
          0.04376823 = score(doc=2650,freq=4.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.3292649 = fieldWeight in 2650, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0625 = fieldNorm(doc=2650)
      0.25 = coord(1/4)
    
    Abstract
    The use of terms from natural and social science titles and abstracts is studied from the perspective of sublanguages and their specialized dictionaries. Explores different notions of sublanguage distinctiveness. Objective methods for separating hard and soft sciences are suggested based on measures of sublanguage use, dictionary characteristics, and sublanguage distinctiveness. Abstracts were automatically classified with a high degree of accuracy by using a formula that considers the degree of uniqueness of terms in each sublanguage. This may prove useful for text filtering in information retrieval systems.
    Source
    Journal of the American Society for Information Science. 46(1995) no.7, S.519-529
  8. Losee, R.M.: The relative shelf location of circulated books : a study of classification, users, and browsing (1993) 0.01
    0.007941814 = product of:
      0.031767257 = sum of:
        0.031767257 = weight(_text_:research in 4485) [ClassicSimilarity], result of:
          0.031767257 = score(doc=4485,freq=2.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.22064918 = fieldWeight in 4485, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4485)
      0.25 = coord(1/4)
    
    Abstract
    Patrons often browse through books organized by a library classification system, looking for books to use and possibly circulate. This research is an examination of the clustering of similar books provided by a classification system and ways in which the books that patrons circulate are clustered. Measures of classification system performance are suggested and used to evaluate two test collections. Regression formulas are derived describing the relationships among the number of areas in which books were found (the number of stops a patron makes when browsing), the distances across a cluster, and the average number of books a patron circulates. Patrons were found usually to make more stops than there were books found at their average stop. Consequences for full-text document systems and online catalogs are suggested
  9. Losee, R.M.: A discipline independent definition of information (1997) 0.01
    0.007737203 = product of:
      0.030948812 = sum of:
        0.030948812 = weight(_text_:science in 380) [ClassicSimilarity], result of:
          0.030948812 = score(doc=380,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.23282544 = fieldWeight in 380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0625 = fieldNorm(doc=380)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 48(1997) no.3, S.254-269
  10. Losee, R.M.: Upper bounds for retrieval performance and their use measuring performance and generating optimal queries : can it get any better than this? (1994) 0.01
    0.0068072695 = product of:
      0.027229078 = sum of:
        0.027229078 = weight(_text_:research in 7418) [ClassicSimilarity], result of:
          0.027229078 = score(doc=7418,freq=2.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.18912788 = fieldWeight in 7418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.046875 = fieldNorm(doc=7418)
      0.25 = coord(1/4)
    
    Abstract
    The best-case, random, and worst-case document rankings and retrieval performance may be determined using a method discussed here. Knowledge of the best-case performance allows users and system designers to determine how close to the optimum condition their search is and to select queries and matching functions that will produce the best results. Suggests a method for deriving the optimal Boolean query for a given level of recall and a method for determining the quality of a Boolean query. Measures are proposed that modify conventional text retrieval measures such as precision, E, and average search length, so that the values for these measures are 1 when retrieval is optimal, 0 when retrieval is random, and -1 when worst-case. Tests using one of these measures show that many retrievals are optimal. Consequences for retrieval research are examined.
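    The rescaling described here can be written as a piecewise linear map that sends the best-case value of a measure to 1, the random-case value to 0, and the worst-case value to -1; the numeric values below are invented for illustration, not taken from the paper:
```python
def rescale(value, best, random_case, worst):
    """Map a raw measure so that best -> 1, random -> 0, worst -> -1,
    interpolating linearly on each side of the random-case value."""
    d = value - random_case
    if d == 0:
        return 0.0
    toward_best = (d > 0) == (best - random_case > 0)
    denom = (best - random_case) if toward_best else (random_case - worst)
    return d / denom

# Precision-like measure (higher is better): best 1.0, random 0.3, worst 0.0
print(rescale(0.65, best=1.0, random_case=0.3, worst=0.0))  # 0.5
# Average search length (lower is better): best 2.0, random 5.0, worst 9.0
print(rescale(4.0, best=2.0, random_case=5.0, worst=9.0))   # ~0.33
```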
  11. Losee, R.M.: A Gray code based ordering for documents on shelves : classification for browsing and retrieval (1992) 0.01
    0.006770053 = product of:
      0.027080212 = sum of:
        0.027080212 = weight(_text_:science in 2335) [ClassicSimilarity], result of:
          0.027080212 = score(doc=2335,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.20372227 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2335)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 43(1992) no.4, S.312-322
  12. Losee, R.M.: Comparing Boolean and probabilistic information retrieval systems across queries and disciplines (1997) 0.01
    0.006770053 = product of:
      0.027080212 = sum of:
        0.027080212 = weight(_text_:science in 7709) [ClassicSimilarity], result of:
          0.027080212 = score(doc=7709,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.20372227 = fieldWeight in 7709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7709)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 48(1997) no.2, S.143-156
  13. Losee, R.M.; Church Jr., L.: Are two document clusters better than one? : the cluster performance question for information retrieval (2005) 0.01
    0.006770053 = product of:
      0.027080212 = sum of:
        0.027080212 = weight(_text_:science in 3270) [ClassicSimilarity], result of:
          0.027080212 = score(doc=3270,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.20372227 = fieldWeight in 3270, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3270)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.1, S.106-108
  14. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.01
    0.0059824795 = product of:
      0.023929918 = sum of:
        0.023929918 = product of:
          0.047859836 = sum of:
            0.047859836 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
              0.047859836 = score(doc=3368,freq=2.0), product of:
                0.17671488 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050463587 = queryNorm
                0.2708308 = fieldWeight in 3368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3368)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.1996 13:14:10
  15. Losee, R.M.: Evaluating retrieval performance given database and query characteristics : analytic determination of performance surfaces (1996) 0.01
    0.005802902 = product of:
      0.023211608 = sum of:
        0.023211608 = weight(_text_:science in 4162) [ClassicSimilarity], result of:
          0.023211608 = score(doc=4162,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.17461908 = fieldWeight in 4162, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=4162)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.95-105
  16. Losee, R.M.: Term dependence : a basis for Luhn and Zipf models (2001) 0.01
    0.005802902 = product of:
      0.023211608 = sum of:
        0.023211608 = weight(_text_:science in 6976) [ClassicSimilarity], result of:
          0.023211608 = score(doc=6976,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.17461908 = fieldWeight in 6976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=6976)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 52(2001) no.12, S.1019-1025
  17. Losee, R.M.: The effect of assigning a metadata or indexing term on document ordering (2013) 0.01
    0.005802902 = product of:
      0.023211608 = sum of:
        0.023211608 = weight(_text_:science in 1100) [ClassicSimilarity], result of:
          0.023211608 = score(doc=1100,freq=2.0), product of:
            0.1329271 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.050463587 = queryNorm
            0.17461908 = fieldWeight in 1100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1100)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2191-2200