Search (12 results, page 1 of 1)

  • author_ss:"Losee, R.M."
  1. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.09
    0.088832036 = product of:
      0.13324805 = sum of:
        0.045130465 = weight(_text_:management in 3368) [ClassicSimilarity], result of:
          0.045130465 = score(doc=3368,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.2606825 = fieldWeight in 3368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3368)
        0.08811758 = sum of:
          0.039404877 = weight(_text_:system in 3368) [ClassicSimilarity], result of:
            0.039404877 = score(doc=3368,freq=2.0), product of:
              0.16177002 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.051362853 = queryNorm
              0.2435858 = fieldWeight in 3368, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
          0.0487127 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
            0.0487127 = score(doc=3368,freq=2.0), product of:
              0.17986396 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051362853 = queryNorm
              0.2708308 = fieldWeight in 3368, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3368)
      0.6666667 = coord(2/3)
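    The explain tree above can be reproduced by hand. A minimal Python sketch of the ClassicSimilarity arithmetic exactly as displayed: tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the document score is the coordination factor times the sum of the matching clause scores.

      import math

      def clause_score(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm                    # e.g. 3.3706124 * 0.051362853 = 0.17312427
          field_weight = math.sqrt(freq) * idf * field_norm  # e.g. 1.4142135 * 3.3706124 * 0.0546875 = 0.2606825
          return query_weight * field_weight

      # Values copied from the explain output for doc 3368.
      query_norm, field_norm = 0.051362853, 0.0546875
      management = clause_score(2.0, 3.3706124, query_norm, field_norm)  # 0.045130465
      system     = clause_score(2.0, 3.1495528, query_norm, field_norm)  # 0.039404877
      term_22    = clause_score(2.0, 3.5018296, query_norm, field_norm)  # 0.0487127

      coord = 2 / 3  # coord(2/3): 2 of 3 query clauses matched this document
      print(coord * (management + system + term_22))  # ~0.088832036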
    
    Abstract
    The performance of an information retrieval or text and media filtering system may be determined through analytic methods as well as by traditional simulation or experimental methods. These analytic methods can provide precise statements about expected performance. They can thus determine which of 2 similarly performing systems is superior. For both a single query term and a multiple query term retrieval model, a model for comparing the performance of different probabilistic retrieval methods is developed. This method may be used in computing the average search length for a query, given only knowledge of database parameter values. Describes predictive models for inverse document frequency, binary independence, and relevance feedback based retrieval and filtering. Simulations illustrate how the single term model performs, and sample performance predictions are given for single term and multiple term problems.
    Date
    22. 2.1996 13:14:10
    Source
    Information processing and management. 31(1995) no.4, S.555-572
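    The average search length used in result 1 above is, informally, the expected position of a relevant document in the ranked output; Losee's analytic model predicts this quantity from database parameters alone. A minimal empirical sketch for comparison (the helper and the toy ranking are illustrative assumptions):

      def average_search_length(ranking):
          """Mean 1-based rank of the relevant documents in a ranked list
          of boolean relevance judgments."""
          positions = [i for i, relevant in enumerate(ranking, start=1) if relevant]
          return sum(positions) / len(positions)

      # Toy ranking: True marks a relevant document at that rank.
      print(average_search_length([True, False, True, False, False]))  # (1 + 3) / 2 = 2.0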
  2. Losee, R.M.: Upper bounds for retrieval performance and their use measuring performance and generating optimal queries : can it get any better than this? (1994) 0.04
    0.03704738 = product of:
      0.055571064 = sum of:
        0.038683258 = weight(_text_:management in 7418) [ClassicSimilarity], result of:
          0.038683258 = score(doc=7418,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 7418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=7418)
        0.016887804 = product of:
          0.03377561 = sum of:
            0.03377561 = weight(_text_:system in 7418) [ClassicSimilarity], result of:
              0.03377561 = score(doc=7418,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.20878783 = fieldWeight in 7418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7418)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The best-case, random, and worst-case document rankings and retrieval performance may be determined using a method discussed here. Knowledge of the best-case performance allows users and system designers to determine how close to the optimum condition their search is and select queries and matching functions that will produce the best results. Suggests a method for deriving the optimal Boolean query for a given level of recall and a method for determining the quality of a Boolean query. Measures are proposed that modify conventional text retrieval measures such as precision, E, and average search length, so that the values for these measures are 1 when retrieval is optimal, 0 when retrieval is random, and -1 when worst-case. Tests using one of these measures show that many retrievals are optimal. Consequences for retrieval research are examined.
    Source
    Information processing and management. 30(1994) no.2, S.193-203
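    The rescaled measures described in result 2 map best-case performance to 1, random to 0, and worst-case to -1. A sketch of one piecewise-linear rescaling with that property (an illustration, not necessarily the paper's exact definition):

      def normalize(value, best, random, worst):
          """Rescale a raw effectiveness value so best -> 1, random -> 0,
          worst -> -1; works whether higher or lower raw values are better."""
          if (best - random) * (value - random) >= 0:  # value on the 'good' side of random
              return (value - random) / (best - random)
          return -(value - random) / (worst - random)

      # Average search length example (lower is better): a raw ASL of 4.0
      # against best 2.0, random 5.0, worst 10.0 scores about 0.33.
      print(normalize(4.0, best=2.0, random=5.0, worst=10.0))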
  3. Losee, R.M.: Learning syntactic rules and tags with genetic algorithms for information retrieval and filtering : an empirical basis for grammatical rules (1996) 0.04
    0.03704738 = product of:
      0.055571064 = sum of:
        0.038683258 = weight(_text_:management in 4068) [ClassicSimilarity], result of:
          0.038683258 = score(doc=4068,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 4068, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=4068)
        0.016887804 = product of:
          0.03377561 = sum of:
            0.03377561 = weight(_text_:system in 4068) [ClassicSimilarity], result of:
              0.03377561 = score(doc=4068,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.20878783 = fieldWeight in 4068, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4068)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The grammars of natural languages may be learned by using genetic algorithms that reproduce and mutate grammatical rules and parts of speech tags, improving the quality of later generations of grammatical components. Syntactic rules are randomly generated and then evolve; those rules resulting in improved parsing and occasionally improved filtering performance are allowed to further propagate. The LUST system learns the characteristics of the language or sublanguage used in document abstracts by learning from the document rankings obtained from the parsed abstracts. Unlike the application of traditional linguistic rules to retrieval and filtering applications, LUST develops grammatical structures and tags without the prior imposition of some common grammatical assumptions (e.g. part of speech assumptions), producing grammars that are empirically based and are optimized for this particular application.
    Source
    Information processing and management. 32(1996) no.2, S.185-197
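    The evolve-and-select loop of result 3 can be sketched schematically. Everything below (the tag set, the rule encoding, the fitness stub) is a placeholder assumption for illustration, not the actual LUST system, whose fitness comes from the document rankings obtained from parsed abstracts:

      import random

      TAGS = ["N", "V", "ADJ", "DET", "PREP"]  # hypothetical part-of-speech tags

      def random_rule():
          """A syntactic rule, encoded here as a short tag sequence."""
          return tuple(random.choices(TAGS, k=random.randint(2, 4)))

      def fitness(rule):
          """Stub: LUST would instead score the document rankings produced
          when abstracts are parsed with this rule in place."""
          return random.random()

      def mutate(rule):
          rule = list(rule)
          rule[random.randrange(len(rule))] = random.choice(TAGS)
          return tuple(rule)

      population = [random_rule() for _ in range(20)]
      for generation in range(50):
          survivors = sorted(population, key=fitness, reverse=True)[:10]  # selection
          population = survivors + [mutate(r) for r in survivors]         # reproduction with mutation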
  4. Losee, R.M.: Term dependence : truncating the Bahadur Lazarsfeld expansion (1994) 0.03
    0.02578884 = product of:
      0.077366516 = sum of:
        0.077366516 = weight(_text_:management in 7390) [ClassicSimilarity], result of:
          0.077366516 = score(doc=7390,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.44688427 = fieldWeight in 7390, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.09375 = fieldNorm(doc=7390)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 30(1994) no.2, S.293-303
  5. Losee, R.M.: Browsing mixed structured and unstructured data (2006) 0.02
    0.018235464 = product of:
      0.05470639 = sum of:
        0.05470639 = weight(_text_:management in 173) [ClassicSimilarity], result of:
          0.05470639 = score(doc=173,freq=4.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.31599492 = fieldWeight in 173, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=173)
      0.33333334 = coord(1/3)
    
    Abstract
    Both structured and unstructured data, as well as structured data representing several different types of tuples, may be integrated into a single list for browsing or retrieval. Data may be arranged in the Gray code order of the features and metadata, producing optimal ordering for browsing. We provide several metrics for evaluating the performance of systems supporting browsing, given some constraints. Metadata and indexing terms are used for sorting keys and attributes for structured data, as well as for semi-structured or unstructured documents, images, media, etc. Economic and information theoretic models are suggested that enable the ordering to adapt to user preferences. Different relational structures and unstructured data may be integrated into a single, optimal ordering for browsing or for displaying tables in digital libraries, database management systems, or information retrieval systems. Adaptive displays of data are discussed.
    Source
    Information processing and management. 42(2006) no.2, S.440-452
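    The Gray code ordering of result 5 can be illustrated directly: records are sorted by the position of their feature bit-pattern in binary-reflected Gray code order, so adjacent records in the browsing list differ in as few features as possible. The records and the feature encoding below are hypothetical:

      def gray_rank(bits: int) -> int:
          """Position of a bit-pattern in binary-reflected Gray code order
          (the inverse Gray code transform)."""
          rank = 0
          while bits:
              rank ^= bits
              bits >>= 1
          return rank

      # Hypothetical records: (title, feature bits), one bit per metadata feature.
      records = [("doc-a", 0b101), ("doc-b", 0b100), ("doc-c", 0b111), ("doc-d", 0b110)]
      for title, bits in sorted(records, key=lambda r: gray_rank(r[1])):
          print(title, format(bits, "03b"))  # prints 110, 111, 101, 100: one bit flips per step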
  6. Losee, R.M.: Text windows and phrases differing by discipline, location in document, and syntactic structure (1996) 0.02
    0.015043489 = product of:
      0.045130465 = sum of:
        0.045130465 = weight(_text_:management in 6962) [ClassicSimilarity], result of:
          0.045130465 = score(doc=6962,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.2606825 = fieldWeight in 6962, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6962)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 32(1996) no.6, S.747-767
  7. Haas, S.W.; Losee, R.M.: Looking in text windows : their size and composition (1994) 0.01
    0.01289442 = product of:
      0.038683258 = sum of:
        0.038683258 = weight(_text_:management in 8525) [ClassicSimilarity], result of:
          0.038683258 = score(doc=8525,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 8525, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=8525)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 30(1994) no.5, S.619-629
  8. Losee, R.M.: Browsing document collections : automatically organizing digital libraries and hypermedia using the Gray code (1997) 0.01
    0.01289442 = product of:
      0.038683258 = sum of:
        0.038683258 = weight(_text_:management in 146) [ClassicSimilarity], result of:
          0.038683258 = score(doc=146,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 146, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=146)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 33(1997) no.2, S.175-192
  9. Losee, R.M.: Decisions in thesaurus construction and use (2007) 0.01
    0.01289442 = product of:
      0.038683258 = sum of:
        0.038683258 = weight(_text_:management in 924) [ClassicSimilarity], result of:
          0.038683258 = score(doc=924,freq=2.0), product of:
            0.17312427 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.051362853 = queryNorm
            0.22344214 = fieldWeight in 924, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=924)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 43(2007) no.4, S.958-968
  10. Losee, R.M.: A Gray code based ordering for documents on shelves : classification for browsing and retrieval (1992) 0.01
    0.011375209 = product of:
      0.034125626 = sum of:
        0.034125626 = product of:
          0.06825125 = sum of:
            0.06825125 = weight(_text_:system in 2335) [ClassicSimilarity], result of:
              0.06825125 = score(doc=2335,freq=6.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.42190298 = fieldWeight in 2335, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2335)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    A document classifier places documents together in a linear arrangement for browsing or high-speed access by human or computerised information retrieval systems. Requirements for document classification and browsing systems are developed from similarity measures, distance measures, and the notion of subject aboutness. A requirement that documents be arranged in decreasing order of similarity as the distance from a given document increases can often not be met. Based on these requirements, information-theoretic considerations, and the Gray code, a classification system is proposed that can classify documents without human intervention. A measure of classifier performance is developed and used to evaluate experimental results comparing the distance between subject headings assigned to documents given classifications from the proposed system and the Library of Congress Classification (LCC) system.
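    The arrangement requirement quoted in result 10, that similarity should fall off as distance from a given document grows, can be checked mechanically; a toy sketch with assumed similarity values:

      def respects_distance(similarities):
          """True if similarity to a focal document never increases as shelf
          distance grows (similarities listed at distances 1, 2, 3, ...)."""
          return all(a >= b for a, b in zip(similarities, similarities[1:]))

      print(respects_distance([0.9, 0.7, 0.7, 0.4]))  # True
      print(respects_distance([0.9, 0.4, 0.7]))       # False: similarity rises again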
  11. Losee, R.M.: The relative shelf location of circulated books : a study of classification, users, and browsing (1993) 0.01
    0.011375209 = product of:
      0.034125626 = sum of:
        0.034125626 = product of:
          0.06825125 = sum of:
            0.06825125 = weight(_text_:system in 4485) [ClassicSimilarity], result of:
              0.06825125 = score(doc=4485,freq=6.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.42190298 = fieldWeight in 4485, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4485)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Patrons often browse through books organized by a library classification system, looking for books to use and possibly circulate. This research is an examination of the clustering of similar books provided by a classification system and ways in which the books that patrons circulate are clustered. Measures of classification system performance are suggested and used to evaluate two test collections. Regression formulas are derived describing the relationships among the number of areas in which books were found (the number of stops a patron makes when browsing), the distances across a cluster, and the average number of books a patron circulates. Patrons were usually found to make more stops than there were books found at their average stop. Consequences for full-text document systems and online catalogs are suggested.
  12. Losee, R.M.: How to study classification systems and their appropriateness for individual institutions (1995) 0.01
    0.0075056907 = product of:
      0.022517072 = sum of:
        0.022517072 = product of:
          0.045034144 = sum of:
            0.045034144 = weight(_text_:system in 5545) [ClassicSimilarity], result of:
              0.045034144 = score(doc=5545,freq=2.0), product of:
                0.16177002 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.051362853 = queryNorm
                0.27838376 = fieldWeight in 5545, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5545)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Answers to questions concerning individual library decisions to adopt classification systems are important in understanding the effectiveness of libraries but are difficult to provide. Measures of classification system performance are discussed, as are different methodologies that may be used to seek answers, ranging from formal or philosophical models to quantitative experimental techniques and qualitative methods.