Search (3 results, page 1 of 1)

  • author_ss:"Boyce, B.R."
  1. Boyce, B.R.; McLain, J.P.: Entry point depth and online search using a controlled vocabulary (1989) 0.00
    0.0026473717 = product of:
      0.0052947435 = sum of:
        0.0052947435 = product of:
          0.010589487 = sum of:
            0.010589487 = weight(_text_:a in 2287) [ClassicSimilarity], result of:
              0.010589487 = score(doc=2287,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19940455 = fieldWeight in 2287, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2287)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The depth of indexing, the number of terms assigned on average to each document in a retrieval system as entry points, has a significant effect on the standard retrieval performance measures in modern commercial retrieval systems, just as it did in previous experimental work. Tests of the effect of basic index searching, as opposed to controlled vocabulary searching, in these real systems are quite different from traditional comparisons of free text searching with controlled vocabulary searching. In modern commercial systems the controlled vocabulary serves as a precision device, since the structure of the default for unqualified search terms in these systems requires that it do so.
    Type
    a
  2. Meadow, C.T.; Boyce, B.R.; Kraft, D.H.: Text information retrieval systems (2000) 0.00
    0.0025370158 = product of:
      0.0050740317 = sum of:
        0.0050740317 = product of:
          0.010148063 = sum of:
            0.010148063 = weight(_text_:a in 162) [ClassicSimilarity], result of:
              0.010148063 = score(doc=162,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19109234 = fieldWeight in 162, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=162)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Information retrieval is a communication process that links the information user to a librarian, museum curator, fingerprint identification specialist, or whoever is in charge of a collection of what we are calling documents. The communication will normally involve the processing of text, strings of words known to both parties in the process, that can be used to describe a document's content and other attributes and link it with a need expressed in similar terms. This book's purpose is to teach people who will be searching or designing text retrieval systems how the systems work. For designers, it covers problems they will face and reviews currently available solutions to provide a basis for more advanced study. For the searcher its purpose is to describe why such systems work as they do. The book is primarily about computer-based retrieval systems, but the principles apply to nonmechanized ones as well. The book covers the nature of information, how it is organized for use by a computer, how search functions are carried out, and some of the theory underlying these functions. As well, it discusses the interaction between user and system and how retrieved items, users, and complete systems are evaluated. A limited knowledge of mathematics and of computing is assumed. The first edition of this work appeared just before the World Wide Web came on the scene, but was nonetheless a student favorite because of its clarity. The new edition is updated and expanded, covering not only the Web but also new developments in how IR systems are or could be designed.
  3. Boyce, B.R.; Douglas, J.S.; Rabalais, J.; Shiflett, L.; Wallace, D.P.: Measurement of subject scatter in the Superintendent of Documents Classification (1990) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 4907) [ClassicSimilarity], result of:
              0.008202582 = score(doc=4907,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 4907, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4907)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The hypothesis that the dispersion of documents occurring when a collection is reclassified from the Superintendent of Documents Classification (SuDoc) to the Library of Congress Classification (LCC) is insignificant is not supported. This suggests that the SuDoc scheme is inappropriate for topical questions. The rank order statistics show no relationship between the document orderings produced by the two schemes, and in no case did an examined SuDoc class contain fewer than 48% of the LCC main classes. LoC MARC records containing SuDoc numbers for the Dept. of the Army, the Dept. of Agriculture, the Dept. of State, the Library of Congress, and a broad general sample were sorted by both schemes and the resulting orders compared.
    Type
    a
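
The indented blocks under each result above are Lucene "explain" trees produced by ClassicSimilarity, the classic TF-IDF model: as the tree labels indicate, tf is the square root of the raw term frequency, queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each coord(1/2) factor halves the result. A minimal Python sketch reproducing the score of the first result from the values in its tree (the function name and argument layout are illustrative, not part of Lucene's API):

import math

def classic_similarity_score(freq, idf, query_norm, field_norm, coord_factors):
    """Recompute a Lucene ClassicSimilarity score from an explain tree."""
    tf = math.sqrt(freq)                  # 3.1622777 for freq=10.0
    query_weight = idf * query_norm       # 0.053105544 = queryWeight
    field_weight = tf * idf * field_norm  # 0.19940455  = fieldWeight
    score = query_weight * field_weight   # 0.010589487 = weight(_text_:a)
    for coord in coord_factors:           # two coord(1/2) levels in the tree
        score *= coord
    return score

# Values taken from the explain tree of result 1 (doc 2287):
score = classic_similarity_score(freq=10.0, idf=1.153047,
                                 query_norm=0.046056706,
                                 field_norm=0.0546875,
                                 coord_factors=[0.5, 0.5])
print(score)  # ~0.0026473717, the total shown for result 1

The same arithmetic with freq=18.0 and fieldNorm=0.0390625 yields the 0.0025370158 shown for result 2, and with freq=6.0 the 0.0020506454 shown for result 3.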