Search (30 results, page 1 of 2)

  • × theme_ss:"Verbale Doksprachen im Online-Retrieval"
  1. Aluri, R.D.; Kemp, A.; Boll, J.J.: Subject analysis in online catalogs (1991) 0.06
    0.05719418 = product of:
      0.11438836 = sum of:
        0.07242205 = weight(_text_:data in 863) [ClassicSimilarity], result of:
          0.07242205 = score(doc=863,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48910472 = fieldWeight in 863, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=863)
        0.041966315 = product of:
          0.08393263 = sum of:
            0.08393263 = weight(_text_:processing in 863) [ClassicSimilarity], result of:
              0.08393263 = score(doc=863,freq=4.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.4427661 = fieldWeight in 863, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=863)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    LCSH
    Subject cataloguing / Data processing
    Machine-readable bibliographic data
    Subject
    Subject cataloguing / Data processing
    Machine-readable bibliographic data
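    Note on the score breakdowns: the indented figures under each result are Lucene "explain" trees from the classic TF-IDF similarity ([ClassicSimilarity]). As a worked check, the following minimal Python sketch recomputes the "_text_:data" contribution to result 1 (doc 863) from the numbers shown above, assuming ClassicSimilarity's formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the queryNorm is read from the tree rather than derived.

      from math import log, sqrt, isclose

      # Figures read off the explain tree for result 1 (doc 863), term "data"
      freq, doc_freq, max_docs = 8.0, 5088, 44218
      query_norm = 0.046827413          # taken from the tree, not derived here
      field_norm = 0.0546875            # stored length norm for the field

      idf = 1 + log(max_docs / (doc_freq + 1))    # ~ 3.1620505
      tf = sqrt(freq)                             # ~ 2.828427
      query_weight = idf * query_norm             # ~ 0.14807065
      field_weight = tf * idf * field_norm        # ~ 0.48910472
      weight = query_weight * field_weight        # ~ 0.07242205
      assert isclose(weight, 0.07242205, rel_tol=1e-4)

      # The overall 0.05719418 then follows as
      # (0.07242205 + 0.041966315) * 0.5, i.e. the clause sum times coord(2/4).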
  2. Milstead, J.L.: Thesauri in a full-text world (1998) 0.05
    0.049989868 = product of:
      0.099979736 = sum of:
        0.02586502 = weight(_text_:data in 2337) [ClassicSimilarity], result of:
          0.02586502 = score(doc=2337,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 2337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2337)
        0.07411472 = sum of:
          0.042392377 = weight(_text_:processing in 2337) [ClassicSimilarity], result of:
            0.042392377 = score(doc=2337,freq=2.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.22363065 = fieldWeight in 2337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2337)
          0.03172234 = weight(_text_:22 in 2337) [ClassicSimilarity], result of:
            0.03172234 = score(doc=2337,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.19345059 = fieldWeight in 2337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2337)
      0.5 = coord(2/4)
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  3. Lambert, N.: Of thesauri and computers : reflections on the need for thesauri (1995) 0.03
    0.03338095 = product of:
      0.0667619 = sum of:
        0.04138403 = weight(_text_:data in 3734) [ClassicSimilarity], result of:
          0.04138403 = score(doc=3734,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2794884 = fieldWeight in 3734, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=3734)
        0.025377871 = product of:
          0.050755743 = sum of:
            0.050755743 = weight(_text_:22 in 3734) [ClassicSimilarity], result of:
              0.050755743 = score(doc=3734,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.30952093 = fieldWeight in 3734, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3734)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Most indexed databases now include their thesauri and/or coding in their bibliographic files, searchable at the databases' online connect rates. Assesses the searchability of these on the different hosts. Thesauri and classifications are also available as diskette or CD-ROM products. Describes a number of these, highlighting the diskette thesaurus from IFI/Plenum Data for its flexible databases, the CLAIMS Uniterm and Comprehensive indexes to US chemical patents
    Source
    Searcher. 3(1995) no.8, S.18-22
  4. Drabenstott, K.M.; Vizine-Goetz, D.: Using subject headings for online retrieval : theory, practice and potential (1994) 0.03
    0.028236724 = product of:
      0.05647345 = sum of:
        0.031038022 = weight(_text_:data in 386) [ClassicSimilarity], result of:
          0.031038022 = score(doc=386,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=386)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 386) [ClassicSimilarity], result of:
              0.05087085 = score(doc=386,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 386, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=386)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Using Subject Headings for Online Retrieval is an indispensable tool for online system designers who are developing new systems or refining existing ones. The book describes subject analysis and subject searching in online catalogs, including the limitations of retrieval, and demonstrates how such limitations can be overcome through system design and programming. The book describes the Library of Congress Subject Headings system and system characteristics, shows how information is stored in machine-readable files, and offers examples of and recommendations for successful methods. Tables are included to support these recommendations, and diagrams, graphs, and bar charts are used to provide results of data analyses.
    Footnote
    Rez. in: Information processing and management 31(1995) no.3, S.450-451 (R.R. Larson); Library resources and technical services 41(1997) no.1, S.60-67 (B.H. Weinberg)
  5. Bates, M.J.: How to use controlled vocabularies more effectively in online searching (1989) 0.02
    0.02024258 = product of:
      0.08097032 = sum of:
        0.08097032 = weight(_text_:data in 2883) [ClassicSimilarity], result of:
          0.08097032 = score(doc=2883,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5468357 = fieldWeight in 2883, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2883)
      0.25 = coord(1/4)
    
    Abstract
    Optimal retrieval in on-line searching can be achieved through combined use of both natural language and controlled vocabularies. However, there is a large variety of types of controlled vocabulary in data bases, and often more than one in a single data base. Optimal use of these vocabularies requires understanding what types of languages are involved, and taking advantage of the particular mix of vocabularies in a given data base. Gives examples of 4 major types of indexing and classification used in data bases and puts these 4 in the context of 3 other approaches to subject access. Discusses how to evaluate a new data base for various forms of subject access.
  6. Bates, M.J.: How to use controlled vocabularies more effectively in online searching (1989) 0.02
    0.02024258 = product of:
      0.08097032 = sum of:
        0.08097032 = weight(_text_:data in 207) [ClassicSimilarity], result of:
          0.08097032 = score(doc=207,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5468357 = fieldWeight in 207, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=207)
      0.25 = coord(1/4)
    
    Abstract
    Optimal retrieval in on-line searching can be achieved through combined use of both natural language and controlled vocabularies. However, there is a large variety of types of controlled vocabulary in data bases, and often more than one in a single data base. Optimal use of these vocabularies requires understanding what types of languages are involved, and taking advantage of the particular mix of vocabularies in a given data base. Gives examples of 4 major types of indexing and classification used in data bases and puts these 4 in the context of 3 other approaches to subject access. Discusses how to evaluate a new data base for various forms of subject access.
  7. Schabas, A.H.: A comparative evaluation of the retrieval effectiveness of titles, Library of Congress Subject Headings and PRECIS strings for computer searching of UK MARC data (1979) 0.02
    0.015519011 = product of:
      0.062076043 = sum of:
        0.062076043 = weight(_text_:data in 5277) [ClassicSimilarity], result of:
          0.062076043 = score(doc=5277,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 5277, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=5277)
      0.25 = coord(1/4)
    
  8. Miller, U.; Teitelbaum, R.: Pre-coordination and post-coordination : past and future (2002) 0.01
    0.012849508 = product of:
      0.05139803 = sum of:
        0.05139803 = product of:
          0.10279606 = sum of:
            0.10279606 = weight(_text_:processing in 1395) [ClassicSimilarity], result of:
              0.10279606 = score(doc=1395,freq=6.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.54227555 = fieldWeight in 1395, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1395)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article deals with the meaningful processing of information in relation to two systems of information processing: pre-coordination and post-coordination. The different approaches are discussed, with emphasis on the need for a controlled vocabulary in information retrieval. Assigned indexing, which employs a controlled vocabulary, is described in detail. Types of indexing language can be divided into two broad groups - those using pre-coordinated terms and those depending on post-coordination. They represent two different basic approaches to processing and information retrieval. The historical development of these two approaches is described, as well as the two tools that apply to these approaches: thesauri and subject headings.
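    To make the contrast concrete, here is a minimal Python sketch of the two approaches (the headings and documents are invented for illustration, not taken from the article): a pre-coordinated index matches a composite heading string as a whole, while a post-coordinated index stores single terms that the searcher combines with Boolean AND at search time.

      # Hypothetical toy records indexed both ways (illustrative only)
      docs = {
          1: {"pre": {"Libraries -- Automation -- History"},
              "post": {"libraries", "automation", "history"}},
          2: {"pre": {"Libraries -- History"},
              "post": {"libraries", "history"}},
      }

      def precoordinated_search(heading):
          # the whole composite string must have been assigned at indexing time
          return [d for d, f in docs.items() if heading in f["pre"]]

      def postcoordinated_search(*terms):
          # single terms are coordinated by the searcher with Boolean AND
          return [d for d, f in docs.items() if set(terms) <= f["post"]]

      print(precoordinated_search("Libraries -- Automation -- History"))  # [1]
      print(postcoordinated_search("libraries", "history"))               # [1, 2]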
  9. Blair, D.C.: Language and representation in information retrieval (1991) 0.01
    0.011567188 = product of:
      0.046268754 = sum of:
        0.046268754 = weight(_text_:data in 1545) [ClassicSimilarity], result of:
          0.046268754 = score(doc=1545,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.31247756 = fieldWeight in 1545, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=1545)
      0.25 = coord(1/4)
    
    Abstract
    Information or Document Retrieval is the subject of this book. It is not an introductory book, although it is self-contained in the sense that it is not necessary to have a background in the theory or practice of Information Retrieval in order to understand its arguments. The book presents, as clearly as possible, one particular perspective on Information Retrieval, and attempts to say that certain aspects of the theory or practice of the management of documents are more important than others. The majority of Information Retrieval research has been aimed at the more experimentally tractable small-scale systems, and although much of that work has added greatly to our understanding of Information Retrieval it is becoming increasingly apparent that retrieval systems with large data bases of documents are a fundamentally different genre of systems than small-scale systems. If this is so, which is the thesis of this book, then we must now study large information retrieval systems with the same rigor and intensity that we once studied small-scale systems. Hegel observed that the quantitative growth of any system caused qualitative changes to take place in its structure and processes.
    Classification
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
    RVK
    ST 271 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme / Einzelne Datenbanksprachen und Datenbanksysteme
    ST 270 Informatik / Monographien / Software und -entwicklung / Datenbanken, Datenbanksysteme, Data base management, Informationssysteme
  10. Schabas, A.H.: Postcoordinate retrieval : a comparison of two retrieval languages (1982) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 1202) [ClassicSimilarity], result of:
          0.043894395 = score(doc=1202,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 1202, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1202)
      0.25 = coord(1/4)
    
    Abstract
    This article reports on a comparison of the postcoordinate retrieval effectiveness of two indexing languages: LCSH and PRECIS. The effect of augmenting each with title words was also studied. The database for the study was over 15,000 UK MARC records. Users returned 5,326 relevance judgements for citations retrieved for 61 SDI profiles, representing a wide variety of subjects. Results are reported in terms of precision and relative recall. Pure/applied sciences data and social science data were analyzed separately. Cochran's significance tests for ratios were used to interpret the findings. Recall emerged as the more important measure discriminating the behavior of the two languages. Addition of title words was found to improve recall of both indexing languages significantly. A direct relationship was observed between recall and exhaustivity. For the social sciences searches, recalls from PRECIS alone and from PRECIS with title words were significantly higher than those from LCSH alone and from LCSH with title words, respectively. Corresponding comparisons for the pure/applied sciences searches revealed no significant differences.
  11. O'Neill, E.T.; Chan, L.M.: FAST - a new approach to controlled subject access (2008) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 2181) [ClassicSimilarity], result of:
          0.043894395 = score(doc=2181,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 2181, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2181)
      0.25 = coord(1/4)
    
    Abstract
    Recent trends, driven to a large extent by the rapid proliferation of digital resources, are forcing changes in bibliographic control to make it easier to use, understand, and apply subject data. Subject headings are no exception. The enormous volume and rapid growth of digital libraries and repositories and the emergence of numerous metadata schemes have spurred a reexamination of the way subject data are to be provided for such resources efficiently and effectively. To address this need, OCLC in cooperation with the Library of Congress, has taken a new approach, called FAST (Faceted Application of Subject Terminology). FAST headings are based on the existing vocabulary in Library of Congress Subject Headings (LCSH), but are applied with a simpler syntax than required by Library of Congress application policies. Adapting the LCSH vocabulary in a simplified faceted syntax retains the rich vocabulary of LCSH while making it easier to understand, control, apply, and use.
  12. Seeman, D.; Chan, T.; Dykes, K.: Implementation and maintenance of FAST as linked data in a digital collections platform at University of Victoria Libraries (2023) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 1165) [ClassicSimilarity], result of:
          0.043894395 = score(doc=1165,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 1165, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1165)
      0.25 = coord(1/4)
    
    Abstract
    University of Victoria Libraries has implemented faceted vocabularies, particularly FAST, in its digital collections platform (Vault). The process involved migrating a variety of standardized (pre-coordinated Library of Congress subject headings) and non-standardized metadata to conform to a URI-centric metadata application profile. The authors argue that faceted vocabularies and FAST have helped to create robust and intuitive user navigation in the platform and allowed for an efficient and straightforward metadata creation process. Maintaining FAST as linked data within Vault has required putting in place some technical processes to keep URIs and textual labels up to date, and solutions (FAST Updater) have been developed locally.
  13. Svenonius, E.: Design of controlled vocabularies in the context of emerging technologies (1988) 0.01
    0.0103460075 = product of:
      0.04138403 = sum of:
        0.04138403 = weight(_text_:data in 762) [ClassicSimilarity], result of:
          0.04138403 = score(doc=762,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2794884 = fieldWeight in 762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=762)
      0.25 = coord(1/4)
    
    Abstract
    Delineates the changing role of vocabulary control devices such as classification, subject headings, and thesauri. Identifies the basic issues in the design and development of these devices and their role in the changing information technology. The paper identifies the differentiations needed in the new roles of these devices in data base technology.
  14. Bodoff, D.; Kambil, A.: Partial coordination : II. A preliminary evaluation and failure analysis (1998) 0.01
    0.0077595054 = product of:
      0.031038022 = sum of:
        0.031038022 = weight(_text_:data in 2323) [ClassicSimilarity], result of:
          0.031038022 = score(doc=2323,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 2323, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2323)
      0.25 = coord(1/4)
    
    Abstract
    Partial coordination is a new method for cataloging documents for subject access. It is especially designed to enhance the precision of document searches in online environments. This article reports a preliminary evaluation of partial coordination that shows promising results compared with full-text retrieval. We also report the difficulties in empirically evaluating the effectiveness of automatic full-text retrieval in contrast to mixed methods such as partial coordination, which combine human cataloging with computerized retrieval. Based on our study, we propose that research in this area will substantially benefit from a common framework for failure analysis and a common data set. This will allow information retrieval researchers adapting 'library style' cataloging to large electronic document collections, as well as those developing automated or mixed methods, to directly compare their proposals for indexing and retrieval. This article concludes by suggesting guidelines for constructing such a testbed.
  15. Poynder, R.: Web research engines? (1996) 0.01
    0.0077595054 = product of:
      0.031038022 = sum of:
        0.031038022 = weight(_text_:data in 5698) [ClassicSimilarity], result of:
          0.031038022 = score(doc=5698,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 5698, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5698)
      0.25 = coord(1/4)
    
    Abstract
    Describes the shortcomings of search engines for the WWW, comparing their current capabilities to those of first-generation CD-ROM products. Some allow phrase searching and most are improving their Boolean searching. Few allow truncation, wild cards or nested logic. They are stateless, losing previous search criteria. Unlike the indexing and classification systems for today's CD-ROMs, those for Web pages are random, unstructured and of variable quality. Considers that, at best, Web search engines can only offer free text searching. Discusses whether automatic data classification systems such as Infoseek Ultra can overcome the haphazard nature of the Web with neural network technology, and whether Boolean search techniques may be redundant when replaced by technology such as the Euroferret search engine. However, artificial intelligence is rarely successful on huge, varied databases. Relevance ranking and automatic query expansion still use the same simple inverted indexes. Most Web search engines do nothing more than word counting. Further complications arise with foreign languages.
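    The "simple inverted indexes" and "word counting" mentioned in the abstract can be illustrated with a minimal, generic Python sketch (not a description of any particular engine): every term maps to the documents containing it, and ranking is nothing more than summing query-term frequencies per document.

      from collections import defaultdict, Counter

      def build_index(docs):
          # inverted index: term -> {doc_id: term frequency in that document}
          index = defaultdict(Counter)
          for doc_id, text in docs.items():
              for term in text.lower().split():
                  index[term][doc_id] += 1
          return index

      def rank(index, query):
          # "word counting": score = sum of query-term frequencies per document
          scores = Counter()
          for term in query.lower().split():
              scores.update(index.get(term, Counter()))
          return scores.most_common()

      docs = {"d1": "thesauri help online searching",
              "d2": "free text searching of web pages"}
      print(rank(build_index(docs), "online searching"))  # [('d1', 2), ('d2', 1)]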
  16. Danskin, A.; Seeman, D.; Bouchard, M.; Kammerer, K.; Kilpatrick, L.; Mumbower, K.: FAST the inside track : where we are, where do we want to be, and how do we get there? (2023) 0.01
    0.0077595054 = product of:
      0.031038022 = sum of:
        0.031038022 = weight(_text_:data in 1150) [ClassicSimilarity], result of:
          0.031038022 = score(doc=1150,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 1150, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1150)
      0.25 = coord(1/4)
    
    Abstract
    This is an overview of the development of FAST (Faceted Application of Subject Terminology) from its inception in the late 1990s, through its development and implementation to the work being undertaken by OCLC and the FAST Policy and Outreach Committee (FPOC) to develop and promote FAST. FPOC members explain how FAST is used by institutions in Canada, the United Kingdom, and the United States. They cover their experience of implementing FAST and the benefits they have derived. The final section considers the value of FAST as a faceted vocabulary and the potential for future development and linked data.
  17. Drabenstott, K.M.; Weller, M.S.: ¬The exact-display approach for online catalog subject searching (1996) 0.01
    0.007418666 = product of:
      0.029674664 = sum of:
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 6930) [ClassicSimilarity], result of:
              0.05934933 = score(doc=6930,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 6930, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6930)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 32(1996) no.6, S.719-745
  18. Chen, H.; Yim, T.; Fye, D.: Automatic thesaurus generation for an electronic community system (1995) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 2918) [ClassicSimilarity], result of:
          0.02586502 = score(doc=2918,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
      0.25 = coord(1/4)
    
    Abstract
    Reports an algorithmic approach to the automatic generation of thesauri for electronic community systems. The techniques used included term filtering, automatic indexing, and cluster analysis. The testbed for the research was the Worm Community System, which contains a comprehensive library of specialized community data and literature, currently in use by molecular biologists who study the nematode worm. The resulting worm thesaurus included 2709 researchers' names, 798 gene names, 20 experimental methods, and 4302 subject descriptors. On average, each term had about 90 weighted neighbouring terms indicating relevant concepts. The thesaurus was developed as an online search aid. Tests the worm thesaurus in an experiment with 6 worm researchers of varying degrees of expertise and background. The experiment showed that the thesaurus was an excellent 'memory jogging' device and that it supported learning and serendipitous browsing. Despite some occurrences of obvious noise, the system was useful in suggesting relevant concepts for the researchers' queries and it helped improve concept recall. With a simple browsing interface, an automatic thesaurus can become a useful tool for online search and can assist researchers in exploring and traversing a dynamic and complex electronic community system.
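    The article's own algorithm is not reproduced here; as a rough illustration of the general idea (filtered index terms plus co-occurrence weighting to find "neighbouring terms"), the following hedged Python sketch ranks, for each term, the other terms it most often co-occurs with across documents.

      from collections import Counter
      from itertools import combinations

      def cooccurrence_neighbours(doc_terms, top_n=3):
          # doc_terms: one set of (already filtered) index terms per document
          co = Counter()
          for terms in doc_terms:
              for a, b in combinations(sorted(terms), 2):
                  co[(a, b)] += 1
                  co[(b, a)] += 1
          neighbours = {}
          for (a, b), weight in co.items():
              neighbours.setdefault(a, Counter())[b] = weight
          return {t: c.most_common(top_n) for t, c in neighbours.items()}

      docs = [{"nematode", "gene", "mutation"},
              {"nematode", "gene", "expression"},
              {"gene", "expression"}]
      print(cooccurrence_neighbours(docs)["gene"])
      # [('nematode', 2), ('expression', 2), ('mutation', 1)]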
  19. Losee, R.M.: Improving collection browsing : small world networking and Gray code ordering (2017) 0.01
    0.006466255 = product of:
      0.02586502 = sum of:
        0.02586502 = weight(_text_:data in 5148) [ClassicSimilarity], result of:
          0.02586502 = score(doc=5148,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 5148, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5148)
      0.25 = coord(1/4)
    
    Abstract
    Documents in digital and paper libraries may be arranged, based on their topics, in order to facilitate browsing. It may seem intuitively obvious that ordering documents by their subject should improve browsing performance; the results presented in this article suggest that ordering library materials by their Gray code values and through using links consistent with the small world model of document relationships is consistent with improving browsing performance. Below, library circulation data, including ordering with Library of Congress Classification numbers and Library of Congress Subject Headings, are used to provide information useful in generating user-centered document arrangements, as well as user-independent arrangements. Documents may be linearly arranged so they can be placed in a line by topic, such as on a library shelf, or in a list on a computer display. Crossover links, jumps between a document and another document to which it is not adjacent, can be used in library databases to allow additional paths that one might take when browsing. The improvement that is obtained with different combinations of document orderings and different crossovers is examined and applications suggested.
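    As a small illustration of the ordering idea (not the article's full method), the binary-reflected Gray code of an integer n is n XOR (n >> 1), and sorting items by the Gray-sequence rank of a bit vector of their subject features keeps neighbouring items differing in as few features as possible. The feature bits in the Python sketch below are hypothetical.

      def gray(n):
          # binary-reflected Gray code: successive values differ in exactly one bit
          return n ^ (n >> 1)

      def gray_rank(pattern):
          # inverse Gray code: position of a bit pattern in the Gray sequence
          rank = 0
          while pattern:
              rank ^= pattern
              pattern >>= 1
          return rank

      # hypothetical documents described by three subject-feature bits
      docs = {"doc_a": 0b101, "doc_b": 0b100, "doc_c": 0b111, "doc_d": 0b110}
      order = sorted(docs, key=lambda d: gray_rank(docs[d]))
      print(order)  # ['doc_d', 'doc_c', 'doc_a', 'doc_b'] - neighbours differ in one bit
      assert all(gray(gray_rank(v)) == v for v in docs.values())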
  20. Stone, A.T.: Up-ending Cutter's pyramid : the case for making subject references to broader terms (1996) 0.01
    0.0055514094 = product of:
      0.022205638 = sum of:
        0.022205638 = product of:
          0.044411276 = sum of:
            0.044411276 = weight(_text_:22 in 7238) [ClassicSimilarity], result of:
              0.044411276 = score(doc=7238,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.2708308 = fieldWeight in 7238, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7238)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 4.1997 20:43:23

Languages

  • e 27
  • d 3

Types

  • a 25
  • m 4
  • d 1

Classifications