Search (72 results, page 1 of 4)

  • theme_ss:"Klassifikationssysteme im Online-Retrieval"
  1. Dack, D.: Australian attends conference on Dewey (1989) 0.08
    0.08227747 = product of:
      0.16455494 = sum of:
        0.036211025 = weight(_text_:data in 2509) [ClassicSimilarity], result of:
          0.036211025 = score(doc=2509,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 2509, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2509)
        0.12834391 = sum of:
          0.08393263 = weight(_text_:processing in 2509) [ClassicSimilarity], result of:
            0.08393263 = score(doc=2509,freq=4.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.4427661 = fieldWeight in 2509, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2509)
          0.044411276 = weight(_text_:22 in 2509) [ClassicSimilarity], result of:
            0.044411276 = score(doc=2509,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.2708308 = fieldWeight in 2509, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2509)
      0.5 = coord(2/4)
    
    Abstract
    Edited version of a report to the Australian Library and Information Association on the Conference on classification theory in the computer age, Albany, New York, 18-19 Nov 88, and on the meeting of the Dewey Editorial Policy Committee which preceded it. The focus of the Editorial Policy Committee meeting lay in the following areas: browsing; potential for improved subject access; system design; potential conflict between shelf location and information retrieval; and users. At the Conference on classification theory in the computer age the following papers were presented: Applications of artificial intelligence to bibliographic classification, by Irene Travis; Automation and classification, by Elaine Svenonius; Subject classification and language processing for retrieval in large data bases, by Diana Scott; Implications for information processing, by Carol Mandel; and Implications for information science education, by Richard Halsey.
    Date
    8.11.1995 11:52:22
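    The indented score breakdowns above and below are Lucene "explain" output for ClassicSimilarity (TF-IDF). As a sanity check, here is a minimal Python sketch reproducing the first leaf of result 1 from Lucene's documented ClassicSimilarity formulas; the constants are copied from the tree above, and the helper names are ours:

        import math

        # Lucene ClassicSimilarity (TF-IDF) building blocks:
        #   tf(freq)        = sqrt(freq)
        #   idf(docFreq, N) = 1 + ln(N / (docFreq + 1))
        #   queryWeight     = idf * queryNorm
        #   fieldWeight     = tf * idf * fieldNorm
        #   leaf score      = queryWeight * fieldWeight

        def tf(freq):
            return math.sqrt(freq)

        def idf(doc_freq, max_docs):
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        # Constants copied from the explanation for _text_:data in doc 2509
        query_norm = 0.046827413
        field_norm = 0.0546875

        idf_data = idf(5088, 44218)                     # -> 3.1620505
        query_weight = idf_data * query_norm            # -> 0.14807065
        field_weight = tf(2.0) * idf_data * field_norm  # -> 0.24455236
        print(query_weight * field_weight)              # -> 0.036211025

        # The leaf scores are then summed and multiplied by coord(2/4) = 0.5,
        # because two of the four query terms matched the document.

    The same arithmetic, with different freq, docFreq, and fieldNorm values, accounts for every leaf in the score trees of the remaining results.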
  2. Vizine-Goetz, D.: OCLC investigates using classification tools to organize Internet data (1998) 0.08
    0.07748537 = product of:
      0.15497074 = sum of:
        0.051210128 = weight(_text_:data in 2342) [ClassicSimilarity], result of:
          0.051210128 = score(doc=2342,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34584928 = fieldWeight in 2342, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2342)
        0.1037606 = sum of:
          0.05934933 = weight(_text_:processing in 2342) [ClassicSimilarity], result of:
            0.05934933 = score(doc=2342,freq=2.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.3130829 = fieldWeight in 2342, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2342)
          0.044411276 = weight(_text_:22 in 2342) [ClassicSimilarity], result of:
            0.044411276 = score(doc=2342,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.2708308 = fieldWeight in 2342, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2342)
      0.5 = coord(2/4)
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al.
  3. Kent, R.E.: Organizing conceptual knowledge online : metadata interoperability and faceted classification (1998) 0.07
    0.069985814 = product of:
      0.13997163 = sum of:
        0.036211025 = weight(_text_:data in 57) [ClassicSimilarity], result of:
          0.036211025 = score(doc=57,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 57, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=57)
        0.1037606 = sum of:
          0.05934933 = weight(_text_:processing in 57) [ClassicSimilarity], result of:
            0.05934933 = score(doc=57,freq=2.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.3130829 = fieldWeight in 57, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0546875 = fieldNorm(doc=57)
          0.044411276 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
            0.044411276 = score(doc=57,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.2708308 = fieldWeight in 57, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=57)
      0.5 = coord(2/4)
    
    Abstract
    Conceptual Knowledge Markup Language (CKML), an application of XML, is a new standard being promoted for the specification of online conceptual knowledge (Kent and Shrivastava, 1998). CKML follows the philosophy of Conceptual Knowledge Processing (Wille, 1982), a principled approach to knowledge representation and data analysis, which advocates the development of methodologies and techniques to support people in their rational thinking, judgement and actions. CKML was developed and is being used in the WAVE networked information discovery and retrieval system (Kent and Neuss, 1994) as a standard for the specification of conceptual knowledge.
    Date
    30.12.2001 16:22:41
  4. Aluri, R.D.; Kemp, A.; Boll, J.J.: Subject analysis in online catalogs (1991) 0.06
    0.05719418 = product of:
      0.11438836 = sum of:
        0.07242205 = weight(_text_:data in 863) [ClassicSimilarity], result of:
          0.07242205 = score(doc=863,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48910472 = fieldWeight in 863, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=863)
        0.041966315 = product of:
          0.08393263 = sum of:
            0.08393263 = weight(_text_:processing in 863) [ClassicSimilarity], result of:
              0.08393263 = score(doc=863,freq=4.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.4427661 = fieldWeight in 863, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=863)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    LCSH
    Subject cataloguing / Data processing
    Machine-readable bibliographic data
    Subject
    Subject cataloguing / Data processing
    Machine-readable bibliographic data
  5. Neelameghan, A.: S.R. Ranganathan's general theory of knowledge classification in designing, indexing and retrieving from specialised databases (1997) 0.03
    0.028236724 = product of:
      0.05647345 = sum of:
        0.031038022 = weight(_text_:data in 3) [ClassicSimilarity], result of:
          0.031038022 = score(doc=3,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 3, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 3) [ClassicSimilarity], result of:
              0.05087085 = score(doc=3,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 3, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Summarizes some experiences of applying the principles and postulates of S.R. Ranganathan's General Theory of Knowledge Classification, incorporating the freely faceted approach and analytico-synthetic methods, to the design and development of specialized databases, including indexing, user interfaces and retrieval. Enumerates some of the earlier instances of the facet method in machine-based systems, beginning with Hollerith's punched card system for the data processing of the US Census. Elaborates on Ranganathan's holistic approach to information systems and services provided by his normative principles. Notes similarities between the design of databases and faceted classification systems. Examples from working systems are given to demonstrate the usefulness of selected canons and principles of classification and the analytico-synthetic methodology to database design. The examples are mostly operational database systems developed using Unesco's Micro CDS/ISIS software.
  6. Chandler, A.; LeBlanc, J.: Exploring the potential of a virtual undergraduate library collection based on the hierarchical interface to LC Classification (2006) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 769) [ClassicSimilarity], result of:
          0.031038022 = score(doc=769,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 769, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=769)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 769) [ClassicSimilarity], result of:
              0.038066804 = score(doc=769,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 769, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=769)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Hierarchical Interface to Library of Congress Classification (HILCC) is a system developed by the Columbia University Library to leverage call number data from the MARC holdings records in Columbia's online catalog to create a structured, hierarchical menuing system that provides subject access to the library's electronic resources. In this paper, the authors describe a research initiative at the Cornell University Library to discover whether the Columbia HILCC scheme can be used as developed, or in modified form, to create a virtual undergraduate print collection outside the context of the traditional online catalog. Their results indicate that, with certain adjustments, an HILCC model can indeed be used to represent the holdings of a large research library's undergraduate collection of approximately 150,000 titles, but that such a model is not infinitely scalable and may require a new approach to browsing such a large information space.
    Date
    10. 9.2000 17:38:22
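    As a rough illustration of the call-number-to-menu mapping described in the abstract above (not Columbia's actual HILCC tables, which are far larger and maintained separately), a Python sketch with a hypothetical two-level hierarchy keyed on LC class letters:

        import re

        # Hypothetical excerpt: top-level menu heading plus sub-headings
        # per LC class letter(s); the real HILCC defines thousands of mappings.
        HIERARCHY = {
            "Q": ("Science", {"QA": "Mathematics & Computer Science",
                              "QC": "Physics"}),
            "P": ("Languages & Literatures", {"PR": "English Literature"}),
        }

        def menu_path(call_number):
            """Map an LC call number to a (top, sub) menu path."""
            m = re.match(r"[A-Za-z]+", call_number)   # leading class letters only
            letters = (m.group(0).upper()[:2]) if m else ""
            top, subs = HIERARCHY.get(letters[:1], ("Unclassified", {}))
            return top, subs.get(letters, top)

        print(menu_path("QA76.9 .D3"))  # ('Science', 'Mathematics & Computer Science')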
  7. Slavic, A.: On the nature and typology of documentary classifications and their use in a networked environment (2007) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 780) [ClassicSimilarity], result of:
          0.031038022 = score(doc=780,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=780)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 780) [ClassicSimilarity], result of:
              0.038066804 = score(doc=780,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=780)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Network-oriented standards for vocabulary publishing and exchange, and proposals for terminological services and terminology registries, will improve the sharing and use of all knowledge organization systems in the networked information environment. This means that documentary classifications may also become more applicable for use outside their original domain of application. The paper summarises some characteristics common to documentary classifications and explains some terminological, functional and implementation aspects. The original purpose behind each classification scheme determines the functions that the vocabulary is designed to facilitate. These functions influence the structure, semantics and syntax, scheme coverage and the format in which classification data are published and made available. The author suggests that attention should be paid to the differences between documentary classifications, as these may determine their suitability for a certain purpose and may impose different requirements with respect to their use online. At present many classifications are being created for knowledge organization, and it may be important to promote expertise from the bibliographic domain with respect to building and using classification systems.
    Date
    22.12.2007 17:22:31
  8. Frâncu, V.; Sabo, C.-N.: Implementation of a UDC-based multilingual thesaurus in a library catalogue : the case of BiblioPhil (2010) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 3697) [ClassicSimilarity], result of:
          0.031038022 = score(doc=3697,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 3697, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3697)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 3697) [ClassicSimilarity], result of:
              0.038066804 = score(doc=3697,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 3697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3697)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In order to enhance the use of Universal Decimal Classification (UDC) numbers in information retrieval, the authors have represented the classification with multilingual thesaurus descriptors and implemented this solution in an automated way. The authors illustrate a solution implemented in the BiblioPhil library system. The standard formats used are UNIMARC for subject authority records (i.e. the UDC-based multilingual thesaurus) and MARC XML for data transfer. The multilingual thesaurus was built according to existing standards, the constituent parts of the classification notations being used as the basis for search terms in multilingual information retrieval. The verbal equivalents, descriptors and non-descriptors, are used to expand the number of concepts and are given in Romanian, English and French. This approach saves the indexer's time and provides more user-friendly and easier access to the bibliographic information. The multilingual aspect of the thesaurus enhances information access for a greater number of online users.
    Date
    22. 7.2010 20:40:56
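    A toy illustration of the retrieval idea in the abstract above (the UNIMARC authority plumbing is omitted, and the table is a hypothetical fragment, though 811.135.1 is the actual UDC class for the Romanian language):

        # Hypothetical fragment of a UDC-based multilingual thesaurus:
        # UDC notation -> verbal equivalents in Romanian, English, French.
        THESAURUS = {
            "811.135.1": {"ro": "Limba română", "en": "Romanian language",
                          "fr": "Langue roumaine"},
            "821.135.1": {"ro": "Literatura română", "en": "Romanian literature",
                          "fr": "Littérature roumaine"},
        }

        def search_terms(udc_notation):
            """Expand a UDC notation into search terms in every language."""
            return list(THESAURUS.get(udc_notation, {}).values())

        print(search_terms("811.135.1"))
        # ['Limba română', 'Romanian language', 'Langue roumaine']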
  9. Guenther, R.S.: Automating the Library of Congress Classification Scheme : implementation of the USMARC format for classification data (1996) 0.02
    0.022174634 = product of:
      0.088698536 = sum of:
        0.088698536 = weight(_text_:data in 5578) [ClassicSimilarity], result of:
          0.088698536 = score(doc=5578,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.59902847 = fieldWeight in 5578, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5578)
      0.25 = coord(1/4)
    
    Abstract
    Potential uses for classification data in machine-readable form, and the reasons for developing a standard, the USMARC Format for Classification Data, which allows classification data to interact with other USMARC bibliographic and authority data, are discussed. The development, structure, content, and use of the standard are reviewed, with implementation decisions for the Library of Congress Classification scheme noted. The author examines the implementation of USMARC classification at LC, the conversion of the schedules, and the functionality of the software being used. Problems in the effort are explored, and enhancements desired for the online classification system are considered.
    Object
    USMARC for classification data
  10. Woods, E.W.; IFLA Section on Classification and Indexing and Section on Information Technology; Joint Working Group on a Classification Format: Requirements for a format of classification data : Final report, July 1996 (1996) 0.02
    0.021947198 = product of:
      0.08778879 = sum of:
        0.08778879 = weight(_text_:data in 3008) [ClassicSimilarity], result of:
          0.08778879 = score(doc=3008,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5928845 = fieldWeight in 3008, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=3008)
      0.25 = coord(1/4)
    
    Object
    USMARC for classification data
  11. Concise UNIMARC Classification Format : Draft 5 (20000125) (2000) 0.02
    0.020692015 = product of:
      0.08276806 = sum of:
        0.08276806 = weight(_text_:data in 4421) [ClassicSimilarity], result of:
          0.08276806 = score(doc=4421,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5589768 = fieldWeight in 4421, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.125 = fieldNorm(doc=4421)
      0.25 = coord(1/4)
    
    Object
    UNIMARC for classification data
  12. Guenther, R.S.: The Library of Congress Classification in the USMARC format (1994) 0.02
    0.018105512 = product of:
      0.07242205 = sum of:
        0.07242205 = weight(_text_:data in 8864) [ClassicSimilarity], result of:
          0.07242205 = score(doc=8864,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48910472 = fieldWeight in 8864, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8864)
      0.25 = coord(1/4)
    
    Abstract
    The paper reviews the development of the USMARC Format for Classification Data, a standard for the communication of classification data in machine-readable form. It considers the uses of online classification schedules for both technical services and reference functions, and gives an overview of the format specification, the data elements used, and the structure of the records. The paper describes an experiment conducted at the Library of Congress to test the format, as well as the development of the classification database encompassing the LCC schedules. Features of the classification system are given. The LoC will complete its conversion of the LCC in mid-1995.
    Object
    USMARC for classification data
  13. Guenther, R.S.: The USMARC Format for Classification Data : development and implementation (1992) 0.02
    0.017919812 = product of:
      0.07167925 = sum of:
        0.07167925 = weight(_text_:data in 2996) [ClassicSimilarity], result of:
          0.07167925 = score(doc=2996,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48408815 = fieldWeight in 2996, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=2996)
      0.25 = coord(1/4)
    
    Abstract
    This paper discusses the newly developed USMARC Format for Classification Data. It reviews its potential uses within an online system and its development as one of the USMARC standards for representing bibliographic and related information in machine-readable form. It provides a summary of the fields in the format, and considers the prospects for its implementation.
    Object
    USMARC for classification data
  14. Guenther, R.S.: The development and implementation of the USMARC format for classification data (1992) 0.02
    0.017919812 = product of:
      0.07167925 = sum of:
        0.07167925 = weight(_text_:data in 8865) [ClassicSimilarity], result of:
          0.07167925 = score(doc=8865,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48408815 = fieldWeight in 8865, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=8865)
      0.25 = coord(1/4)
    
    Abstract
    This paper discusses the newly developed USMARC Format for Classification Data. It reviews its potential uses within an online system and its development as one of the USMARC standards. It provides a summary of the fields in the format and considers the prospects for its implementation. The paper describes an experiment currently being conducted at the Library of Congress to create USMARC classification records and use a classification database in classifying materials in the social sciences.
    Object
    USMARC for classification data
  15. Quick Guide to Publishing a Classification Scheme on the Semantic Web (2008) 0.02
    0.015679834 = product of:
      0.06271934 = sum of:
        0.06271934 = weight(_text_:data in 3061) [ClassicSimilarity], result of:
          0.06271934 = score(doc=3061,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 3061, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3061)
      0.25 = coord(1/4)
    
    Abstract
    This document describes in brief how to express the content and structure of a classification scheme, and metadata about a classification scheme, in RDF using the SKOS vocabulary. RDF allows data to be linked to and/or merged with other RDF data by Semantic Web applications. The Semantic Web, which is based on the Resource Description Framework (RDF), provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. Publishing classification schemes in SKOS will unify the great many existing classification efforts within the framework of the Semantic Web.
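    Along the lines the guide describes, a minimal Python sketch using rdflib (the URI base and the two concepts are invented for illustration):

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS

        EX = Namespace("http://example.org/classification/")  # hypothetical base URI

        g = Graph()
        g.bind("skos", SKOS)

        scheme = EX["scheme"]
        g.add((scheme, RDF.type, SKOS.ConceptScheme))
        g.add((scheme, SKOS.prefLabel, Literal("Example classification", lang="en")))

        # Two classes and one broader/narrower pair: content *and* structure.
        top, narrow = EX["600"], EX["610"]
        for concept, notation, label in [(top, "600", "Technology"),
                                         (narrow, "610", "Medicine")]:
            g.add((concept, RDF.type, SKOS.Concept))
            g.add((concept, SKOS.notation, Literal(notation)))
            g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
            g.add((concept, SKOS.inScheme, scheme))
        g.add((narrow, SKOS.broader, top))

        print(g.serialize(format="turtle"))  # RDF ready to publish as Linked Data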
  16. Vizine-Goetz, D.: OCLC investigates using classification tools to organize Internet data (1997) 0.02
    0.015519011 = product of:
      0.062076043 = sum of:
        0.062076043 = weight(_text_:data in 3410) [ClassicSimilarity], result of:
          0.062076043 = score(doc=3410,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 3410, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=3410)
      0.25 = coord(1/4)
    
  17. Slavic, A.; Cordeiro, M.I.: Core requirements for automation of analytico-synthetic classifications (2004) 0.02
    0.015519011 = product of:
      0.062076043 = sum of:
        0.062076043 = weight(_text_:data in 2651) [ClassicSimilarity], result of:
          0.062076043 = score(doc=2651,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 2651, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2651)
      0.25 = coord(1/4)
    
    Abstract
    The paper analyses the importance of data presentation and modelling and their role in improving the management, use and exchange of analytico-synthetic classifications in automated systems. Inefficiencies in this respect hinder the automation of classification systems that offer the possibility of building compound index/search terms. The lack of machine-readable data expressing the semantics and structure of a classification vocabulary has negative effects on information management and retrieval, thus restricting the potential of both automated systems and classifications themselves. The authors analysed the data representation structure of three general analytico-synthetic classification systems (BC2 - Bliss Bibliographic Classification; BSO - Broad System of Ordering; UDC - Universal Decimal Classification) and put forward some core requirements for classification data representation.
  18. Järvelin, K.; Niemi, T.: Deductive information retrieval based on classifications (1993) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 2229) [ClassicSimilarity], result of:
          0.053759433 = score(doc=2229,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 2229, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2229)
      0.25 = coord(1/4)
    
    Abstract
    Modern fact databases contain abundant data classified through several classifications. Typically, users must consult these classifications in separate manuals or files, thus making their effective use difficult. Contemporary database systems provide little support for the deductive use of classifications. In this study we show how deductive data management techniques can be applied to the utilization of data value classifications. Computation of transitive class relationships is of primary importance here. We define a representation of classifications which supports transitive computation and present an operation-oriented deductive query language tailored for classification-based deductive information retrieval. The operations of this language are on the same abstraction level as relational algebra operations and can be integrated with these to form a powerful and flexible query language for deductive information retrieval. We define the integration of these operations and demonstrate the usefulness of the language in terms of several sample queries.
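    The core operation named in the abstract above, computing transitive class relationships, is essentially a reachability query over the classification hierarchy; a minimal Python sketch with hypothetical UDC-style notations:

        from collections import deque

        # Hypothetical toy hierarchy: class notation -> direct broader classes.
        BROADER = {
            "616.1": ["616"],
            "616":   ["61"],
            "61":    ["6"],
        }

        def transitive_broader(cls):
            """All classes reachable from cls via the broader relation."""
            seen, queue = set(), deque(BROADER.get(cls, []))
            while queue:
                parent = queue.popleft()
                if parent not in seen:
                    seen.add(parent)
                    queue.extend(BROADER.get(parent, []))
            return seen

        print(transitive_broader("616.1"))  # {'616', '61', '6'}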
  19. Hanke, M.: Bibliothekarische Klassifikationssysteme im semantischen Web : zu Chancen und Problemen von Linked-data-Repräsentationen ausgewählter Klassifikationssysteme (2014) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 2463) [ClassicSimilarity], result of:
          0.053759433 = score(doc=2463,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 2463, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2463)
      0.25 = coord(1/4)
    
    Abstract
    The maintenance and application of classification systems for information resources are traditionally a core competency of libraries. These systems have often grown historically, and in the past the various systems were typically published as printed rulebooks or in proprietary databases. Semantic Web technologies make it possible to represent classification systems in a standardized and machine-readable way, and to make them available for reuse as Linked (Open) Data. Using selected examples of classification systems that have already been published as Linked (Open) Data, this article discusses central semantic and technical questions and presents possible areas of application and opportunities. For example, the strong structuring of data required for machine readability on the Semantic Web can contribute to a better understanding of the classification systems and may provide positive impulses for their further development. Representations of classification systems prepared for the Semantic Web can be used, among other things, for catalogue enrichment or for the application-oriented creation of concordances between different classification and concept systems.
  20. McGarry, D.: Displays of bibliographic records in call number order : functions of the displays and data elements needed (1992) 0.01
    0.01293251 = product of:
      0.05173004 = sum of:
        0.05173004 = weight(_text_:data in 2384) [ClassicSimilarity], result of:
          0.05173004 = score(doc=2384,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 2384, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2384)
      0.25 = coord(1/4)
    
    Abstract
    Online displays of bibliographic records in call number order can serve various functions. A literature search showed no papers or books discussing this topic directly. Various displays from online catalogues available via the Internet were examined, as were displays sent to the author by colleagues. A number of the displays were uninformative to the extent that the identification of works associated with call numbers was difficult or impossible without follow-up searching of the individual bibliographic records. Other displays provided information where further searching of the database would not be required for most purposes. Displays noted ranged from call numbers alone, with no bibliographic information, to records including main entry, title, statement of responsibility, place, publisher, and date. Suggestions of useful data elements to be included in displays of bibliographic records in call number order are made for the following functions: shelflisting, cataloguing, catalogue maintenance, reference, public searches, acquisition and collection development, and inventory control. It is recommended that the following data elements be present in call number displays: the entire call number as a sequencing element; the main entry; the entire title proper; and the date. Concern is expressed that the call number filing arrangement should be that followed in traditional shelflists, and a suggestion is made that possible consensus on the placement of the data elements within a display be considered in the future.
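    A sketch of a call-number-ordered display carrying exactly the four recommended data elements (the call numbers are illustrative, and real shelflist order needs an LC-aware comparator, which the plain string sort below only approximates):

        from dataclasses import dataclass

        @dataclass
        class Holding:
            call_number: str   # sequencing element
            main_entry: str
            title_proper: str
            date: str

        def shelflist_display(records):
            """Print one display line per record, in call number order."""
            for r in sorted(records, key=lambda r: r.call_number):
                print(f"{r.call_number:<14} {r.main_entry}. {r.title_proper} ({r.date})")

        shelflist_display([
            Holding("Z696.U4 G84", "Guenther, R.S.",
                    "The USMARC format for classification data", "1992"),
            Holding("Z695.A48", "Aluri, R.D.",
                    "Subject analysis in online catalogs", "1991"),
        ])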

Languages

  • e 61
  • d 8
  • ja 1
  • nl 1

Types

  • a 59
  • el 8
  • m 4
  • s 3
  • x 1