Search (139 results, page 1 of 7)

  • theme_ss:"Datenformate"
  1. UNIMARC manual : IFLA UBCIM Programme (1987) 0.04
    0.0407002 = product of:
      0.1628008 = sum of:
        0.024853004 = product of:
          0.04970601 = sum of:
            0.04970601 = weight(_text_:system in 62) [ClassicSimilarity], result of:
              0.04970601 = score(doc=62,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.49211764 = fieldWeight in 62, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.078125 = fieldNorm(doc=62)
          0.5 = coord(1/2)
        0.1379478 = product of:
          0.2758956 = sum of:
            0.2758956 = weight(_text_:manuals in 62) [ClassicSimilarity], result of:
              0.2758956 = score(doc=62,freq=4.0), product of:
                0.23796216 = queryWeight, product of:
                  7.4202213 = idf(docFreq=71, maxDocs=44218)
                  0.032069415 = queryNorm
                1.1594095 = fieldWeight in 62, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.4202213 = idf(docFreq=71, maxDocs=44218)
                  0.078125 = fieldNorm(doc=62)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
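    (A worked reconstruction of this ClassicSimilarity score breakdown follows the result list.)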
    
    LCSH
    UNIMARC System
    PRECIS
    Documents / Cataloguing / Machine readable files / International exchange / Formats: UNIMARC / Manuals
  2. Carvalho, J.R. de; Cordeiro, M.I.; Lopes, A.; Vieira, M.: Meta-information about MARC : an XML framework for validation, explanation and help systems (2004) 0.02
    0.020872014 = product of:
      0.083488055 = sum of:
        0.06828068 = product of:
          0.13656136 = sum of:
            0.13656136 = weight(_text_:manuals in 2848) [ClassicSimilarity], result of:
              0.13656136 = score(doc=2848,freq=2.0), product of:
                0.23796216 = queryWeight, product of:
                  7.4202213 = idf(docFreq=71, maxDocs=44218)
                  0.032069415 = queryNorm
                0.57387847 = fieldWeight in 2848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.4202213 = idf(docFreq=71, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2848)
          0.5 = coord(1/2)
        0.01520737 = product of:
          0.03041474 = sum of:
            0.03041474 = weight(_text_:22 in 2848) [ClassicSimilarity], result of:
              0.03041474 = score(doc=2848,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.2708308 = fieldWeight in 2848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2848)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This article proposes a schema for meta-information about MARC that can express at a fairly comprehensive level the syntactic and semantic aspects of MARC formats in XML, including not only rules but also all texts and examples that are conveyed by MARC documentation. It can be thought of as an XML version of the MARC or UNIMARC manuals, for both machine and human usage. The article explains how such a schema can be the central piece of a more complete framework, to be used in conjunction with "slim" record formats, providing a rich environment for the automated processing of bibliographic data.
    Source
    Library hi tech. 22(2004) no.2, S.131-137
  3. Coyle, K.: Future considerations : the functional library systems record (2004) 0.02
    0.02014326 = product of:
      0.08057304 = sum of:
        0.06319319 = sum of:
          0.028117962 = weight(_text_:system in 562) [ClassicSimilarity], result of:
            0.028117962 = score(doc=562,freq=2.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.27838376 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0625 = fieldNorm(doc=562)
          0.03507523 = weight(_text_:29 in 562) [ClassicSimilarity], result of:
            0.03507523 = score(doc=562,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.31092256 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0625 = fieldNorm(doc=562)
        0.017379852 = product of:
          0.034759704 = sum of:
            0.034759704 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.034759704 = score(doc=562,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.30952093 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The paper performs a thought experiment on the concept of a record based on the Functional Requirements for Bibliographic Records and library system functions, and concludes that if we want to develop a functional bibliographic record we need to do it within the context of a flexible, functional library systems record structure. The article suggests a new way to look at the library systems record that would allow libraries to move forward not only in terms of technology but also in terms of serving library users.
    Date
    9.12.2005 19:21:29
    Source
    Library hi tech. 22(2004) no.2, S.166-174
  4. ISO 25964 Thesauri and interoperability with other vocabularies (2008) 0.02
    0.016798928 = product of:
      0.044797145 = sum of:
        0.021748468 = weight(_text_:retrieval in 1169) [ClassicSimilarity], result of:
          0.021748468 = score(doc=1169,freq=10.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.22419426 = fieldWeight in 1169, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1169)
        0.0074559003 = product of:
          0.014911801 = sum of:
            0.014911801 = weight(_text_:system in 1169) [ClassicSimilarity], result of:
              0.014911801 = score(doc=1169,freq=4.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.14763528 = fieldWeight in 1169, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1169)
          0.5 = coord(1/2)
        0.015592776 = product of:
          0.031185552 = sum of:
            0.031185552 = weight(_text_:etc in 1169) [ClassicSimilarity], result of:
              0.031185552 = score(doc=1169,freq=2.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.17953302 = fieldWeight in 1169, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1169)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    T.1: Today's thesauri are mostly electronic tools, having moved on from the paper-based era when thesaurus standards were first developed. They are built and maintained with the support of software and need to integrate with other software, such as search engines and content management systems. Whereas in the past thesauri were designed for information professionals trained in indexing and searching, today there is a demand for vocabularies that untrained users will find to be intuitive. ISO 25964 makes the transition needed for the world of electronic information management. However, part 1 retains the assumption that human intellect is usually involved in the selection of indexing terms and in the selection of search terms. If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved. This is the main principle underlying thesaurus design, even though a thesaurus built for human users may also be applied in situations where computers make the choices. Efficient exchange of data is a vital component of thesaurus management and exploitation. Hence the inclusion in this standard of recommendations for exchange formats and protocols. Adoption of these will facilitate interoperability between thesaurus management systems and the other computer applications, such as indexing and retrieval systems, that will utilize the data. Thesauri are typically used in post-coordinate retrieval systems, but may also be applied to hierarchical directories, pre-coordinate indexes and classification systems. Increasingly, thesaurus applications need to mesh with others, such as automatic categorization schemes, free-text search systems, etc. Part 2 of ISO 25964 describes additional types of structured vocabulary and gives recommendations to enable interoperation of the vocabularies at all stages of the information storage and retrieval process.
    T.2: The ability to identify and locate relevant information among vast collections and other resources is a major and pressing challenge today. Several different types of vocabulary are in use for this purpose. Some of the most widely used vocabularies were designed a hundred years ago and have been evolving steadily. A different generation of vocabularies is now emerging, designed to exploit the electronic media more effectively. A good understanding of the previous generation is still essential for effective access to collections indexed with them. An important object of ISO 25964 as a whole is to support data exchange and other forms of interoperability in circumstances in which more than one structured vocabulary is applied within one retrieval system or network. Sometimes one vocabulary has to be mapped to another, and it is important to understand both the potential and the limitations of such mappings. In other systems, a thesaurus is mapped to a classification scheme, or an ontology to a thesaurus. Comprehensive interoperability needs to cover the whole range of vocabulary types, whether young or old. Concepts in different vocabularies are related only in that they have the same or similar meaning. However, the meaning can be found in a number of different aspects within each particular type of structured vocabulary: - within terms or captions selected in different languages; - in the notation assigned indicating a place within a larger hierarchy; - in the definition, scope notes, history notes and other notes that explain the significance of that concept; and - in explicit relationships to other concepts or entities within the same vocabulary. In order to create mappings from one structured vocabulary to another it is first necessary to understand, within the context of each different type of structured vocabulary, the significance and relative importance of each of the different elements in defining the meaning of that particular concept. ISO 25964-1 describes the key characteristics of thesauri along with additional advice on best practice. ISO 25964-2 focuses on other types of vocabulary and does not attempt to cover all aspects of good practice. It concentrates on those aspects which need to be understood if one of the vocabularies is to work effectively alongside one or more of the others. Recognizing that a new standard cannot be applied to some existing vocabularies, this part of ISO 25964 provides informative description alongside the recommendations, the aim of which is to enable users and system developers to interpret and implement the existing vocabularies effectively. The remainder of ISO 25964-2 deals with the principles and practicalities of establishing mappings between vocabularies.
    Issue
    Pt.1: Thesauri for information retrieval - Pt.2: Interoperability with other vocabularies.
  5. Shieh, J.: PCC's work on URIs in MARC (2020) 0.01
    0.014779588 = product of:
      0.059118353 = sum of:
        0.017537614 = product of:
          0.03507523 = sum of:
            0.03507523 = weight(_text_:29 in 122) [ClassicSimilarity], result of:
              0.03507523 = score(doc=122,freq=2.0), product of:
                0.11281017 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.032069415 = queryNorm
                0.31092256 = fieldWeight in 122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=122)
          0.5 = coord(1/2)
        0.041580737 = product of:
          0.08316147 = sum of:
            0.08316147 = weight(_text_:etc in 122) [ClassicSimilarity], result of:
              0.08316147 = score(doc=122,freq=2.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.47875473 = fieldWeight in 122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.0625 = fieldNorm(doc=122)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    In 2015, the PCC Task Group on URIs in MARC was charged with identifying and addressing the deployment of linked data identifiers in the current MARC format. By way of a pilot test, a survey, MARC Discussion Papers, Proposals, etc., the Task Group initiated and introduced changes to MARC encoding. The Task Group succeeded in laying the groundwork for the transition of library data from MARC to a linked data (RDF) environment.
    Date
    2. 2.2021 18:29:15
  6. Kernernman, V.Y.; Koenig, M.E.D.: USMARC as a standardized format for the Internet hypermedia document control/retrieval/delivery system design (1996) 0.01
    0.014174518 = product of:
      0.056698073 = sum of:
        0.032094855 = weight(_text_:retrieval in 5565) [ClassicSimilarity], result of:
          0.032094855 = score(doc=5565,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33085006 = fieldWeight in 5565, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5565)
        0.024603218 = product of:
          0.049206436 = sum of:
            0.049206436 = weight(_text_:system in 5565) [ClassicSimilarity], result of:
              0.049206436 = score(doc=5565,freq=8.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.4871716 = fieldWeight in 5565, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5565)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Surveys how the USMARC integrated bibliographic format (UBIF) could be mapped onto a hypermedia document USMARC format (HDUF) to meet the requirements of a hypermedia document control/retrieval/delivery (HDRD) system for the Internet. Explores the characteristics of such a system using the example of the WWW directory and search engine Yahoo!. Discusses the additional standard specifications for the UBIF's structure, content designation, and data content needed to map this format into the HDUF, which can then serve as a proxy for the Net HDRD system.
  7. Leazer, G.H.: ¬A conceptual schema for the control of bibliographic works (1994) 0.01
    0.011909999 = product of:
      0.047639996 = sum of:
        0.032420702 = weight(_text_:retrieval in 3033) [ClassicSimilarity], result of:
          0.032420702 = score(doc=3033,freq=8.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.33420905 = fieldWeight in 3033, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3033)
        0.015219294 = product of:
          0.030438587 = sum of:
            0.030438587 = weight(_text_:system in 3033) [ClassicSimilarity], result of:
              0.030438587 = score(doc=3033,freq=6.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.30135927 = fieldWeight in 3033, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3033)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    In this paper I describe a conceptual design of a bibliographic retrieval system that enables more thorough control of bibliographic entities. A bibliographic entity has 2 components: the intellectual work and the physical item. Users searching bibliographic retrieval systems generally do not search for a specific item, but are willing to retrieve one of several alternative manifestations of a work. However, contemporary bibliographic retrieval systems are based solely on the descriptions of items. Works are described only implicitly by collocating descriptions of items. This method has resulted in a tool that does not include important descriptive attributes of the work, e.g. information regarding its history, its genre, or its bibliographic relationships. A bibliographic relationship is an association between 2 bibliographic entities. A system evaluation methodology was used to create a conceptual schema for a bibliographic retrieval system. The model is based upon an analysis of data elements in the USMARC Formats for Bibliographic Data. The conceptual schema describes a database comprising 2 separate files of bibliographic descriptions, one of works and the other of items. Each file consists of individual descriptive surrogates of their respective entities. The specific data content of each file is defined by a data dictionary. Data elements used in the description of bibliographic works reflect the nature of works as intellectual and linguistic objects. The descriptive elements of bibliographic items describe the physical properties of bibliographic entities. Bibliographic relationships constitute the logical structure of the database.
  8. Paulus, W.; Weishaupt, K.: Bibliotheksdaten werden mehr wert : LibLink wertet bibliothekarische Dienstleistung auf (1996) 0.01
    0.010911709 = product of:
      0.043646835 = sum of:
        0.021922018 = product of:
          0.043844037 = sum of:
            0.043844037 = weight(_text_:29 in 5228) [ClassicSimilarity], result of:
              0.043844037 = score(doc=5228,freq=2.0), product of:
                0.11281017 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.032069415 = queryNorm
                0.38865322 = fieldWeight in 5228, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5228)
          0.5 = coord(1/2)
        0.021724815 = product of:
          0.04344963 = sum of:
            0.04344963 = weight(_text_:22 in 5228) [ClassicSimilarity], result of:
              0.04344963 = score(doc=5228,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.38690117 = fieldWeight in 5228, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5228)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Date
    29. 9.1996 18:58:22
  9. Bales, K.: ¬The USMARC formats and visual materials (1989) 0.01
    0.010829104 = product of:
      0.043316416 = sum of:
        0.025936563 = weight(_text_:retrieval in 2861) [ClassicSimilarity], result of:
          0.025936563 = score(doc=2861,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.26736724 = fieldWeight in 2861, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2861)
        0.017379852 = product of:
          0.034759704 = sum of:
            0.034759704 = weight(_text_:22 in 2861) [ClassicSimilarity], result of:
              0.034759704 = score(doc=2861,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.30952093 = fieldWeight in 2861, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2861)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Paper presented at a symposium on 'Implementing the Art and Architecture Thesaurus (AAT): Controlled Vocabulary in the Extended MARC format', held at the 1989 Annual Conference of the Art Libraries Society of North America. Describes how changes are effected in MARC and the role of the various groups in the library community that are involved in implementing these changes. Discusses the expansion of the formats to accommodate cataloguing and retrieval for visual materials. Expanded capabilities for coding visual materials offer greater opportunity for user access.
    Date
    4.12.1995 22:40:20
  10. Oehlschläger, S.: Aus der 48. Sitzung der Arbeitsgemeinschaft der Verbundsysteme am 12. und 13. November 2004 in Göttingen (2005) 0.01
    0.0108186975 = product of:
      0.02884986 = sum of:
        0.011462449 = weight(_text_:retrieval in 3556) [ClassicSimilarity], result of:
          0.011462449 = score(doc=3556,freq=4.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.11816074 = fieldWeight in 3556, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3556)
        0.0043934314 = product of:
          0.008786863 = sum of:
            0.008786863 = weight(_text_:system in 3556) [ClassicSimilarity], result of:
              0.008786863 = score(doc=3556,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.08699492 = fieldWeight in 3556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3556)
          0.5 = coord(1/2)
        0.01299398 = product of:
          0.02598796 = sum of:
            0.02598796 = weight(_text_:etc in 3556) [ClassicSimilarity], result of:
              0.02598796 = score(doc=3556,freq=2.0), product of:
                0.17370372 = queryWeight, product of:
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.032069415 = queryNorm
                0.14961085 = fieldWeight in 3556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4164915 = idf(docFreq=533, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=3556)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Content
    Die Deutsche Bibliothek - Retrieval of content: The project aims to develop and introduce procedures that, automatically and without intellectual processing, provide sufficient search entry points for content retrieval. This may involve searching the content of full texts, digital images, audio files, video files, etc. of digital resources archived at Die Deutsche Bibliothek, or of digital surrogates of archived analogue resources (e.g. OCR results). Content that exists in electronic form but has so far been unavailable, or only available to a limited extent, to Internet users of Die Deutsche Bibliothek is to be made usable as comprehensively and conveniently as possible. In addition, content that has a descriptive character for an object catalogued in ILTIS is to be used to point to the object described. The highest priority is given to content in text formats. As a first step, the full text of all journals digitized in the project "Exilpresse digital" was used for an extended search. In a next step, the PSI software will be evaluated for the full-text indexing of abstracts. - MILOS: The use of MILOS opens up the possibility of automatically enriching holdings with little or no subject indexing with supplementary subject information; the emphasis is on free-text indexing. The system, already in use in several libraries and meanwhile licensed for Germany by Die Deutsche Bibliothek, has been ported to a UNIX version and adapted. By now almost the entire stock has been processed retrospectively, and the data will be available for searching in the union OPAC. The index entries, stored in an XML structure, are fully indexed and made accessible. A further development step will be the use of MILOS as an online procedure.
  11. Guenther, R.S.: ¬The USMARC Format for Classification Data : development and implementation (1992) 0.01
    0.009998886 = product of:
      0.039995544 = sum of:
        0.025936563 = weight(_text_:retrieval in 2996) [ClassicSimilarity], result of:
          0.025936563 = score(doc=2996,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.26736724 = fieldWeight in 2996, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=2996)
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 2996) [ClassicSimilarity], result of:
              0.028117962 = score(doc=2996,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 2996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2996)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This paper discusses the newly developed USMARC Format for Classification Data. It reviews its potential uses within an online system and its development as one of the USMARC standards for representing bibliographic and related information in machine-readable form. It provides a summary of the fields in the format, and considers the prospects for its implementation.
    Theme
    Klassifikationssysteme im Online-Retrieval
  12. Guenther, R.S.: ¬The development and implementation of the USMARC format for classification data (1992) 0.01
    0.009998886 = product of:
      0.039995544 = sum of:
        0.025936563 = weight(_text_:retrieval in 8865) [ClassicSimilarity], result of:
          0.025936563 = score(doc=8865,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.26736724 = fieldWeight in 8865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=8865)
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 8865) [ClassicSimilarity], result of:
              0.028117962 = score(doc=8865,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 8865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8865)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    This paper discusses the newly developed USMARC Format for Classification Data. It reviews its potential uses within an online system and its development as one of the USMARC standards. It provides a summary of the fields in the format and considers the prospects for its implementation. The paper describes an experiment currently being conducted at the Library of Congress to create USMARC classification records and use a classification database in classifying materials in the social sciences.
    Theme
    Klassifikationssysteme im Online-Retrieval
  13. Mueller, C.J.; Whittaker, M.A.: What is this thing called MARC(S)? (1990) 0.01
    0.009998886 = product of:
      0.039995544 = sum of:
        0.025936563 = weight(_text_:retrieval in 3588) [ClassicSimilarity], result of:
          0.025936563 = score(doc=3588,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.26736724 = fieldWeight in 3588, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3588)
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 3588) [ClassicSimilarity], result of:
              0.028117962 = score(doc=3588,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 3588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3588)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Contribution to an issue devoted to serials and reference services. Familiarity with the basic elements of the MARC format and their effect on the display and retrieval of bibliographic data is an essential element of public service in those libraries with MARC-based on-line catalogues. Describes the components of a MARC record. To successfully retrieve the information sought from an on-line catalogue, the catalogue user must know whether it is in an indexed field and, if so, must be familiar with the search strategies required by the system.
  14. Guenther, R.S.: ¬The Library of Congress Classification in the USMARC format (1994) 0.01
    0.008749025 = product of:
      0.0349961 = sum of:
        0.02269449 = weight(_text_:retrieval in 8864) [ClassicSimilarity], result of:
          0.02269449 = score(doc=8864,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.23394634 = fieldWeight in 8864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8864)
        0.012301609 = product of:
          0.024603218 = sum of:
            0.024603218 = weight(_text_:system in 8864) [ClassicSimilarity], result of:
              0.024603218 = score(doc=8864,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.2435858 = fieldWeight in 8864, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=8864)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The paper reviews the development of the USMARC Format for Classification Data, a standard for the communication of classification data in machine-readable form. It considers the uses of online classification schedules for both technical services and reference functions, and gives an overview of the format specification, the data elements used, and the structure of the records. The paper describes an experiment conducted at the Library of Congress to test the format, as well as the development of the classification database encompassing the LCC schedules. Features of the classification system are given. The LoC will complete its conversion of the LCC in mid-1995.
    Theme
    Klassifikationssysteme im Online-Retrieval
  15. Guenther, R.S.: Automating the Library of Congress Classification Scheme : implementation of the USMARC format for classification data (1996) 0.01
    0.008749025 = product of:
      0.0349961 = sum of:
        0.02269449 = weight(_text_:retrieval in 5578) [ClassicSimilarity], result of:
          0.02269449 = score(doc=5578,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.23394634 = fieldWeight in 5578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5578)
        0.012301609 = product of:
          0.024603218 = sum of:
            0.024603218 = weight(_text_:system in 5578) [ClassicSimilarity], result of:
              0.024603218 = score(doc=5578,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.2435858 = fieldWeight in 5578, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5578)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Potential uses for classification data in machine-readable form, and the reasons for developing a standard, the USMARC Format for Classification Data, which allows classification data to interact with other USMARC bibliographic and authority data, are discussed. The development, structure, content, and use of the standard are reviewed, with implementation decisions for the Library of Congress Classification scheme noted. The author examines the implementation of USMARC classification at LC, the conversion of the schedules, and the functionality of the software being used. Problems in the effort are explored, and enhancements desired for the online classification system are considered.
    Theme
    Klassifikationssysteme im Online-Retrieval
  16. Carini, P.; Shepherd, K.: ¬The MARC standard and encoded archival description (2004) 0.01
    0.008729367 = product of:
      0.034917466 = sum of:
        0.017537614 = product of:
          0.03507523 = sum of:
            0.03507523 = weight(_text_:29 in 2830) [ClassicSimilarity], result of:
              0.03507523 = score(doc=2830,freq=2.0), product of:
                0.11281017 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.032069415 = queryNorm
                0.31092256 = fieldWeight in 2830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2830)
          0.5 = coord(1/2)
        0.017379852 = product of:
          0.034759704 = sum of:
            0.034759704 = weight(_text_:22 in 2830) [ClassicSimilarity], result of:
              0.034759704 = score(doc=2830,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.30952093 = fieldWeight in 2830, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2830)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Date
    9.12.2005 19:29:32
    Source
    Library hi tech. 22(2004) no.1, S.18-27
  17. Mishra, K.S.: Bibliographic databases and exchange formats (1997) 0.01
    0.007859709 = product of:
      0.031438835 = sum of:
        0.014058981 = product of:
          0.028117962 = sum of:
            0.028117962 = weight(_text_:system in 1757) [ClassicSimilarity], result of:
              0.028117962 = score(doc=1757,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.27838376 = fieldWeight in 1757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1757)
          0.5 = coord(1/2)
        0.017379852 = product of:
          0.034759704 = sum of:
            0.034759704 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
              0.034759704 = score(doc=1757,freq=2.0), product of:
                0.112301625 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.032069415 = queryNorm
                0.30952093 = fieldWeight in 1757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1757)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Computers play an important role in the development of bibliographic databases. Exchange formats are needed for the generation and exchange of bibliographic data at different levels: international, national, regional and local. Discusses the formats available at the national and international levels, such as the International Standard Exchange Format (ISO 2709), the various MARC formats and the Common Communication Format (CCF). Work on Indian standards, involving the Bureau of Indian Standards, the National Information System for Science and Technology (NISSAT) and other institutions, is proceeding only slowly.
    Source
    DESIDOC bulletin of information technology. 17(1997) no.5, S.17-22
  18. Taylor, M.; Dickmeiss, A.: Delivering MARC/XML records from the Library of Congress catalogue using the open protocols SRW/U and Z39.50 (2005) 0.01
    0.007499164 = product of:
      0.029996656 = sum of:
        0.019452421 = weight(_text_:retrieval in 4350) [ClassicSimilarity], result of:
          0.019452421 = score(doc=4350,freq=2.0), product of:
            0.09700725 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.032069415 = queryNorm
            0.20052543 = fieldWeight in 4350, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4350)
        0.010544236 = product of:
          0.021088472 = sum of:
            0.021088472 = weight(_text_:system in 4350) [ClassicSimilarity], result of:
              0.021088472 = score(doc=4350,freq=2.0), product of:
                0.10100432 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.032069415 = queryNorm
                0.20878783 = fieldWeight in 4350, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4350)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The MARC standard for representing catalogue records and the Z39.50 standard for locating and retrieving them have facilitated interoperability in the library domain for more than a decade. With the increasing ubiquity of XML, these standards are being superseded by MARCXML and MarcXchange for record representation and SRW/U for searching and retrieval. Service providers moving from the older standards to the newer generally need to support both old and new forms during the transition period. YAZ Proxy uses a novel approach to provide SRW/MARCXML access to the Library of Congress catalogue, by translating requests into Z39.50 and querying the older system directly. As a fringe benefit, it also greatly accelerates Z39.50 access.
  19. Passin-Aguirre, N.; Leresche, F.: ¬Le format INTERMARC integre : futur format de travail de la BNF (1997) 0.01
    0.006911755 = product of:
      0.05529404 = sum of:
        0.05529404 = sum of:
          0.024603218 = weight(_text_:system in 915) [ClassicSimilarity], result of:
            0.024603218 = score(doc=915,freq=2.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.2435858 = fieldWeight in 915, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0546875 = fieldNorm(doc=915)
          0.030690823 = weight(_text_:29 in 915) [ClassicSimilarity], result of:
            0.030690823 = score(doc=915,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.27205724 = fieldWeight in 915, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0546875 = fieldNorm(doc=915)
      0.125 = coord(1/8)
    
    Abstract
    The French National Library (BNF) has developed 2 new versions of INTERMARC, (A) and (B), to standardise cataloguing procedures and enrich bibliographic description and access. The bibliographic description format (B) accords with the existing ISBDs and can be used for all types of documents, allowing the inclusion of specific characteristics and the addition of new links. The format for editing records (A) eliminates redundancies and enriches links between fields. Both will be used as reference formats in the new Information System.
    Date
    29. 1.1996 16:50:24
  20. Provansal, A.: Neuf mois après (1997) 0.01
    0.006911755 = product of:
      0.05529404 = sum of:
        0.05529404 = sum of:
          0.024603218 = weight(_text_:system in 917) [ClassicSimilarity], result of:
            0.024603218 = score(doc=917,freq=2.0), product of:
              0.10100432 = queryWeight, product of:
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.032069415 = queryNorm
              0.2435858 = fieldWeight in 917, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1495528 = idf(docFreq=5152, maxDocs=44218)
                0.0546875 = fieldNorm(doc=917)
          0.030690823 = weight(_text_:29 in 917) [ClassicSimilarity], result of:
            0.030690823 = score(doc=917,freq=2.0), product of:
              0.11281017 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.032069415 = queryNorm
              0.27205724 = fieldWeight in 917, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0546875 = fieldNorm(doc=917)
      0.125 = coord(1/8)
    
    Abstract
    Electronic documents are creating new services and generating new demands, with consequent impacts on the means of transmitting knowledge, on international standards and on the democratisation of access. Universal bibliographic control depends on common rules for bibliographic description and format to ensure compatibility and exchange. In addition to the ISBN and UNIMARC for cataloguing, Z39.50 allows the searching of heterogeneous databases and SGML makes cataloguing in publication a reality. Such developments must be based on knowledge of what users want and of their real search and consultation practices, not on what the system designers have the technology to create.
    Date
    29. 1.1996 16:50:24
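
Note on the relevance figures: the number after each title and the indented breakdown beneath it are Lucene "explain" output for the classic TF-IDF similarity (coord times the sum of per-term queryWeight times fieldWeight). The following Python fragment is a minimal sketch, not part of the search system, that reconstructs the 0.04 score of result 1 from the constants shown in its breakdown (maxDocs, docFreq, queryNorm, fieldNorm); the function and variable names are illustrative only.

  import math

  # Hypothetical reconstruction of Lucene ClassicSimilarity scoring for result 1,
  # using only the constants printed in its explain tree above.
  MAX_DOCS = 44218          # maxDocs from the explain output
  QUERY_NORM = 0.032069415  # queryNorm from the explain output
  FIELD_NORM = 0.078125     # fieldNorm(doc=62)
  COORD = 2 / 8             # coord(2/8): 2 of 8 query clauses matched

  def idf(doc_freq: int) -> float:
      # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
      return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

  def clause_score(term_freq: float, doc_freq: int, inner_coord: float = 0.5) -> float:
      # One term clause: queryWeight (idf * queryNorm) times
      # fieldWeight (sqrt(tf) * idf * fieldNorm), scaled by the inner coord(1/2).
      tf = math.sqrt(term_freq)
      query_weight = idf(doc_freq) * QUERY_NORM
      field_weight = tf * idf(doc_freq) * FIELD_NORM
      return query_weight * field_weight * inner_coord

  # "system": freq=4, docFreq=5152; "manuals": freq=4, docFreq=71
  score = COORD * (clause_score(4.0, 5152) + clause_score(4.0, 71))
  print(f"{score:.7f}")  # ~0.0407002, matching the 0.04 shown for result 1

The same pattern accounts for every other breakdown on this page; only the term statistics, the fieldNorm and the coord factors change from document to document.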

Languages

  • e 90
  • d 32
  • f 9
  • pl 1
  • sp 1

Types

  • a 117
  • s 10
  • m 9
  • el 5
  • b 2
  • l 2
  • n 2
  • x 2