Search (162 results, page 1 of 9)

  • theme_ss:"Datenformate"
  1. Carvalho, J.R. de; Cordeiro, M.I.; Lopes, A.; Vieira, M.: Meta-information about MARC : an XML framework for validation, explanation and help systems (2004) 0.07
    Abstract
    This article proposes a schema for meta-information about MARC that can express, at a fairly comprehensive level, the syntactic and semantic aspects of MARC formats in XML, including not only rules but also all texts and examples conveyed by MARC documentation. It can be thought of as an XML version of the MARC or UNIMARC manuals, for both machine and human use. The article explains how such a schema can be the central piece of a more complete framework, used in conjunction with "slim" record formats, to provide a rich environment for the automated processing of bibliographic data. (A minimal sketch of the idea follows this entry.)
    Source
    Library hi tech. 22(2004) no.2, S.131-137
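
    The proposed schema is not reproduced here, but the following Python sketch illustrates the kind of machine-actionable meta-information the article describes: an XML fragment carrying both a validation rule and help text for a MARC field, usable by a validator and a help system alike. All element and attribute names (metamarc, field, subfield, help, mandatory) are invented for illustration and are not the authors' schema.

# Minimal sketch with invented element names; not the authors' actual schema.
import xml.etree.ElementTree as ET

META = """
<metamarc>
  <field tag="245" repeatable="false">
    <help>Title Statement; subfield $a carries the title proper.</help>
    <subfield code="a" mandatory="true"/>
  </field>
</metamarc>
"""

def validate(tag, subfields, meta_root):
    """Check one record field against the meta-information; return error strings."""
    spec = meta_root.find(f"./field[@tag='{tag}']")
    if spec is None:
        return [f"unknown field {tag}"]
    return [
        f"{tag}: mandatory subfield ${sf.get('code')} missing"
        for sf in spec.findall("subfield")
        if sf.get("mandatory") == "true" and sf.get("code") not in subfields
    ]

meta = ET.fromstring(META)
print(validate("245", {"b": "a subtitle"}, meta))   # ['245: mandatory subfield $a missing']
print(meta.find("./field[@tag='245']/help").text)   # help text, e.g. for a cataloguing client
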
  2. Kurth, M.; Ruddy, D.; Rupp, N.: Repurposing MARC metadata : using digital project experience to develop a metadata management design (2004) 0.06
    Abstract
    Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non-MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non-MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University Library's mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
    Source
    Library hi tech. 22(2004) no.2, S.144-152
  3. Lee, S.; Jacob, E.K.: An integrated approach to metadata interoperability : construction of a conceptual structure between MARC and FRBR (2011) 0.04
    Abstract
    Machine-Readable Cataloging (MARC) is currently the most broadly used bibliographic standard for encoding and exchanging bibliographic data. However, MARC may not fully support representation of the dynamic nature and semantics of digital resources because of its rigid and single-layered linear structure. The Functional Requirements for Bibliographic Records (FRBR) model, which is designed to overcome the problems of MARC, does not provide sufficient data elements and adopts a predetermined hierarchy. A flexible structure for bibliographic data with detailed data elements is needed. Integrating the MARC format with the hierarchical structure of FRBR is one approach to meeting this need. The purpose of this research is to propose an approach that can facilitate interoperability between MARC and FRBR by providing a conceptual structure that can function as a mediator between MARC data elements and FRBR attributes. (A toy version of such a mediating structure is sketched after this entry.)
    Date
    10. 9.2000 17:38:22
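
    The conceptual structure itself is not reproduced here; the following toy Python sketch merely illustrates what a mediator between MARC data elements and FRBR attributes could look like. The specific field-to-attribute pairs are simplified examples, not the authors' mapping.

# Toy mediator between MARC (tag, subfield) pairs and FRBR entity attributes;
# the pairs below are simplified examples, not the authors' conceptual structure.
MARC_TO_FRBR = {
    ("245", "a"): ("work", "title"),
    ("100", "a"): ("work", "creator"),
    ("250", "a"): ("manifestation", "edition_statement"),
    ("260", "c"): ("manifestation", "date_of_publication"),
    ("020", "a"): ("manifestation", "identifier_isbn"),
}

def marc_to_frbr(fields):
    """fields: iterable of (tag, subfield_code, value) triples from one MARC record."""
    entities = {"work": {}, "expression": {}, "manifestation": {}}
    for tag, code, value in fields:
        target = MARC_TO_FRBR.get((tag, code))
        if target:
            entity, attribute = target
            entities[entity][attribute] = value
    return entities

record = [("100", "a", "Austen, Jane"), ("245", "a", "Pride and prejudice"), ("260", "c", "2003")]
print(marc_to_frbr(record))
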
  4. Caplan, P.; Guenther, R.: Metadata for Internet resources : the Dublin Core Metadata Elements Set and its mapping to USMARC (1996) 0.04
    Abstract
    This paper discusses the goals and outcome of the OCLC/NCSA Metadata Workshop held March 1-3, 1995 in Dublin, Ohio. The resulting proposed "Dublin Core" Metadata Elements Set is described briefly. An attempt is made to map the Dublin Core data elements to USMARC; problems and outstanding questions are noted. (A simplified version of such a crosswalk is sketched after this entry.)
    Date
    13. 1.2007 18:31:22
    Source
    Cataloging and classification quarterly. 22(1996) nos.3/4, S.43-58
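
    As a rough illustration of the mapping the paper describes, here is a simplified Dublin Core to USMARC crosswalk in Python. The tag choices echo commonly cited DC-to-MARC crosswalks but are illustrative; they are not the paper's authoritative table.

# Simplified, illustrative Dublin Core -> USMARC crosswalk.
DC_TO_USMARC = {
    "title":       ("245", "a"),
    "creator":     ("720", "a"),  # uncontrolled name
    "subject":     ("653", "a"),  # uncontrolled index term
    "description": ("520", "a"),
    "publisher":   ("260", "b"),
    "date":        ("260", "c"),
    "identifier":  ("024", "a"),
}

def dc_to_usmarc(dc_record):
    """Map {dc_element: [values]} to a list of (tag, subfield, value) MARC fields."""
    marc_fields = []
    for element, values in dc_record.items():
        mapping = DC_TO_USMARC.get(element.lower())
        if mapping is None:
            continue  # unmappable elements are among the paper's 'outstanding questions'
        tag, code = mapping
        marc_fields.extend((tag, code, v) for v in values)
    return marc_fields

print(dc_to_usmarc({"Title": ["Metadata for Internet resources"], "Creator": ["Caplan, P."]}))
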
  5. McCallum, S.H.: An introduction to the Metadata Object Description Schema (MODS) (2004) 0.03
    Abstract
    This paper provides an introduction to the Metadata Object Description Schema (MODS), a MARC21-compatible XML schema for descriptive metadata. It explains the requirements that the schema targets and the special features that differentiate it from MARC, such as user-oriented tags, regrouped data elements, linking, recursion, and accommodations for electronic resources. (A minimal MODS record is sketched after this entry.)
    Source
    Library hi tech. 22(2004) no.1, S.82-88
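
    For orientation, here is a minimal, hand-trimmed MODS record showing the user-oriented tags the paper mentions (titleInfo/title rather than MARC's numeric 245), parsed with Python's standard library. The namespace is the real MODS namespace; the record content is invented.

# Minimal MODS record, trimmed for illustration.
import xml.etree.ElementTree as ET

MODS = """
<mods xmlns="http://www.loc.gov/mods/v3">
  <titleInfo><title>An introduction to MODS</title></titleInfo>
  <name type="personal"><namePart>McCallum, Sally H.</namePart></name>
  <originInfo><dateIssued>2004</dateIssued></originInfo>
</mods>
"""

NS = {"m": "http://www.loc.gov/mods/v3"}
root = ET.fromstring(MODS)
print(root.findtext("m:titleInfo/m:title", namespaces=NS))        # An introduction to MODS
print(root.findtext("m:originInfo/m:dateIssued", namespaces=NS))  # 2004
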
  6. Mishra, K.S.: Bibliographic databases and exchange formats (1997) 0.03
    Abstract
    Computers play an important role in the development of bibliographic databases. Exchange formats are needed for the generation and exchange of bibliographic data at different levels: international, national, regional, and local. Discusses the formats available at the national and international levels, such as the International Standard Exchange Format (ISO 2709), the various MARC formats, and the Common Communication Format (CCF); the record structure ISO 2709 prescribes is sketched after this entry. Work on Indian standards, involving the Bureau of Indian Standards, the National Information System for Science and Technology (NISSAT) and other institutions, is proceeding only slowly.
    Source
    DESIDOC bulletin of information technology. 17(1997) no.5, S.17-22
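
    As background to the formats the article surveys, this Python sketch shows the record structure ISO 2709 prescribes: a 24-character leader, a directory of 12-byte entries (3-byte tag, 4-byte field length, 5-byte starting position), then the data fields. The sample record is synthetic.

# ISO 2709 structure sketch; the record below is hand-built for illustration.
FT, RT = b"\x1e", b"\x1d"  # field / record terminators

record = (
    b"00049" + b"n a22" + b"  "         # leader bytes 0-11: record length, filler
    + b"00037" + b"   4500"             # bytes 12-16: base address of data; filler
    + b"245" + b"0011" + b"00000" + FT  # directory: one entry, then terminator
    + b"Test title" + FT + RT           # data field for tag 245
)

def parse_iso2709(rec):
    base = int(rec[12:17])              # base address of data, from the leader
    directory, fields = rec[24:base - 1], {}
    for i in range(0, len(directory), 12):
        tag = directory[i:i + 3].decode()
        length = int(directory[i + 3:i + 7])
        start = int(directory[i + 7:i + 12])
        fields[tag] = rec[base + start: base + start + length].rstrip(FT).decode()
    return fields

print(parse_iso2709(record))  # {'245': 'Test title'}
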
  7. Carini, P.; Shepherd, K.: The MARC standard and encoded archival description (2004) 0.03
    Abstract
    This case study details the evolution of descriptive practices and standards used in the Mount Holyoke College Archives and the Five College Finding Aids Access Project, discusses the relationship of Encoded Archival Description (EAD) and the MARC standard in reference to archival description, and addresses the challenges and opportunities of transferring data from one metadata standard to another. The study demonstrates that greater standardization in archival description allows archivists to respond more effectively to technological change.
    Source
    Library hi tech. 22(2004) no.1, S.18-27
  8. Cranefield, S.: Networked knowledge representation and exchange using UML and RDF (2001) 0.03
    Abstract
    This paper proposes the use of the Unified Modeling Language (UML) as a language for modelling ontologies for Web resources and the knowledge contained within them. To provide a mechanism for serialising and processing object diagrams representing knowledge, a pair of XSLT stylesheets has been developed to map from XML Metadata Interchange (XMI) encodings of class diagrams to corresponding RDF schemas and to Java classes representing the concepts in the ontologies. The Java code includes methods for marshalling and unmarshalling object-oriented information between in-memory data structures and RDF serialisations of that information. This provides a convenient mechanism for Java applications to share knowledge on the Web. (The transformation step is sketched below.)
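
    A sketch of the transformation step described above, using the third-party lxml library to apply an XSLT stylesheet to an XMI-encoded class diagram. The file names are placeholders; the paper's actual stylesheets are not reproduced here.

# Apply an XSLT stylesheet to an XMI input; requires the third-party lxml package.
from lxml import etree

def xmi_to_rdf(xmi_path, stylesheet_path, out_path):
    transform = etree.XSLT(etree.parse(stylesheet_path))  # compile the stylesheet
    rdf_schema = transform(etree.parse(xmi_path))         # apply it to the XMI input
    with open(out_path, "wb") as f:
        f.write(etree.tostring(rdf_schema, pretty_print=True))

# xmi_to_rdf("ontology.xmi", "xmi2rdfs.xsl", "ontology.rdfs")  # hypothetical paths
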
  9. Andresen, L.: After MARC - what then? (2004) 0.03
    Abstract
    The article discusses the future of the MARC formats and outlines how future cataloguing practice and bibliographic records might look. The background and basic functionality of the MARC formats are outlined, and it is pointed out that MARC is manifest in several different formats. This is illustrated through a comparison between the MARC21 format and the Danish MARC format "danMARC2". It is argued that present cataloguing codes and MARC formats are based primarily on the Paris principles, and that the "Functional Requirements for Bibliographic Records" (FRBR) would serve as a more solid and user-oriented platform for the future development of cataloguing codes and formats. Furthermore, it is argued that MARC is a library-specific format, which facilitates neither exchange with sectors outside the library world nor the inclusion of other kinds of text. XML could serve as the technical platform for a model for future registrations, consisting of a set of core data plus the different supplements of data needed by different sectors and for different purposes (a toy rendering of this idea follows this entry).
    Source
    Library hi tech. 22(2004) no.1, S.40-51
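
    A toy rendering of the article's core-plus-supplements idea in XML, with Python extracting the sector-neutral core. All element names are invented for illustration; the article does not prescribe a concrete schema.

# Invented core-plus-supplements record; not a real standard.
import xml.etree.ElementTree as ET

RECORD = """
<registration>
  <core>
    <title>After MARC - what then?</title>
    <creator>Andresen, Leif</creator>
    <date>2004</date>
  </core>
  <supplement sector="library">
    <classification scheme="DDC">025.3</classification>
  </supplement>
  <supplement sector="publisher">
    <price currency="EUR">38.00</price>
  </supplement>
</registration>
"""

root = ET.fromstring(RECORD)
core = {el.tag: el.text for el in root.find("core")}
print(core)  # a consumer outside the library sector can stop at the core data
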
  10. Gopinath, M.A.: Standardization for resource sharing databases (1995) 0.03
    Abstract
    Adopting shareable standards for bibliographic information, project descriptions, and institutional information is essential for access to information resources within a country. Describes a strategy for adopting international standards of bibliographic information exchange in developing a resource-sharing facilitation database in India. A list of 22 ISO standards for information processing is included.
  11. Snow, M.: Visual depictions and the use of MARC : a view from the trenches of slide librarianship (1989) 0.03
    Abstract
    Paper presented at a symposium on 'Implementing the Art and Architecture Thesaurus (AAT): Controlled Vocabulary in the Extended MARC Format', held at the 1989 Annual Conference of the Art Libraries Society of North America. The only way to get bibliographic records onto campus online library catalogues, and slide records onto the national bibliographic utilities, is through the use of MARC. Discusses the importance of having individual slide and photograph records on the national bibliographic utilities, and considers the obstacles that currently make this difficult. Also discusses mapping to MARC from database management systems.
    Date
    4.12.1995 22:51:36
  12. Smith, J.K.; Cunningham, R.L.; Sarapata, S.P.: MARC to ENC MARC : bringing the collection forward (2004) 0.03
    Abstract
    This paper describes the way in which the USMARC cataloging schema is used at the Eisenhower National Clearinghouse (ENC). The discussion includes how ENC MARC extensions were developed for cataloging mathematics and science curriculum resources, and how the ENC workflow is integrated into the cataloging interface. It concludes with a historical look at the in-house data transfer from ENC MARC to the current production of IEEE LOM XML encoding for record sharing and OAI compliance, required under the NSDL project guidelines.
    Source
    Library hi tech. 22(2004) no.1, S.28-39
  13. Stephens, O.: Introduction to OpenRefine (2014) 0.03
    Abstract
    OpenRefine is described as a tool for working with 'messy' data - but what does this mean? It is probably easiest to describe the kinds of data OpenRefine is good at working with and the sorts of problems it can help you solve. OpenRefine is most useful where you have data in a simple tabular format but with internal inconsistencies in data formats, in where data appears, or in the terminology used. It can help you:
    • get an overview of a data set
    • resolve inconsistencies in a data set
    • split data up into more granular parts
    • match local data up to other data sets
    • enhance a data set with data from other sources
    Some common scenarios might be: 1. you want to know how many times a particular value appears in a column in your data; 2. you want to know how values are distributed across your whole data set; 3. you have a list of dates formatted in different ways and want to change them all to a single common date format. (These three scenarios are sketched in code after this entry.)
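
    For comparison, here are the three scenarios above expressed in plain Python; in OpenRefine itself they correspond roughly to a text facet and a column transform. The sample data and date formats are invented.

# Facet-like counting and date normalisation on invented sample data.
from collections import Counter
from datetime import datetime

rows = [{"place": "Dublin"}, {"place": "Dublin"}, {"place": "dublin "}]
dates = ["2014-03-01", "01/03/2014", "1 March 2014"]

# 1. + 2. How often does each value appear, and how are values distributed?
print(Counter(r["place"].strip().lower() for r in rows))  # Counter({'dublin': 3})

# 3. Normalise differently formatted dates to one common format.
def normalise(date_string, formats=("%Y-%m-%d", "%d/%m/%Y", "%d %B %Y")):
    for fmt in formats:
        try:
            return datetime.strptime(date_string, fmt).date().isoformat()
        except ValueError:
            continue
    return date_string  # leave unparseable values untouched

print([normalise(d) for d in dates])  # ['2014-03-01', '2014-03-01', '2014-03-01']
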
  14. Delfino, E.: Data file formats for exchange of data (1993) 0.03
    Abstract
    Discusses examples of ASCII data formats available in database programs that can be used for data exchange, describing the comma-delimited, fixed-length, and one-field-per-line formats (sketched side by side after this entry). Also details a WordPerfect word-processing macro for converting data in comma-delimited files from a database system into the secondary mail-merge file format of a word-processing package.
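
    The three ASCII exchange formats the article describes, shown side by side in Python for one invented record; the field widths chosen for the fixed-length layout are arbitrary.

# One invented record rendered in the three exchange formats.
import csv, io

comma_delimited = 'Delfino,"Data file formats",1993\n'

# Read the comma-delimited form with the csv module (handles quoted fields).
author, title, year = next(csv.reader(io.StringIO(comma_delimited)))

# Fixed-length format: every field padded to an agreed width.
fixed = f"{author:<20}{title:<30}{year:<4}"
print(repr(fixed))

# One-field-per-line format: field values on successive lines, with a blank
# line between records - the shape mail-merge files often expect.
one_per_line = "\n".join([author, title, year]) + "\n\n"
print(one_per_line)
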
  15. Mönch, C.; Aalberg, T.: Automatic conversion from MARC to FRBR (2003) 0.03
    Abstract
    Catalogs have for centuries been the main tool that enabled users to search for items in a library by author, title, or subject. A catalog can be interpreted as a set of bibliographic records, where each record acts as a surrogate for a publication. Every record describes a specific publication and contains the data that is used to create the indexes of search systems and the information that is presented to the user. Bibliographic records are often captured and exchanged by the use of the MARC format. Although there are numerous "dialects" of the MARC format in use, they are usually crafted on the same basis and are interoperable with each other - to a certain extent. The data model of a MARC-based catalog, however, is "[...] extremely non-normalized with excessive replication of data" [1]. For instance, a literary work that exists in numerous editions and translations is likely to yield a large result set, because each edition or translation is represented by an individual record that is unrelated to the other records describing the same work. (The grouping step that a MARC-to-FRBR conversion must perform is sketched after this entry.)
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
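
    A sketch of the grouping step at the heart of any MARC-to-FRBR conversion: collapsing the unrelated per-edition records the abstract complains about into work-level sets. Keying on normalised author and title is a simple heuristic used here for illustration; real conversions need far more careful matching than this.

# Collapse per-edition records into work-level groups via a naive key.
from collections import defaultdict

records = [
    {"author": "Ibsen, Henrik", "title": "Peer Gynt", "year": "1867"},
    {"author": "Ibsen, Henrik", "title": "PEER GYNT ", "year": "1980"},  # later edition
    {"author": "Ibsen, Henrik", "title": "A doll's house", "year": "1879"},
]

def work_key(rec):
    return (rec["author"].strip().lower(), rec["title"].strip().lower())

works = defaultdict(list)
for rec in records:
    works[work_key(rec)].append(rec)

for key, editions in works.items():
    print(key, "->", [e["year"] for e in editions])
# ('ibsen, henrik', 'peer gynt') -> ['1867', '1980'] : one work, two records
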
  16. McCallum, S.H.: Machine Readable Cataloging (MARC): 1975-2007 (2009) 0.03
    Abstract
    This entry describes the development of the MARC communications format. After a brief overview of the initial 10 years, it describes the succeeding phases of development up to the present. This takes the reader through the expansion of the format to cover all types of bibliographic data and multiple character scripts. At the same time, a large business community developed that offered products based on the format to the library community. The introduction of the Internet in the 1990s and of Web technology brought new opportunities and challenges, and the format was adapted to this new environment. There has been a great deal of international adoption of the format, continuing into the 2000s. More recently, new syntaxes and models for MARC 21 are being explored.
    Date
    27. 8.2011 14:22:38
  17. Reinke, U.: Der Austausch terminologischer Daten [The exchange of terminological data] (1993) 0.02
    Abstract
    Diploma thesis (Diplomarbeit) at the University of Saarbrücken covering the following topics: data exchange formats; terminology management systems; terminological databases; the terminological record; data elements, data categories, and data fields; hardware- and software-related difficulties in structuring records; a description of approaches to developing an exchange format for terminological data (MATER, MicroMATER, NTRF, SGML); considerations concerning an SGML-like exchange format; perspectives.
  18. Guenther, R.S.: Automating the Library of Congress Classification Scheme : implementation of the USMARC format for classification data (1996) 0.02
    Abstract
    Discusses potential uses for classification data in machine-readable form and the reasons for developing the USMARC Format for Classification Data, a standard that allows classification data to interact with other USMARC bibliographic and authority data. The development, structure, content, and use of the standard are reviewed, with implementation decisions for the Library of Congress Classification scheme noted. The author examines the implementation of USMARC classification at LC, the conversion of the schedules, and the functionality of the software being used. Problems in the effort are explored, and desired enhancements to the online classification system are considered.
    Object
    USMARC for classification data
  19. Woods, E.W.; IFLA Section on Classification and Indexing and Section on Information Technology, Joint Working Group on a Classification Format: Requirements for a format of classification data : Final report, July 1996 (1996) 0.02
    Object
    USMARC for classification data
  20. Zapounidou, S.; Sfakakis, M.; Papatheodorou, C.: Library data integration : towards BIBFRAME mapping to EDM (2014) 0.02
    Abstract
    Integration of library data into the Linked Data environment is a key issue for libraries and is approached on the basis of interoperability between library data conceptual models. Achieving interoperability between different representations of the same or related entities in the library and cultural heritage domains will enhance the reusability of rich bibliographic data and support the development of new data-driven information services. This paper aims to contribute to the desired interoperability by attempting to map core semantic paths between the BIBFRAME and EDM conceptual models. BIBFRAME is being developed by the Library of Congress to support the transformation of legacy library data in MARC format into linked data; EDM is the model developed for, and used in, the Europeana cultural heritage aggregation portal. (One candidate mapping path is sketched in code after this entry.)
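
    One plausible mapping path rendered in code with the third-party rdflib library: a BIBFRAME-described resource mapped onto an EDM ProvidedCHO inside an ORE Aggregation. This illustrates the kind of semantic path the paper explores; it is not the paper's final mapping.

# A single illustrative BIBFRAME -> EDM mapping path; requires rdflib.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, DC

BF  = Namespace("http://id.loc.gov/ontologies/bibframe/")
EDM = Namespace("http://www.europeana.eu/schemas/edm/")
ORE = Namespace("http://www.openarchives.org/ore/terms/")

g = Graph()
cho = URIRef("http://example.org/cho/1")          # example URIs
agg = URIRef("http://example.org/aggregation/1")

g.add((cho, RDF.type, EDM.ProvidedCHO))           # bf:Work/bf:Instance collapsed onto one CHO
g.add((cho, DC.title, Literal("Pride and prejudice")))
g.add((agg, RDF.type, ORE.Aggregation))
g.add((agg, EDM.aggregatedCHO, cho))              # ties the aggregation to the object

print(g.serialize(format="turtle"))
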

Languages

  • e 133
  • d 15
  • f 6
  • nl 2
  • sp 2
  • pl 1

Types

  • a 132
  • el 14
  • m 7
  • s 6
  • l 3
  • n 3
  • ? 2
  • b 2
  • r 2
  • x 1
