Search (79 results, page 4 of 4)

  • language_ss:"e"
  • theme_ss:"Datenformate"
  1. McCallum, S.H.: Harmonization of USMARC, CANMARC, and UKMARC (2000) 0.01
    0.0070806327 = product of:
      0.035403162 = sum of:
        0.035403162 = weight(_text_:22 in 185) [ClassicSimilarity], result of:
          0.035403162 = score(doc=185,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.19345059 = fieldWeight in 185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=185)
      0.2 = coord(1/5)
    
    Date
    10.09.2000 17:38:22
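    The score breakdown above, like those attached to the other results, is Lucene ClassicSimilarity "explain" output. Its components combine as the TF-IDF formula below (a reconstruction from the displayed figures):

      \[
        \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d)\,\sum_{t\in q}\;\underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\text{queryWeight}}\;\times\;\underbrace{\sqrt{\mathrm{freq}(t,d)}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\text{fieldWeight}},
        \qquad
        \mathrm{idf}(t) \;=\; 1+\ln\frac{\mathrm{maxDocs}}{\mathrm{docFreq}(t)+1}.
      \]

    Checking against this entry: idf = 1 + ln(44218/3623) ≈ 3.5018, tf = √2 ≈ 1.4142, so 0.2 × (3.5018 × 0.05226) × (1.4142 × 3.5018 × 0.0390625) ≈ 0.00708, matching the reported 0.0070806327; coord(1/5) indicates that one of five query terms matched.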
  2. Simmons, P.: Preserving compatibility with standard data formats (1994) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 7129) [ClassicSimilarity], result of:
          0.03381079 = score(doc=7129,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 7129, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7129)
      0.2 = coord(1/5)
    
    Abstract
    Librarians in countries without well-established national bibliographic systems increasingly find themselves faced with the problem of establishing local formats for machine-readable cataloguing and for referral data. Often they lack the background and the resources - especially trained staff - either to adopt an existing MARC format or to develop their own. Such international formats as UNIMARC and CCF, despite widespread international use, present problems of their own; MARC formats are not practical for agencies that do not follow standard cataloguing rules, and CCF offers little guidance to agencies wishing to adopt it for local use. A number of techniques useful in adapting and implementing international and national standard formats are presented, with some guidelines for preserving compatibility with standards
  3. Guenther, R.S.: ¬The Library of Congress Classification in the USMARC format (1994) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 8864) [ClassicSimilarity], result of:
          0.03381079 = score(doc=8864,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 8864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=8864)
      0.2 = coord(1/5)
    
    Abstract
    The paper reviews the development of the USMARC Format for Classification Data, a standard for communication of classification data in machine-readable form. It considers the uses of online classification schedules for both technical services and reference functions, and gives an overview of the format specification, the data elements used, and the structure of the records. The paper describes an experiment conducted at the Library of Congress to test the format as well as the development of the classification database encompassing the LCC schedules. Features of the classification system are given. The LoC will complete its conversion of the LCC in mid-1995
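    A rough sketch of the kind of record such a format carries. Field 153 is the classification-number field in the MARC 21 Classification format; the subfield layout, caption text and classification number below are chosen only for illustration.

      # Sketch of a classification record as a plain tag/subfield structure.
      # Field 153 carries the classification number and caption in the MARC 21
      # Classification format; subfield usage and values here are illustrative.
      classification_record = {
          "153": [("a", "QA76.9.D3"),            # classification number (LCC example)
                  ("h", "Science"),              # caption hierarchy (illustrative)
                  ("h", "Mathematics. Computer science"),
                  ("j", "Database management")], # caption for this number
      }

      for tag, subfields in sorted(classification_record.items()):
          print(tag, " ".join(f"${code} {text}" for code, text in subfields))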
  4. Riemer, J.J.: Adding 856 Fields to authority records : rationale and implications (1998) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 3715) [ClassicSimilarity], result of:
          0.03381079 = score(doc=3715,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 3715, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3715)
      0.2 = coord(1/5)
    
    Abstract
    Discusses ways of applying MARC Field 856 (Electronic Location and Access) to authority records in online union catalogues. In principle, each catalogue site location can be treated as the electronic record of the work concerned and the MARC Field 856 can then refer to this location as if it were referring to the location of a primary record. Although URLs may become outdated, the fact that they are located in specifically defined MARC Fields makes the data contained amenable to the same link maintenance software as used for the electronic records themselves. Includes practical examples of typical union catalogue records incorporating MARC Field 856
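    A minimal sketch of the idea in Python. The heading and URL are invented; only the 856 conventions used here (first indicator 4 for HTTP, subfield $u for the URI, $z for a public note) come from the field definition.

      # Sketch: append an 856 (Electronic Location and Access) field to an
      # authority record so it points at an electronic version of the work.
      # The heading and URL are invented.
      authority_fields = [
          ("100", "1 ", [("a", "Example, Author N.")]),   # invented heading
      ]

      def add_856(fields, url, note=None):
          subs = [("u", url)]                             # $u = URI
          if note:
              subs.append(("z", note))                    # $z = public note
          fields.append(("856", "4 ", subs))              # ind1=4 -> HTTP

      add_856(authority_fields, "http://example.org/etexts/work.html",
              "Electronic text of the work")

      for tag, ind, subs in authority_fields:
          print(tag, ind, " ".join(f"${c} {v}" for c, v in subs))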
  5. McCallum, S.H.: MARCXML sampler (2005) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 4361) [ClassicSimilarity], result of:
          0.03381079 = score(doc=4361,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 4361, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4361)
      0.2 = coord(1/5)
    
    Abstract
    At the IFLA conference in Glasgow, three years ago, the Information Technology Section organized a workshop on metadata. At that workshop MARCXML was presented, along with plans and expectations for its use. This paper is an update to that report. It reviews the development of an XML schema for MARC 21 and the MARCXML tool kit of transformations. The close relationship of MARCXML to the recent ISO standards work associated with MARC in XML is described. Sketches of interesting applications follow with uses that range from MARCXML as a switching format to a maintenance tool to a record communication format for new XML-based protocols.
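    A minimal MARCXML record built with the Python standard library, to show the shape of the schema under discussion; the record/leader/controlfield/datafield/subfield structure follows the MARCXML schema, while the field values are invented.

      # Build a minimal MARCXML record (namespace http://www.loc.gov/MARC21/slim).
      import xml.etree.ElementTree as ET

      NS = "http://www.loc.gov/MARC21/slim"
      ET.register_namespace("", NS)

      record = ET.Element(f"{{{NS}}}record")
      ET.SubElement(record, f"{{{NS}}}leader").text = "00000nam a2200000 a 4500"
      ET.SubElement(record, f"{{{NS}}}controlfield", {"tag": "001"}).text = "demo0001"

      title = ET.SubElement(record, f"{{{NS}}}datafield",
                            {"tag": "245", "ind1": "1", "ind2": "0"})
      ET.SubElement(title, f"{{{NS}}}subfield", {"code": "a"}).text = "MARCXML sampler :"
      ET.SubElement(title, f"{{{NS}}}subfield", {"code": "b"}).text = "an illustrative record."

      print(ET.tostring(record, encoding="unicode"))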
  6. Sandberg-Fox, A.M.: ¬The microcomputer revolution (2001) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 5409) [ClassicSimilarity], result of:
          0.03381079 = score(doc=5409,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 5409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5409)
      0.2 = coord(1/5)
    
    Abstract
    With the introduction of the microcomputer in the 1980s, a revolution of sorts was initiated. In libraries this was evidenced by the acquisition of personal computers and the software to run on them. All that catalogers needed were cataloging rules and a MARC format to ensure their bibliographic control. However, little did catalogers realize they were dealing with an industry that introduced rapid technological changes, which effected continual revision of existing rules and the formulation of special guidelines to deal with the industry's innovative products. This article focuses on the attempts of libraries and organized cataloging groups to develop the Chapter 9 descriptive cataloging rules in AACR2; it highlights selected events and includes cataloging examples that illustrate the evolution of the chapter.
  7. Conklin, C.E.: Australia: The ABN, ANB, AUSMARC and the National Library (1988) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 420) [ClassicSimilarity], result of:
          0.03381079 = score(doc=420,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 420, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=420)
      0.2 = coord(1/5)
    
    Abstract
    Australian libraries have kept up with the latest technology and innovation in cataloging. The Australian Bibliographic Network is a shared cataloging-based national bibliographic utility. This essay delves into the relationships of the ABN, the Australian National Bibliography, AUSMARC, and the role of the Australian National Library in the creation of these elements of computerized cataloging in that country. It also discusses some of the policies and procedures utilized by Australian libraries in their automated cataloging environment, as well as looking at some of the environmental attitudes arising from the change to automated cataloging. Finally, the essay concludes with an outline of some of the similarities and differences between the AUSMARC and USMARC formats.
  8. Samples, J.; Bigelow, I.: MARC to BIBFRAME : converting the PCC to Linked Data (2020) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 119) [ClassicSimilarity], result of:
          0.03381079 = score(doc=119,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 119, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=119)
      0.2 = coord(1/5)
    
    Abstract
    The Program for Cooperative Cataloging (PCC) has formal relationships with the Library of Congress (LC), Share-VDE, and Linked Data for Production Phase 2 (LD4P2) for work on Bibliographic Framework (BIBFRAME), and PCC institutions have been very active in the exploration of MARC to BIBFRAME conversion processes. This article will review the involvement of PCC in the development of BIBFRAME and examine the work of LC, Share-VDE, and LD4P2 on MARC to BIBFRAME conversion. It will conclude with a discussion of areas for further exploration by the PCC leading up to the creation of PCC conversion specifications and PCC BIBFRAME data.
  9. Giordano, R.: ¬The documentation of electronic texts : using Text Encoding Initiative headers: an introduction (1994) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 866) [ClassicSimilarity], result of:
          0.028980678 = score(doc=866,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=866)
      0.2 = coord(1/5)
    
    Abstract
    Presents a general introduction to the form and functions of the Text Encoding Initiative (TEI) headers and explains their relationship to the MARC record. The TEI header's main strength is that it documents electronic texts in a standard exchange format that should be understandable to both librarian cataloguers and text encoders outside of librarianship. TEI gives encoders the ability to document the electronic text itself, its source, its encoding principles, and revisions, as well as non-bibliographic characteristics of the text that can support both scholarly analysis and retrieval. Its bibliographic descriptions can be loaded into standard remote bibliographic databases, which should make electronic texts as easy to find for researchers as texts in other media. Presents a brief overview of the TEI header, the file description and ways in which the TEI headers have counterparts in MARC, the Encoding Description, the Profile Description, the Revision Description, the size and complexity of the TEI header, and the use of the TEI header to support document retrieval and analysis, with notes on some of the prospects and problems
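    A skeletal TEI header generated with the Python standard library, showing the file description (the MARC-like slots) plus encoding and revision description stubs. The element names follow the TEI Guidelines; all text content is invented.

      import xml.etree.ElementTree as ET

      header = ET.Element("teiHeader")
      file_desc = ET.SubElement(header, "fileDesc")

      ET.SubElement(ET.SubElement(file_desc, "titleStmt"), "title").text = \
          "An example electronic text"
      ET.SubElement(ET.SubElement(file_desc, "publicationStmt"), "p").text = \
          "Distributed for demonstration purposes only."
      ET.SubElement(ET.SubElement(file_desc, "sourceDesc"), "p").text = \
          "Transcribed from a printed edition."

      ET.SubElement(ET.SubElement(header, "encodingDesc"), "p").text = \
          "Encoding principles would be documented here."
      ET.SubElement(ET.SubElement(header, "revisionDesc"), "change").text = \
          "Header drafted."

      print(ET.tostring(header, encoding="unicode"))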
  10. Horah, J.L.: From cards to the Web : ¬The evolution of a library database (1998) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 4842) [ClassicSimilarity], result of:
          0.028980678 = score(doc=4842,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 4842, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=4842)
      0.2 = coord(1/5)
    
    Abstract
    The Jack Brause Library at New York University (NYU) is a special library supporting the curriculum of NYU's Real Estate Institute. The Jack Brause Library (JBL) Real Estate Periodical Index was established in 1990 and draws on the library's collection of over 140 real estate periodicals. Describes the conversion of the JBL Index from a 3x5 card index to an online resource. The database was originally created using Rbase for DOS, but this quickly became obsolete and in 1993 was replaced with InMagic. In 1997 the JBL Index was made available on NYU's telnet catalogue, BobCat, and the Internet database catalogue, BobCatPlus. The transition of InMagic data to USMARC formatted records involved a 3-step process: data normalization; adding value; and data recording. The Index has been operational via telnet since May 1997, and the Web version became functional in Oct 1997
  11. Taylor, M.; Dickmeiss, A.: Delivering MARC/XML records from the Library of Congress catalogue using the open protocols SRW/U and Z39.50 (2005) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 4350) [ClassicSimilarity], result of:
          0.028980678 = score(doc=4350,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 4350, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=4350)
      0.2 = coord(1/5)
    
    Abstract
    The MARC standard for representing catalogue records and the Z39.50 standard for locating and retrieving them have facilitated interoperability in the library domain for more than a decade. With the increasing ubiquity of XML, these standards are being superseded by MARCXML and MarcXchange for record representation and SRW/U for searching and retrieval. Service providers moving from the older standards to the newer generally need to support both old and new forms during the transition period. YAZ Proxy uses a novel approach to provide SRW/MARCXML access to the Library of Congress catalogue, by translating requests into Z39.50 and querying the older system directly. As a fringe benefit, it also greatly accelerates Z39.50 access.
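    A sketch of the kind of SRU searchRetrieve request such a gateway answers, composed in Python. The host name is a placeholder, not the real Library of Congress endpoint; the parameter names (version, operation, query, recordSchema, maximumRecords) are standard SRU 1.1.

      from urllib.parse import urlencode

      base = "http://sru.example.org/catalogue"        # placeholder endpoint
      params = {
          "version": "1.1",
          "operation": "searchRetrieve",
          "query": 'dc.title = "data formats"',        # CQL query
          "recordSchema": "marcxml",
          "maximumRecords": "5",
      }
      print(base + "?" + urlencode(params))
      # A gateway such as the proxy described above translates this request
      # into a Z39.50 search/present against the target and returns MARCXML.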
  12. McBride, J.L.: Faceted subject access for music through USMARC : a case for linked fields (2000) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 5403) [ClassicSimilarity], result of:
          0.028980678 = score(doc=5403,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 5403, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=5403)
      0.2 = coord(1/5)
    
    Abstract
    The USMARC Format for Bibliographic Description contains three fields (045, 047, and 048) designed to facilitate subject access to music materials. The fields cover three of the main aspects of subject description for music: date of composition, form or genre, and number of instruments or voices, respectively. The codes are rarely used for subject access, because of the difficulty of coding them and because false drops would result in retrieval of bibliographic records where more than one musical work is present, a situation that occurs frequently with sound recordings. It is proposed that the values of the fields be converted to natural language and that subfield 8 be used to link all access fields in a bibliographic record for greater precision in retrieval. This proposal has implications beyond music cataloging, especially for metadata and any bibliographic records describing materials containing many works and subjects.
  13. SKOS Core Guide (2005) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 4689) [ClassicSimilarity], result of:
          0.028980678 = score(doc=4689,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 4689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=4689)
      0.2 = coord(1/5)
    
    Abstract
    SKOS Core provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, 'folksonomies', other types of controlled vocabulary, and also concept schemes embedded in glossaries and terminologies. The SKOS Core Vocabulary is an application of the Resource Description Framework (RDF) that can be used to express a concept scheme as an RDF graph. Using RDF allows data to be linked to and/or merged with other data, enabling data sources to be distributed across the web, but still be meaningfully composed and integrated. This document is a guide to using the SKOS Core Vocabulary for readers who already have a basic understanding of RDF concepts. This edition of the SKOS Core Guide [SKOS Core Guide] is a W3C Public Working Draft. It is the authoritative guide to recommended usage of the SKOS Core Vocabulary at the time of publication.
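    A minimal SKOS Core example built with rdflib (a third-party Python RDF library); the scheme, concept URIs and labels are invented for illustration.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/scheme/")
      g = Graph()
      g.bind("skos", SKOS)
      g.bind("ex", EX)

      scheme = EX["formats"]
      g.add((scheme, RDF.type, SKOS.ConceptScheme))

      broader = EX["dataFormats"]
      narrower = EX["marcFormats"]
      for concept, label in [(broader, "Data formats"), (narrower, "MARC formats")]:
          g.add((concept, RDF.type, SKOS.Concept))
          g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
          g.add((concept, SKOS.inScheme, scheme))
      g.add((narrower, SKOS.broader, broader))

      print(g.serialize(format="turtle"))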
  14. Boehr, D.L.; Bushman, B.: Preparing for the future : National Library of Medicine's® project to add MeSH® RDF URIs to its bibliographic and authority records (2018) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 5173) [ClassicSimilarity], result of:
          0.028980678 = score(doc=5173,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 5173, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=5173)
      0.2 = coord(1/5)
    
    Abstract
    Although it is not yet known for certain what will replace MARC, bibliographic data will eventually need to be transformed to move into a linked data environment. This article discusses why the National Library of Medicine chose to add Uniform Resource Identifiers for Medical Subject Headings as its starting point and details the process by which they were added to the MeSH MARC authority records, the legacy bibliographic records, and the records for newly cataloged items. The article outlines the various enhancement methods available, the decisions made, and the rationale for the selected method.
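    A hedged sketch of what such an enriched subject field could look like. The heading, descriptor identifier and the choice of subfield ($0 here) are assumptions made for the example, not NLM's published specification.

      def mesh_uri(descriptor_id: str) -> str:
          # Illustrative MeSH RDF URI pattern; the identifier is an assumption.
          return f"https://id.nlm.nih.gov/mesh/{descriptor_id}"

      subject_field = ("650", " 2", [            # second indicator 2 = MeSH
          ("a", "Neoplasms"),
          ("0", mesh_uri("D009369")),            # URI carried alongside the heading
      ])

      tag, ind, subs = subject_field
      print(tag, ind, " ".join(f"${c} {v}" for c, v in subs))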
  15. Salgáné, M.M.: Our electronic era and bibliographic information : computer-related bibliographic data formats, metadata formats and BDML (2005) 0.01
    0.005464649 = product of:
      0.027323244 = sum of:
        0.027323244 = weight(_text_:it in 3005) [ClassicSimilarity], result of:
          0.027323244 = score(doc=3005,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.18076637 = fieldWeight in 3005, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=3005)
      0.2 = coord(1/5)
    
    Abstract
    In using new communication technologies, libraries continuously face new questions, possibilities and expectations. This study discusses library-related aspects of our electronic era and how computer-related data formats affect bibliographic data processing, and summarizes the most important results. The first bibliographic formats for the exchange of bibliographic and related information in machine-readable form between different types of computer systems were created more than 30 years ago. The evolution of information technologies has led to the improvement of computer systems, and alongside the development of computers and media types the Internet has had a great influence on data structures as well. Since the introduction of the MARC bibliographic format, the technology of data exchange between computers and between different computer systems has reached a very sophisticated stage and has contributed to the creation of new standards in this field. Today libraries work with this new infrastructure, which brings many challenges; one of the most significant is moving from a relatively homogeneous bibliographic environment to a diverse one. Despite these challenges, such changes are achievable and necessary in order to exploit the possibilities of new metadata and technologies such as the Internet and XML (Extensible Markup Language). XML is an open standard and a universal language for data on the Web. It is a roughly six-year-old standard designed for the description and computer-based management of (semi-)structured data and structured texts. XML gives developers the power to deliver structured data from a wide variety of applications, and it is also an ideal format for server-to-server transfer of structured data. XML is not limited to Internet use and is an especially valuable tool in the library field. In fact, XML's main strength, organizing information, makes it well suited for exchanging data between different systems. Tools that work with XML can be used to process XML records without incurring the additional costs associated with developing one's own software. In addition, XML is a suitable format for library web services. The Department of Computer-related Graphic Design and Library and Information Sciences of Debrecen University launched the BDML (Bibliographic Description Markup Language) development project in order to standardize bibliographic description with the help of XML.
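    XML's appeal for libraries in miniature: a record produced by one system can be read by another with off-the-shelf tools. The element names in this Python sketch are invented and are not the BDML schema defined in the article.

      import xml.etree.ElementTree as ET

      incoming = """<bibRecord>
        <title>Our electronic era and bibliographic information</title>
        <creator>Salgáné, M.M.</creator>
        <year>2005</year>
      </bibRecord>"""

      record = ET.fromstring(incoming)
      print(f"{record.findtext('creator')}: {record.findtext('title')} "
            f"({record.findtext('year')})")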
  16. ISO 25964 Thesauri and interoperability with other vocabularies (2008) 0.01
    0.005019601 = product of:
      0.025098003 = sum of:
        0.025098003 = weight(_text_:it in 1169) [ClassicSimilarity], result of:
          0.025098003 = score(doc=1169,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.16604452 = fieldWeight in 1169, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1169)
      0.2 = coord(1/5)
    
    Abstract
    Part 2: The ability to identify and locate relevant information among vast collections and other resources is a major and pressing challenge today. Several different types of vocabulary are in use for this purpose. Some of the most widely used vocabularies were designed a hundred years ago and have been evolving steadily. A different generation of vocabularies is now emerging, designed to exploit the electronic media more effectively. A good understanding of the previous generation is still essential for effective access to collections indexed with them. An important objective of ISO 25964 as a whole is to support data exchange and other forms of interoperability in circumstances in which more than one structured vocabulary is applied within one retrieval system or network. Sometimes one vocabulary has to be mapped to another, and it is important to understand both the potential and the limitations of such mappings. In other systems, a thesaurus is mapped to a classification scheme, or an ontology to a thesaurus. Comprehensive interoperability needs to cover the whole range of vocabulary types, whether young or old. Concepts in different vocabularies are related only in that they have the same or similar meaning. However, the meaning can be found in a number of different aspects within each particular type of structured vocabulary:
    - within terms or captions selected in different languages;
    - in the notation assigned indicating a place within a larger hierarchy;
    - in the definition, scope notes, history notes and other notes that explain the significance of that concept; and
    - in explicit relationships to other concepts or entities within the same vocabulary.
    In order to create mappings from one structured vocabulary to another, it is first necessary to understand, within the context of each different type of structured vocabulary, the significance and relative importance of each of the different elements in defining the meaning of that particular concept. ISO 25964-1 describes the key characteristics of thesauri along with additional advice on best practice. ISO 25964-2 focuses on other types of vocabulary and does not attempt to cover all aspects of good practice. It concentrates on those aspects which need to be understood if one of the vocabularies is to work effectively alongside one or more of the others. Recognizing that a new standard cannot be applied to some existing vocabularies, this part of ISO 25964 provides informative description alongside the recommendations, the aim of which is to enable users and system developers to interpret and implement the existing vocabularies effectively. The remainder of ISO 25964-2 deals with the principles and practicalities of establishing mappings between vocabularies.
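    A sketch of recording mappings between two vocabularies, using the SKOS mapping properties as one possible carrier (rdflib, third-party). The two vocabularies and their concepts are invented; ISO 25964-2 itself is serialization-neutral, so this is only an illustration.

      from rdflib import Graph, Namespace
      from rdflib.namespace import SKOS

      THES = Namespace("http://example.org/thesaurus/")        # invented thesaurus
      CLASS = Namespace("http://example.org/classification/")  # invented scheme

      g = Graph()
      g.bind("skos", SKOS)

      g.add((THES["dataFormats"], SKOS.exactMatch, CLASS["025.3"]))  # equivalence
      g.add((THES["marcFormats"], SKOS.broadMatch, CLASS["025.3"]))  # hierarchical

      print(g.serialize(format="turtle"))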
  17. Miller, D.R.: XML: Libraries' strategic opportunity (2001) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 1467) [ClassicSimilarity], result of:
          0.024150565 = score(doc=1467,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 1467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1467)
      0.2 = coord(1/5)
    
    Abstract
    XML (eXtensible Markup Language) is fast gaining favor as the universal format for data and document exchange -- in effect becoming the lingua franca of the Information Age. Currently, "library information" is at a particular disadvantage on the rapidly evolving World Wide Web. Why? Despite libraries' explorations of web catalogs, scanning projects, digital data repositories, and creation of web pages galore, there remains a digital divide. The core of libraries' data troves is stored in proprietary formats of integrated library systems (ILS) and in the complex and arcane MARC formats -- both restricted chiefly to the province of technical services and systems librarians. Even they are hard-pressed to extract and integrate this wealth of data with resources from outside this rarefied environment. Segregation of library information underlies many difficulties: producing standard bibliographic citations from MARC data, automatically creating new materials lists (including new web resources) on a particular topic, exchanging data with our vendors, and even migrating from one ILS to another. Why do we continue to hobble our potential by embracing these self-imposed limitations? Most ILSs began in libraries, which soon recognized the pitfalls of do-it-yourself solutions. Thus, we wisely anticipated the necessity for standards. However, with the advent of the web, we soon found "our" collections and a flood of new resources appearing in digital format on opposite sides of the divide. If we do not act quickly to integrate library resources with mainstream web resources, we are in grave danger of becoming marginalized
  18. Xu, A.; Hess, K.; Akerman, L.: From MARC to BIBFRAME 2.0 : Crosswalks (2018) 0.00
    0.004830113 = product of:
      0.024150565 = sum of:
        0.024150565 = weight(_text_:it in 5172) [ClassicSimilarity], result of:
          0.024150565 = score(doc=5172,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 5172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5172)
      0.2 = coord(1/5)
    
    Abstract
    One of the big challenges facing academic libraries today is to increase the relevance of the libraries to their user communities. If the libraries can increase the visibility of their resources on the open web, it will increase the chances of the libraries reaching their user communities via the users' first search experience. BIBFRAME and library Linked Data will enable libraries to publish their resources in a way that the Web understands, consume Linked Data to enrich their resources relevant to the libraries' user communities, and visualize networks across collections. However, one of the important steps for transitioning to BIBFRAME and library Linked Data involves crosswalks: mapping MARC fields and subfields across data models and performing the data reformatting necessary to comply with the specifications of the new model, which is currently BIBFRAME 2.0. This article looks into how the Library of Congress has mapped library bibliographic data from the MARC format to the BIBFRAME 2.0 model and vocabulary, published and updated since April 2016 and available from http://www.loc.gov/bibframe/docs/index.html, based on the recently released conversion specifications and converter developed by the Library of Congress with input from many community members. The BIBFRAME 2.0 standard and conversion tools will enable libraries to transform bibliographic data from MARC into BIBFRAME 2.0, which introduces a Linked Data model as the improved method of bibliographic control for the future, and make bibliographic information more useful within and beyond library communities.
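    A drastically simplified crosswalk table in Python, to show the shape of such a specification. The mappings below are illustrative and are not the Library of Congress conversion specifications, which also handle indicators, subfields and conditional logic omitted here.

      BF = "http://id.loc.gov/ontologies/bibframe/"

      CROSSWALK = {
          "245": (BF + "title", BF + "Title"),                    # title statement
          "020": (BF + "identifiedBy", BF + "Isbn"),              # ISBN
          "264": (BF + "provisionActivity", BF + "Publication"),  # publication info
      }

      def map_field(tag: str) -> dict:
          prop, target = CROSSWALK.get(tag, (None, None))
          return {"marc": tag, "bf_property": prop, "bf_class": target}

      for tag in ("245", "020", "264"):
          print(map_field(tag))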
  19. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.00
    0.0034154055 = product of:
      0.017077027 = sum of:
        0.017077027 = weight(_text_:it in 1166) [ClassicSimilarity], result of:
          0.017077027 = score(doc=1166,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.11297898 = fieldWeight in 1166, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
      0.2 = coord(1/5)
    
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files) [1], which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years. [2] The World Wide Web is generally understood to be poorly structured, both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general or any particular structured database would greatly benefit from increased authority control, it should be noted that our following considerations only refer to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we exclusively refer to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in similar ways to the ones presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records, like all other "authorities", are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually either kept in independent databases or in separate tables in the database containing the descriptive records. This practice points at a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results. On one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time-consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service. [3] In Germany, the Personal Name Authority File (PND, Personennamendatei) [4] maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable. [5] Two important current initiatives should be mentioned here: The Name Authority Cooperative (NACO) and Virtual International Authority File (VIAF).
