Search (247 results, page 13 of 13)

  • theme_ss:"Datenformate"
  1. Guenther, R.; McCallum, S.: New metadata standards for digital resources : MODS and METS (2003) 0.00
    0.002245818 = product of:
      0.01122909 = sum of:
        0.01122909 = weight(_text_:information in 1250) [ClassicSimilarity], result of:
          0.01122909 = score(doc=1250,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.13576832 = fieldWeight in 1250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1250)
      0.2 = coord(1/5)
    
    Source
    Bulletin of the American Society for Information Science. 29(2003) no.2, pp.11-15
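    The scoring breakdown above is Lucene "explain" output for ClassicSimilarity, i.e. classic TF-IDF: a term's contribution is coord × queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm. A minimal Python sketch of that arithmetic, reusing the norms exactly as reported (queryNorm and fieldNorm are engine-supplied values, not derived here):

      import math

      freq = 2.0                # termFreq of "information" in doc 1250
      doc_freq = 20772          # docFreq of "information"
      max_docs = 44218          # maxDocs in the index
      query_norm = 0.047114085  # queryNorm as reported in the explain output
      field_norm = 0.0546875    # fieldNorm(doc=1250) as reported

      tf = math.sqrt(freq)                             # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.7554779

      query_weight = idf * query_norm                  # 0.08270773
      field_weight = tf * idf * field_norm             # 0.13576832

      score = (1 / 5) * query_weight * field_weight    # coord(1/5): 1 of 5 clauses matched
      print(round(score, 9))                           # ~0.002245818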
  2. Galvão, R.M.: UNIMARC format relevance : maintenance or replacement? (2018) 0.00
    0.002245818 = product of:
      0.01122909 = sum of:
        0.01122909 = weight(_text_:information in 5163) [ClassicSimilarity], result of:
          0.01122909 = score(doc=5163,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.13576832 = fieldWeight in 5163, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5163)
      0.2 = coord(1/5)
    
    Abstract
    This article presents an empirical study focused on a qualitative analysis of the UNIMARC format. The structural quality of the data provided by the format is evaluated to determine its current suitability for the requirements and trends in data architecture for the information network and the Semantic Web. Guided by a set of quality characteristics, the study identifies weaknesses in the data schema that cannot be bridged by simply converting the data to MARC XML or RDF/XML, and concludes that the UNIMARC format does not comply with current metadata schema desiderata and must be replaced.
  3. Simmons, P.: Converting UNIMARC records to CCF (1989) 0.00
    0.0019249868 = product of:
      0.009624934 = sum of:
        0.009624934 = weight(_text_:information in 2515) [ClassicSimilarity], result of:
          0.009624934 = score(doc=2515,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.116372846 = fieldWeight in 2515, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2515)
      0.2 = coord(1/5)
    
    Abstract
    One of the primary goals of Unesco's Common Communication Format (CCF) has been to maintain compatibility between two major information communities: libraries, and abstracting and indexing organisations. While abstracting and indexing organisations do not follow any single standard for description or for the structure and encoding of machine records, libraries have clearly defined standards and practices. Among CCF-using organisations are some that wish to incorporate records produced by national bibliographic agencies, especially national libraries, into their own databases. They need the ability to convert UNIMARC records to CCF. To accomplish this they require a source of records, a computer to process them, a computer program designed for record conversion, and a table or set of instructions laying out the specific way in which each UNIMARC data element is to be processed in the course of conversion to CCF. Examines the factors to be considered in planning a table that would be sufficiently detailed to accomplish record conversion, and outlines problems that might be encountered.
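    The conversion table the abstract calls for can be pictured as a lookup from each UNIMARC field/subfield to its CCF target. A minimal table-driven sketch: the UNIMARC tags are real (200$a title proper, 700$a personal name with primary responsibility, 210$d date of publication), but the CCF target tags are placeholders, since the actual correspondences are precisely what such a table would have to specify:

      # Hypothetical mapping table: (UNIMARC tag, subfield) -> (CCF tag, subfield).
      # The CCF targets here are placeholders, not the actual CCF tag list.
      CONVERSION_TABLE = {
          ("200", "a"): ("230", "a"),  # title proper (illustrative target tag)
          ("700", "a"): ("300", "a"),  # personal name, primary responsibility
          ("210", "d"): ("440", "a"),  # date of publication
      }

      def convert_record(unimarc_fields):
          """unimarc_fields: list of (tag, subfield, value) triples."""
          ccf_fields = []
          for tag, sub, value in unimarc_fields:
              target = CONVERSION_TABLE.get((tag, sub))
              if target is None:
                  continue  # element not covered; a real converter would log
                            # this so the table can be extended
              ccf_fields.append((*target, value))
          return ccf_fields

      record = [("200", "a", "Converting UNIMARC records to CCF"),
                ("700", "a", "Simmons, P.")]
      print(convert_record(record))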
  4. Fattahi, R.: A uniform approach to the indexing of cataloguing data in online library systems (1997) 0.00
    0.0019249868 = product of:
      0.009624934 = sum of:
        0.009624934 = weight(_text_:information in 131) [ClassicSimilarity], result of:
          0.009624934 = score(doc=131,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.116372846 = fieldWeight in 131, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=131)
      0.2 = coord(1/5)
    
    Abstract
    Argues that in library cataloguing, and for the optimal functionality of bibliographic records, the indexing of fields and subfields should follow a uniform approach. This would maintain effectiveness in searching, retrieval and display of bibliographic information both within and between systems. However, a review of postings to the AUTOCAT and USMARC discussion lists indicates that the indexing and tagging of cataloguing data do not, at present, follow a consistent approach in online library systems. If the rationale of cataloguing principles is to bring uniformity to bibliographic description and effectiveness to access, they should also address the question of uniform approaches to the indexing of cataloguing data. In this context, and in terms of the identification and handling of data elements, cataloguing standards (codes, MARC formats and the Z39.50 standard) should be brought closer together, in that they should provide guidelines for the designation of data elements for machine-readable records.
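    The uniform approach argued for here amounts to routing equivalent data elements to the same named index regardless of the local tagging scheme. A minimal sketch under that reading; the index names and the sample MARC 21/UNIMARC tag pairs are illustrative assumptions, not drawn from the article:

      # Whatever the local tagging scheme, the same data element always lands
      # in the same named index. Tag pairs below are illustrative.
      UNIFORM_INDEX = {
          ("245", "a"): "title",   # MARC 21 title statement
          ("200", "a"): "title",   # UNIMARC title proper -> same index
          ("100", "a"): "author",  # MARC 21 main entry, personal name
          ("700", "a"): "author",  # UNIMARC primary responsibility
      }

      def index_record(fields, indexes):
          for tag, sub, value in fields:
              key = UNIFORM_INDEX.get((tag, sub))
              if key:
                  indexes.setdefault(key, []).append(value)

      indexes = {}
      index_record([("245", "a", "A uniform approach to indexing"),
                    ("100", "a", "Fattahi, R.")], indexes)
      print(indexes)  # {'title': [...], 'author': [...]}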
  5. Miller, E.; Ogbuji, U.: Linked data design for the visible library (2015) 0.00
    0.0019249868 = product of:
      0.009624934 = sum of:
        0.009624934 = weight(_text_:information in 2773) [ClassicSimilarity], result of:
          0.009624934 = score(doc=2773,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.116372846 = fieldWeight in 2773, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2773)
      0.2 = coord(1/5)
    
    Source
    Bulletin of the Association for Information Science and Technology. 41(2015) no.4, pp.23-29
  6. Xu, A.; Hess, K.; Akerman, L.: From MARC to BIBFRAME 2.0 : Crosswalks (2018) 0.00
    0.0016041556 = product of:
      0.008020778 = sum of:
        0.008020778 = weight(_text_:information in 5172) [ClassicSimilarity], result of:
          0.008020778 = score(doc=5172,freq=2.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.09697737 = fieldWeight in 5172, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5172)
      0.2 = coord(1/5)
    
    Abstract
    One of the big challenges facing academic libraries today is to increase their relevance to their user communities. If libraries can increase the visibility of their resources on the open web, they improve their chances of reaching their user communities through the user's first search experience. BIBFRAME and library Linked Data will enable libraries to publish their resources in a way that the Web understands, to consume Linked Data to enrich their resources with material relevant to their user communities, and to visualize networks across collections. However, one of the important steps in transitioning to BIBFRAME and library Linked Data involves crosswalks: mapping MARC fields and subfields across data models and performing the data reformatting necessary to comply with the specifications of the new model, currently BIBFRAME 2.0. This article looks into how the Library of Congress has mapped library bibliographic data from the MARC format to the BIBFRAME 2.0 model and vocabulary (published and updated since April 2016, available from http://www.loc.gov/bibframe/docs/index.html), based on the recently released conversion specifications and converter developed by the Library of Congress with input from many community members. The BIBFRAME 2.0 standard and conversion tools will enable libraries to transform bibliographic data from MARC into BIBFRAME 2.0, which introduces a Linked Data model as the improved method of bibliographic control for the future, and to make bibliographic information more useful within and beyond library communities.
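    A crosswalk in this sense is a mapping from MARC fields/subfields to BIBFRAME 2.0 properties plus any needed reformatting. The toy sketch below maps two fields directly onto an Instance and emits triples; the real Library of Congress specifications and converter handle far more conditions (bf:mainTitle, for instance, properly hangs off a bf:Title node rather than the Instance itself), so treat this as a simplified illustration only:

      # Toy MARC -> BIBFRAME 2.0 crosswalk emitting (subject, property, value)
      # triples. Property URIs are from the BIBFRAME 2.0 namespace; the flat
      # attachment to the Instance is a deliberate simplification.
      BF = "http://id.loc.gov/ontologies/bibframe/"

      CROSSWALK = {
          ("245", "a"): BF + "mainTitle",         # title proper
          ("250", "a"): BF + "editionStatement",  # edition statement
      }

      def marc_to_bibframe(instance_uri, fields):
          triples = []
          for tag, sub, value in fields:
              prop = CROSSWALK.get((tag, sub))
              if prop:
                  triples.append((instance_uri, prop, value))
          return triples

      for t in marc_to_bibframe("http://example.org/instance/1",
                                [("245", "a", "From MARC to BIBFRAME 2.0"),
                                 ("250", "a", "2nd ed.")]):
          print(t)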
  7. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.00
    0.0013892398 = product of:
      0.0069461986 = sum of:
        0.0069461986 = weight(_text_:information in 1166) [ClassicSimilarity], result of:
          0.0069461986 = score(doc=1166,freq=6.0), product of:
            0.08270773 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.047114085 = queryNorm
            0.083984874 = fieldWeight in 1166, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
      0.2 = coord(1/5)
    
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files), which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years. The World Wide Web is generally understood to be poorly structured, both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general, or any particular structured database, would greatly benefit from increased authority control, it should be noted that the following considerations refer only to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we refer exclusively to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in ways similar to the ones presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records, as are all other "authorities", are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually kept either in independent databases or in separate tables in the database containing the descriptive records. This practice points to a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results. On the one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time-consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service. In Germany, the Personal Name Authority File (PND, Personennamendatei) maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable. Two important current initiatives should be mentioned here: the Name Authority Cooperative (NACO) and the Virtual International Authority File (VIAF).
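    The collocation benefit described above is easy to sketch: descriptive records carry the identifier of one authority record, and variant name forms resolve to the authorized form, so a search under any variant retrieves the same set of records. All names and identifiers below are invented examples:

      # Descriptive records link to an authority record by identifier;
      # variants resolve to the same record, collocating the results.
      authority = {
          "p001": {"authorized": "Tolstoy, Leo, 1828-1910",
                   "variants": ["Tolstoi, Lev", "Tolstoj, Lev Nikolaevic"]},
      }
      descriptive = [
          {"title": "War and peace", "creator_id": "p001"},
          {"title": "Anna Karenina", "creator_id": "p001"},
      ]

      def search_by_name(name):
          for pid, rec in authority.items():
              if name == rec["authorized"] or name in rec["variants"]:
                  return [d for d in descriptive if d["creator_id"] == pid]
          return []

      print([d["title"] for d in search_by_name("Tolstoi, Lev")])
      # ['War and peace', 'Anna Karenina']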
    NACO was established in 1976 and is hosted by the Library of Congress. At the beginning of 2003, nearly 400 institutions were involved in this undertaking, including 43 institutions from outside the United States. Despite the enormous success of NACO and the impressive annual growth of the initiative, there are requirements for participation that form an obstacle for many institutions: they have to follow the Anglo-American Cataloguing Rules (AACR2) and employ the MARC21 data format. Participating institutions also have to belong to either OCLC (Online Computer Library Center) or RLG (Research Libraries Group) in order to be able to contribute records, and they have to provide a specified minimum number of authority records per year. A recent proof-of-concept project of the Library of Congress, OCLC, and the German National Library, the Virtual International Authority File (VIAF), will, in its first phase, test automatic linking of the records of the Library of Congress Name Authority File (LCNAF) and the German Personal Name Authority File by using matching algorithms and software developed by OCLC. The results are expected to form the basis of a "Virtual International Authority File". The project will then test the maintenance of the virtual authority file by employing the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest the metadata for new, updated, and deleted records. When using the "Virtual International Authority File", a cataloguer will be able to check the system to see whether the authority record he wants to establish already exists. The final phase of the project will test possibilities for displaying records in the preferred language and script of the end user. Currently, there are still some clear limitations associated with the ways in which authority records are used by memory institutions. One of the main problems has to do with limited access: generally, only large institutions or those that are part of a library network have unlimited online access to permanently updated authority records. Smaller institutions outside these networks usually have to fall back on less efficient ways of obtaining authority data, or have no access at all. Cross-domain sharing of authority data between libraries, archives, museums, and other memory institutions simply does not happen at present. Public users are, by and large, not even aware that such things as name authority records exist, and are excluded from access to these information resources.
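    The maintenance phase described above rests on standard OAI-PMH harvesting: issue a ListRecords request with a metadataPrefix and a from date, follow the resumptionToken until it is exhausted, and treat headers carrying status="deleted" as deletions. A minimal sketch against a placeholder endpoint (the base URL is hypothetical; verb, metadataPrefix, from, and resumptionToken are standard OAI-PMH parameters):

      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      OAI = "{http://www.openarchives.org/OAI/2.0/}"
      BASE_URL = "http://example.org/oai"  # placeholder endpoint

      def harvest(base_url, from_date):
          # Yields (identifier, deleted) for each harvested record header.
          params = {"verb": "ListRecords", "metadataPrefix": "oai_dc",
                    "from": from_date}
          while True:
              url = base_url + "?" + urllib.parse.urlencode(params)
              with urllib.request.urlopen(url) as resp:
                  root = ET.parse(resp).getroot()
              for rec in root.iter(OAI + "record"):
                  header = rec.find(OAI + "header")
                  deleted = header.get("status") == "deleted"
                  yield header.findtext(OAI + "identifier"), deleted
              token = root.findtext(f".//{OAI}resumptionToken")
              if not token:
                  break
              params = {"verb": "ListRecords", "resumptionToken": token}

      # for identifier, deleted in harvest(BASE_URL, "2003-01-01"):
      #     print(identifier, "deleted" if deleted else "new/updated")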

Languages

  • e 129
  • d 104
  • f 6
  • pl 1
  • sp 1

Types

  • a 193
  • m 26
  • s 15
  • el 13
  • x 5
  • n 4
  • b 2
  • l 2
  • ? 1
  • r 1