Search (80 results, page 1 of 4)

  • theme_ss:"Datenformate"
  • year_i:[2000 TO 2010}
  1. Taylor, M.; Dickmeiss, A.: Delivering MARC/XML records from the Library of Congress catalogue using the open protocols SRW/U and Z39.50 (2005) 0.01
    0.008164991 = product of:
      0.03810329 = sum of:
        0.016133383 = weight(_text_:system in 4350) [ClassicSimilarity], result of:
          0.016133383 = score(doc=4350,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.20878783 = fieldWeight in 4350, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=4350)
        0.0070881573 = weight(_text_:information in 4350) [ClassicSimilarity], result of:
          0.0070881573 = score(doc=4350,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16457605 = fieldWeight in 4350, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4350)
        0.014881751 = weight(_text_:retrieval in 4350) [ClassicSimilarity], result of:
          0.014881751 = score(doc=4350,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.20052543 = fieldWeight in 4350, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=4350)
      0.21428572 = coord(3/14)
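The explain tree above can be reproduced by hand. A minimal sketch of Lucene's ClassicSimilarity arithmetic, with the idf, queryNorm, and fieldNorm values taken verbatim from the tree (tf is the square root of the term frequency):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
    query_weight = idf * query_norm          # queryWeight = idf * queryNorm
    tf = math.sqrt(freq)                     # tf(freq) = sqrt(freq)
    field_weight = tf * idf * field_norm     # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.02453417

# the three matching terms of result 1 (doc 4350)
s_system      = term_score(2.0, 3.1495528, QUERY_NORM, 0.046875)
s_information = term_score(4.0, 1.7554779, QUERY_NORM, 0.046875)
s_retrieval   = term_score(2.0, 3.024915,  QUERY_NORM, 0.046875)

# only 3 of the 14 query clauses matched -> coord(3/14)
total = (s_system + s_information + s_retrieval) * (3 / 14)
print(total)   # ≈ 0.008164991, the score shown for result 1
```

The same formula explains every score tree in this listing; only the frequencies, idf values, and fieldNorms differ per document.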
    
    Abstract
    The MARC standard for representing catalogue records and the Z39.50 standard for locating and retrieving them have facilitated interoperability in the library domain for more than a decade. With the increasing ubiquity of XML, these standards are being superseded by MARCXML and MarcXchange for record representation and SRW/U for searching and retrieval. Service providers moving from the older standards to the newer generally need to support both old and new forms during the transition period. YAZ Proxy uses a novel approach to provide SRW/MARCXML access to the Library of Congress catalogue, by translating requests into Z39.50 and querying the older system directly. As a fringe benefit, it also greatly accelerates Z39.50 access.
    Footnote
    Lecture, World Library and Information Congress: 71st IFLA General Conference and Council, "Libraries - A voyage of discovery", August 14th - 18th 2005, Oslo, Norway.
    Series
    121 UNIMARC with Information Technology ; 065-E
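The SRW/U access path described in the abstract is easiest to picture as a plain searchRetrieve URL. A sketch of such a request (the parameter names are standard SRU; the endpoint and CQL query are illustrative):

```python
from urllib.parse import urlencode

def sru_search_url(base, query, max_records=10):
    """Build an SRU searchRetrieve URL asking for MARCXML records."""
    params = {
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": query,                 # a CQL query string
        "maximumRecords": max_records,
        "recordSchema": "marcxml",      # ask for MARCXML in the response
    }
    return base + "?" + urlencode(params)

# A gateway such as YAZ Proxy receives a URL like this and translates it
# into a Z39.50 session against the underlying catalogue system.
url = sru_search_url("http://lx2.loc.gov:210/LCDB",
                     'dc.title = "information retrieval"')
print(url)
```

The point of the proxy approach is that clients speak only this URL-based protocol while the legacy Z39.50 server needs no changes at all.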
  2. ISO 25964 Thesauri and interoperability with other vocabularies (2008) 0.01
    0.0072107157 = product of:
      0.033650007 = sum of:
        0.011408024 = weight(_text_:system in 1169) [ClassicSimilarity], result of:
          0.011408024 = score(doc=1169,freq=4.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.14763528 = fieldWeight in 1169, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1169)
        0.005603681 = weight(_text_:information in 1169) [ClassicSimilarity], result of:
          0.005603681 = score(doc=1169,freq=10.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.1301088 = fieldWeight in 1169, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1169)
        0.016638303 = weight(_text_:retrieval in 1169) [ClassicSimilarity], result of:
          0.016638303 = score(doc=1169,freq=10.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.22419426 = fieldWeight in 1169, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1169)
      0.21428572 = coord(3/14)
    
    Abstract
    T.1: Today's thesauri are mostly electronic tools, having moved on from the paper-based era when thesaurus standards were first developed. They are built and maintained with the support of software and need to integrate with other software, such as search engines and content management systems. Whereas in the past thesauri were designed for information professionals trained in indexing and searching, today there is a demand for vocabularies that untrained users will find to be intuitive. ISO 25964 makes the transition needed for the world of electronic information management. However, part 1 retains the assumption that human intellect is usually involved in the selection of indexing terms and in the selection of search terms. If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved. This is the main principle underlying thesaurus design, even though a thesaurus built for human users may also be applied in situations where computers make the choices. Efficient exchange of data is a vital component of thesaurus management and exploitation. Hence the inclusion in this standard of recommendations for exchange formats and protocols. Adoption of these will facilitate interoperability between thesaurus management systems and the other computer applications, such as indexing and retrieval systems, that will utilize the data. Thesauri are typically used in post-coordinate retrieval systems, but may also be applied to hierarchical directories, pre-coordinate indexes and classification systems. Increasingly, thesaurus applications need to mesh with others, such as automatic categorization schemes, free-text search systems, etc. Part 2 of ISO 25964 describes additional types of structured vocabulary and gives recommendations to enable interoperation of the vocabularies at all stages of the information storage and retrieval process.
    T.2: The ability to identify and locate relevant information among vast collections and other resources is a major and pressing challenge today. Several different types of vocabulary are in use for this purpose. Some of the most widely used vocabularies were designed a hundred years ago and have been evolving steadily. A different generation of vocabularies is now emerging, designed to exploit the electronic media more effectively. A good understanding of the previous generation is still essential for effective access to collections indexed with them. An important objective of ISO 25964 as a whole is to support data exchange and other forms of interoperability in circumstances in which more than one structured vocabulary is applied within one retrieval system or network. Sometimes one vocabulary has to be mapped to another, and it is important to understand both the potential and the limitations of such mappings. In other systems, a thesaurus is mapped to a classification scheme, or an ontology to a thesaurus. Comprehensive interoperability needs to cover the whole range of vocabulary types, whether young or old. Concepts in different vocabularies are related only in that they have the same or similar meaning. However, the meaning can be found in a number of different aspects within each particular type of structured vocabulary:
    - within terms or captions selected in different languages;
    - in the notation assigned indicating a place within a larger hierarchy;
    - in the definition, scope notes, history notes and other notes that explain the significance of that concept; and
    - in explicit relationships to other concepts or entities within the same vocabulary.
    In order to create mappings from one structured vocabulary to another it is first necessary to understand, within the context of each different type of structured vocabulary, the significance and relative importance of each of the different elements in defining the meaning of that particular concept.
ISO 25964-1 describes the key characteristics of thesauri along with additional advice on best practice. ISO 25964-2 focuses on other types of vocabulary and does not attempt to cover all aspects of good practice. It concentrates on those aspects which need to be understood if one of the vocabularies is to work effectively alongside one or more of the others. Recognizing that a new standard cannot be applied to some existing vocabularies, this part of ISO 25964 provides informative description alongside the recommendations, the aim of which is to enable users and system developers to interpret and implement the existing vocabularies effectively. The remainder of ISO 25964-2 deals with the principles and practicalities of establishing mappings between vocabularies.
    Issue
    Pt.1: Thesauri for information retrieval - Pt.2: Interoperability with other vocabularies.
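The vocabulary-to-vocabulary mappings that Part 2 standardizes can be made concrete with a small data structure. A sketch, in which the concepts and classification notations are invented for illustration and the type codes EQ, BM, NM, and RM stand for the equivalence, broader, narrower, and related mapping distinctions the standard describes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mapping:
    source: str   # thesaurus concept (preferred term)
    mtype: str    # EQ / BM / NM / RM mapping type
    target: str   # e.g. a notation in a classification scheme

# Invented example: mapping thesaurus terms to classification notations.
mappings = [
    Mapping("information retrieval", "EQ", "025.04"),
    Mapping("online catalogues",     "BM", "025"),     # target is broader
    Mapping("indexing",              "RM", "025.3"),   # associative mapping
]

def targets_for(term, mappings):
    """All target notations mapped from a given source term."""
    return [m.target for m in mappings if m.source == term]

print(targets_for("information retrieval", mappings))  # ['025.04']
```

An exchange format as recommended by the standard would serialize exactly such triples, so that a retrieval system can expand or translate a query across vocabularies.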
  3. Coyle, K.: Future considerations : the functional library systems record (2004) 0.00
    0.004972478 = product of:
      0.034807343 = sum of:
        0.021511177 = weight(_text_:system in 562) [ClassicSimilarity], result of:
          0.021511177 = score(doc=562,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.27838376 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=562)
        0.0132961655 = product of:
          0.026592331 = sum of:
            0.026592331 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.026592331 = score(doc=562,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.30952093 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Abstract
    The paper performs a thought experiment on the concept of a record based on the Functional Requirements for Bibliographic Records and library system functions, and concludes that if we want to develop a functional bibliographic record we need to do it within the context of a flexible, functional library systems record structure. The article suggests a new way to look at the library systems record that would allow libraries to move forward in terms of technology but also in terms of serving library users.
    Source
    Library hi tech. 22(2004) no.2, S.166-174
  4. Johnson, B.C.: XML and MARC : which is "right"? (2001) 0.00
    0.003927156 = product of:
      0.02749009 = sum of:
        0.010128049 = weight(_text_:information in 5423) [ClassicSimilarity], result of:
          0.010128049 = score(doc=5423,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23515764 = fieldWeight in 5423, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5423)
        0.017362041 = weight(_text_:retrieval in 5423) [ClassicSimilarity], result of:
          0.017362041 = score(doc=5423,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.23394634 = fieldWeight in 5423, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5423)
      0.14285715 = coord(2/14)
    
    Abstract
    This article explores recent discussions about appropriate mark-up conventions for library information intended for use on the World Wide Web. In particular, the question of whether the MARC 21 format will continue to be useful and whether the time is right for a full-fledged conversion effort to XML is explored. The author concludes that the MARC format will be relevant well into the future, and its use will not hamper access to bibliographic information via the web. Early XML exploratory efforts carried out at Stanford University's Lane Medical Library are reported on. Although these efforts are a promising start, much more consultation and investigation is needed to arrive at broadly acceptable standards for XML library information encoding and retrieval.
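The MARC-to-XML direction weighed in the abstract is mechanical once a target schema is fixed. A minimal sketch that emits a record in the MARC21 slim (MARCXML) vocabulary; the element names and namespace are those of the slim schema, while the bibliographic content is invented:

```python
import xml.etree.ElementTree as ET

NS = "http://www.loc.gov/MARC21/slim"
ET.register_namespace("marc", NS)

def marcxml_record(title, author):
    """Build a minimal MARCXML record with a 100 and a 245 field."""
    rec = ET.Element(f"{{{NS}}}record")
    f100 = ET.SubElement(rec, f"{{{NS}}}datafield", tag="100", ind1="1", ind2=" ")
    ET.SubElement(f100, f"{{{NS}}}subfield", code="a").text = author
    f245 = ET.SubElement(rec, f"{{{NS}}}datafield", tag="245", ind1="1", ind2="0")
    ET.SubElement(f245, f"{{{NS}}}subfield", code="a").text = title
    return rec

rec = marcxml_record('XML and MARC : which is "right"?', "Johnson, B.C.")
xml_out = ET.tostring(rec, encoding="unicode")
print(xml_out)
```

Because MARCXML preserves the tag/indicator/subfield structure one-to-one, conversions in either direction lose nothing, which is part of why the author sees no conflict between keeping MARC and adopting XML.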
  5. Oehlschläger, S.: Arbeitsgemeinschaft der Verbundsysteme : Aus der 46. Sitzung am 21. und 22. April 2004 im Bibliotheksservice-Zentrum Baden-Württemberg in Konstanz (2004) 0.00
    0.0038896988 = product of:
      0.018151928 = sum of:
        0.009411139 = weight(_text_:system in 2434) [ClassicSimilarity], result of:
          0.009411139 = score(doc=2434,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.1217929 = fieldWeight in 2434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2434)
        0.0029237159 = weight(_text_:information in 2434) [ClassicSimilarity], result of:
          0.0029237159 = score(doc=2434,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.06788416 = fieldWeight in 2434, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2434)
        0.0058170725 = product of:
          0.011634145 = sum of:
            0.011634145 = weight(_text_:22 in 2434) [ClassicSimilarity], result of:
              0.011634145 = score(doc=2434,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.1354154 = fieldWeight in 2434, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2434)
          0.5 = coord(1/2)
      0.21428572 = coord(3/14)
    
    Content
    "Cross-consortium interlibrary loan for monographs is about to be introduced nationwide. The prerequisite is a functioning online interlibrary loan system in each consortium; this has been realized as a prototype. The Arbeitsgemeinschaft der Verbundsysteme expects that live operation can begin on 1 January 2005 and that services can then be billed under the new interlibrary loan regulations. The working group on cross-consortium interlibrary loan will meet in June to clarify remaining details. It was already decided at the previous meeting that the individual libraries should determine the routing, and that the consortium centres will intervene only if problems arise. Individual routing control, both within a consortium and in the order in which other consortia are approached, has high priority in some consortia. Traditionally grown relationships must be representable by the ordering systems. Local cooperation will also be possible across consortium boundaries. With regard to settling accounts for cross-consortium interlibrary loans, the consortia have agreed on a uniform accounting period. A further prerequisite is that the funding bodies create the necessary framework for billing and that the new interlibrary loan regulations are put into force in all federal states in time."
    - Project on migration to international formats and rules (MARC21, AACR2): At the time of the Arbeitsgemeinschaft's meeting, the project on migrating to international formats and rules (MARC21, AACR2) was close to completion. The main project results were presented at the session of the Standardisierungsausschuss during the 2nd Leipzig Congress for Information and Libraries. On the basis of the available information, the members of the Arbeitsgemeinschaft der Verbundsysteme assume that the financial argument can no longer be decisive in the upcoming decision. Even though a clear migration decision by the Standardisierungsausschuss is currently considered politically unenforceable, the members of the Arbeitsgemeinschaft der Verbundsysteme view the development positively in light of the project results. The discussion exposed deficits of the German cataloguing rules and of consortium practice, and initiated various innovations. To improve data exchange among themselves, the consortium centres see the need, independently of any decision by the Standardisierungsausschuss, to homogenize their data holdings and to reduce hierarchies or simplify the linking structures. The development of the Functional Requirements for Bibliographic Records (FRBR) must also be taken into account in these considerations. The formats must be developed so that all relevant information can be carried in the title record. A convergence of rules and format is the aim.
  6. Riva, P.: Mapping MARC 21 linking entry fields to FRBR and Tillett's taxonomy of bibliographic relationships (2004) 0.00
    0.0037293585 = product of:
      0.026105508 = sum of:
        0.016133383 = weight(_text_:system in 136) [ClassicSimilarity], result of:
          0.016133383 = score(doc=136,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.20878783 = fieldWeight in 136, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=136)
        0.009972124 = product of:
          0.019944249 = sum of:
            0.019944249 = weight(_text_:22 in 136) [ClassicSimilarity], result of:
              0.019944249 = score(doc=136,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.23214069 = fieldWeight in 136, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=136)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Abstract
    Bibliographic relationships have taken on even greater importance in the context of ongoing efforts to integrate concepts from the Functional Requirements for Bibliographic Records (FRBR) into cataloging codes and database structures. In MARC 21, the linking entry fields are a major mechanism for expressing relationships between bibliographic records. Taxonomies of bibliographic relationships have been proposed by Tillett, with an extension by Smiraglia, and in FRBR itself. The present exercise is to provide a detailed bidirectional mapping of the MARC 21 linking fields to these two schemes. The correspondence of the Tillett taxonomic divisions to the MARC categorization of the linking fields as chronological, horizontal, or vertical is examined as well. Application of the findings to MARC format development and system functionality is discussed.
    Date
    10. 9.2000 17:38:22
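A fragment of the kind of bidirectional table the article builds can be sketched in code. The table below is restricted to a few linking entry fields whose MARC 21 semantics are well established; the Tillett category assignments shown here are illustrative, not Riva's published mapping:

```python
# MARC 21 linking entry field -> (field name, MARC direction, Tillett-style category)
LINKING_FIELDS = {
    "773": ("Host Item Entry",                "vertical",      "whole-part"),
    "775": ("Other Edition Entry",            "horizontal",    "derivative"),
    "776": ("Additional Physical Form Entry", "horizontal",    "equivalence"),
    "780": ("Preceding Entry",                "chronological", "sequential"),
    "785": ("Succeeding Entry",               "chronological", "sequential"),
}

def fields_for_category(category):
    """Reverse direction of the mapping: taxonomy category -> MARC tags."""
    return sorted(tag for tag, (_, _, cat) in LINKING_FIELDS.items()
                  if cat == category)

print(fields_for_category("sequential"))  # ['780', '785']
```

Having the mapping queryable in both directions is exactly what makes it usable for system functionality, e.g. grouping FRBR-related records in a display.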
  7. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.00
    0.003544314 = product of:
      0.016540132 = sum of:
        0.0067222426 = weight(_text_:system in 1166) [ClassicSimilarity], result of:
          0.0067222426 = score(doc=1166,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.08699492 = fieldWeight in 1166, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
        0.0036171607 = weight(_text_:information in 1166) [ClassicSimilarity], result of:
          0.0036171607 = score(doc=1166,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.083984874 = fieldWeight in 1166, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
        0.0062007294 = weight(_text_:retrieval in 1166) [ClassicSimilarity], result of:
          0.0062007294 = score(doc=1166,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.08355226 = fieldWeight in 1166, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1166)
      0.21428572 = coord(3/14)
    
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files), which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years. The World Wide Web is generally understood to be poorly structured, both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general or any particular structured database would greatly benefit from increased authority control, it should be noted that our following considerations only refer to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we exclusively refer to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in similar ways to the ones presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records, like all other "authorities", are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually either kept in independent databases or in separate tables in the database containing the descriptive records. This practice points at a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results.
On one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time-consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service. In Germany, the Personal Name Authority File (PND, Personennamendatei) maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable. Two important current initiatives should be mentioned here: the Name Authority Cooperative (NACO) and the Virtual International Authority File (VIAF).
    NACO was established in 1976 and is hosted by the Library of Congress. At the beginning of 2003, nearly 400 institutions were involved in this undertaking, including 43 institutions from outside the United States. Despite the enormous success of NACO and the impressive annual growth of the initiative, there are requirements for participation that form an obstacle for many institutions: they have to follow the Anglo-American Cataloguing Rules (AACR2) and employ the MARC21 data format. Participating institutions also have to belong to either OCLC (Online Computer Library Center) or RLG (Research Libraries Group) in order to be able to contribute records, and they have to provide a specified minimum number of authority records per year. A recent proof-of-concept project of the Library of Congress, OCLC and the German National Library, the Virtual International Authority File (VIAF), will, in its first phase, test automatic linking of the records of the Library of Congress Name Authority File (LCNAF) and the German Personal Name Authority File by using matching algorithms and software developed by OCLC. The results are expected to form the basis of a "Virtual International Authority File". The project will then test the maintenance of the virtual authority file by employing the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest the metadata for new, updated, and deleted records. When using the "Virtual International Authority File" a cataloguer will be able to check the system to see whether the authority record he wants to establish already exists. The final phase of the project will test possibilities for displaying records in the preferred language and script of the end user. Currently, there are still some clear limitations associated with the ways in which authority records are used by memory institutions.
One of the main problems has to do with limited access: generally only large institutions or those that are part of a library network have unlimited online access to permanently updated authority records. Smaller institutions outside these networks usually have to fall back on less efficient ways of obtaining authority data, or have no access at all. Cross-domain sharing of authority data between libraries, archives, museums and other memory institutions simply does not happen at present. Public users are, by and large, not even aware that such things as name authority records exist and are excluded from access to these information resources.
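The record-linking step tested in VIAF's first phase can be pictured with a toy normalizer. This is an invented illustration of the general idea, not OCLC's actual matching algorithm:

```python
import re
import unicodedata

def normalize_name(name):
    """Crude normalization: strip accents, life dates, punctuation; lowercase;
    make word order irrelevant so inverted and direct forms compare equal."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = re.sub(r"\b\d{4}-?(\d{4})?\b", "", name)    # drop life dates
    name = re.sub(r"[^\w\s]", " ", name).lower()
    return " ".join(sorted(name.split()))

def same_person(heading_a, heading_b):
    """True if two authority headings normalize to the same key."""
    return normalize_name(heading_a) == normalize_name(heading_b)

# An LCNAF-style heading vs a PND-style heading for the same person
print(same_person("Goethe, Johann Wolfgang von, 1749-1832",
                  "Johann Wolfgang von Goethe"))   # True
```

A real matcher must of course also use the variant forms, dates, and linked bibliographic records to avoid conflating distinct people with similar names, which is precisely why the project tests the algorithms before building the virtual file.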
  8. Eden, B.L.: Metadata and librarianship : will MARC survive? (2004) 0.00
    0.003108885 = product of:
      0.021762194 = sum of:
        0.010128049 = weight(_text_:information in 4750) [ClassicSimilarity], result of:
          0.010128049 = score(doc=4750,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23515764 = fieldWeight in 4750, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4750)
        0.011634145 = product of:
          0.02326829 = sum of:
            0.02326829 = weight(_text_:22 in 4750) [ClassicSimilarity], result of:
              0.02326829 = score(doc=4750,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.2708308 = fieldWeight in 4750, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4750)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Abstract
    Metadata schemas and standards are now a part of the information landscape. Librarianship has slowly realized that MARC is only one of a proliferation of metadata standards, and that MARC has many pros and cons related to its age, original conception, and biases. Should librarianship continue to promote the MARC standard? Are there better metadata standards out there that are more robust, user-friendly, and dynamic in the organization and presentation of information? This special issue examines current initiatives that are actively incorporating MARC standards and concepts into new metadata schemata, while also predicting a future where MARC may not be the metadata schema of choice for the organization and description of information.
    Source
    Library hi tech. 22(2004) no.1, S.6-7
  9. Oehlschläger, S.: Umstieg auf MARC21 (2007) 0.00
    0.0029967064 = product of:
      0.041953888 = sum of:
        0.041953888 = product of:
          0.083907776 = sum of:
            0.083907776 = weight(_text_:datenmodell in 555) [ClassicSimilarity], result of:
              0.083907776 = score(doc=555,freq=2.0), product of:
                0.19304088 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.02453417 = queryNorm
                0.43466327 = fieldWeight in 555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=555)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Content
    The core element of the migration is the development of a binding overall concordance from MAB2 to MARC 21. In 2005 the Expertengruppe Datenformate examined the target format MARC 21 and its underlying data model in depth. Attention centred on analysing and evaluating the differences between MAB2 and MARC 21 and on finding solutions so that the existing data can be transferred with as little loss as possible. A key point was the mapping of multi-volume works, which are represented differently in MAB2 than in MARC 21. The Expertengruppe Datenformate's specifications for representing finite multi-volume works in MARC 21, together with examples, were already published in December on the homepage of the Deutsche Nationalbibliothek."
  10. Carvalho, J.R. de; Cordeiro, M.I.; Lopes, A.; Vieira, M.: Meta-information about MARC : an XML framework for validation, explanation and help systems (2004)
    Abstract
    This article proposes a schema for meta-information about MARC that can express at a fairly comprehensive level the syntactic and semantic aspects of MARC formats in XML, including not only rules but also all texts and examples that are conveyed by MARC documentation. It can be thought of as an XML version of the MARC or UNIMARC manuals, for both machine and human usage. The article explains how such a schema can be the central piece of a more complete framework, to be used in conjunction with "slim" record formats, providing a rich environment for the automated processing of bibliographic data.
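The idea of one XML document carrying both machine-checkable rules and the manual's explanatory text can be sketched in a few lines. The element and attribute names below are invented for illustration; the article's actual schema will differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of a meta-information document about one MARC field:
# it holds validation rules *and* the help text of the format manual.
META = """
<field tag="245" repeatable="false">
  <name>Title Statement</name>
  <help>Title proper and statement of responsibility.</help>
  <subfield code="a" mandatory="true"><name>Title</name></subfield>
</field>
"""

def validate(tag, subfields, meta_xml=META):
    """Check one record field against the meta-information; return problems."""
    meta = ET.fromstring(meta_xml)
    problems = []
    if meta.get("tag") != tag:
        return [f"no meta-information for field {tag}"]
    for sf in meta.findall("subfield"):
        if sf.get("mandatory") == "true" and sf.get("code") not in subfields:
            problems.append(f"missing mandatory subfield ${sf.get('code')}")
    return problems

def help_text(meta_xml=META):
    """The same document drives a help system."""
    return ET.fromstring(meta_xml).findtext("help")
```

Because validator and help system read the same document, rules and documentation cannot drift apart, which is the central appeal of the framework described in the abstract.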
    Source
    Library hi tech. 22(2004) no.2, S.131-137
  11. Avram, H.D.: Machine Readable Cataloging (MARC): 1961-1974 (2009)
    Abstract
    The MARC Program of the Library of Congress, led during its formative years by the author of this entry, was a landmark in the history of library automation. Technical procedures, standards, and formatting for the catalog record were experimented with and developed in modern form in this project. The project began when computers were mainframes, slow and limited in storage. So little was known then about automating library information resources that the MARC project can be seen as a pioneering effort of immeasurable impact.
    Date
    27. 8.2011 14:22:53
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  12. Concise UNIMARC Classification Format : Draft 5 (20000125) (2000)
    Theme
    Klassifikationssysteme im Online-Retrieval
  13. Matoria, R.K.; Upadhyay, P.K.: Migration of data from one library management system to another : a case study in India (2004)
  14. Tell, B.: On MARC and natural text searching : a review of Pauline Cochrane's inspirational thinking grafted onto a Swedish spy on library matters (2000)
    Abstract
    The following discussion is offered in appreciation of the invaluable inspiration that Pauline Cochrane, through her acumen and perspicacity, has implanted in my thinking about various applications of library and information science, especially those involving machine-readable records and subject categorization. It is indeed an honor for me, at my age, to be invited to contribute to Pauline's Festschrift when instead I should be concerned about my forthcoming obituary. In what follows, I must give some background on what shaped my thinking before my involvement in the field and thus before I encountered Pauline.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
  15. Oehlschläger, S.: Aus der 48. Sitzung der Arbeitsgemeinschaft der Verbundsysteme am 12. und 13. November 2004 in Göttingen (2005)
    Content
    Die Deutsche Bibliothek - Content retrieval: The project aims to develop and introduce procedures that automatically, without intellectual processing, provide sufficient entry points for content retrieval. This may mean searching the contents of full texts, digital images, audio files, video files, etc. of digital resources archived at Die Deutsche Bibliothek, or of digital surrogates of archived analogue resources (e.g. OCR output). Content that already exists in electronic form but has so far been unavailable, or only partially available, to the library's Internet users is to be made usable as comprehensively and as conveniently as possible. In addition, content that describes an object catalogued in ILTIS is to be used to point to that object. The highest priority is given to indexing content in text formats. As a first step, the full text of all journals digitised in the project "Exilpresse digital" was made available for an extended search. As a next step, the PSI software will be evaluated for full-text indexing of abstracts. MILOS: Deploying MILOS opens up the possibility of automatically enriching holdings that have little or no subject indexing with supplementary subject information, the emphasis being on free-text indexing. The system, already in use in several libraries and meanwhile licensed for Germany by Die Deutsche Bibliothek, has been ported to UNIX and adapted. Nearly the entire stock has now been processed retrospectively, and the data will be available for searching in the joint OPAC. The index entries, stored in an XML structure, are fully indexed and made accessible. A further development step will be the use of MILOS in online operation.
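The approach described for MILOS, free-text index entries stored in XML and themselves fully indexed for retrieval, can be sketched as a tiny inverted index. The XML element names here are invented for illustration and are not taken from MILOS:

```python
import re
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical per-document index entries in an XML structure.
ENTRIES = """
<entries>
  <entry doc="1">Exilpresse digital Volltext</entry>
  <entry doc="2">Automatische Freitext-Indexierung mit MILOS</entry>
</entries>
"""

def build_index(xml_text):
    """Tokenise every entry and build term -> set-of-document-ids."""
    index = defaultdict(set)
    for entry in ET.fromstring(xml_text).findall("entry"):
        for token in re.findall(r"\w+", entry.text.lower()):
            index[token].add(entry.get("doc"))
    return index

index = build_index(ENTRIES)
print(sorted(index["milos"]))
```

Every token of every entry becomes a search entry point, which is exactly the "sufficient entry points without intellectual processing" goal stated above.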
  16. Kurth, M.; Ruddy, D.; Rupp, N.: Repurposing MARC metadata : using digital project experience to develop a metadata management design (2004)
    Abstract
    Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non-MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non-MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University library mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
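Repurposing of the kind described, moving metadata from MARC into a non-MARC scheme, typically starts from a crosswalk such as the Library of Congress MARC-to-Dublin-Core mapping. A reduced sketch with only three of the mapped fields:

```python
# A few entries of the MARC 21 -> simple Dublin Core crosswalk.
CROSSWALK = {
    ("100", "a"): "creator",
    ("245", "a"): "title",
    ("260", "c"): "date",
}

def repurpose(marc_fields):
    """marc_fields: iterable of (tag, subfield, value) triples.

    Returns a Dublin Core dict; unmapped fields are dropped, which is
    one of the lossy design decisions such projects must manage."""
    dc = {}
    for tag, code, value in marc_fields:
        element = CROSSWALK.get((tag, code))
        if element:
            dc.setdefault(element, []).append(value)
    return dc

print(repurpose([("245", "a", "Repurposing MARC metadata"),
                 ("100", "a", "Kurth, M.")]))
```

Managing such crosswalks as data (rather than hard-coding them into scripts) is the kind of coordinated metadata management the article argues for.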
    Source
    Library hi tech. 22(2004) no.2, S.144-152
  17. McCallum, S.H.: Machine Readable Cataloging (MARC): 1975-2007 (2009)
    Date
    27. 8.2011 14:22:38
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  18. Kushwoh, S.S.; Gautam, J.N.; Singh, R.: Migration from CDS/ISIS to KOHA : a case study of data conversion from CCF to MARC 21 (2009)
    Abstract
    Standards are important for quality and interoperability in any system. Bibliographic record creation standards such as MARC 21 (Machine-Readable Cataloging), CCF (Common Communication Format), UNIMARC (Universal MARC) and their local variants are in use throughout the library community. Integrated Library Management Systems (ILMS) rely on these standards for database design and for the creation of bibliographic records. Their use is important for the uniformity of the system and of the bibliographic data, but problems arise when a library wants to switch from one system to another that uses different standards. This paper discusses migration from one record standard to another, the mapping of data, and related issues. Data exported from CDS/ISIS CCF-based records to KOHA MARC 21-based records is discussed as a case study. The methodology, with a few modifications, can also be applied to migrating data in other bibliographic formats. Freeware tools can be utilized for the migration.
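Both CCF/CDS-ISIS and MARC 21 exchange records in the ISO 2709 structure (leader, directory, data fields), so a migration pipeline usually round-trips through it. A minimal sketch of that structure; the leader here is simplified, and a real export must honour the full leader semantics of each format:

```python
# ISO 2709 control characters: field terminator, subfield delimiter,
# record terminator.
FT, SF, RT = b"\x1e", b"\x1f", b"\x1d"

def build(fields):
    """Serialise (tag, value) pairs into a minimal ISO 2709 record."""
    directory, data = b"", b""
    for tag, value in fields:
        field = value.encode("utf-8") + FT
        # Directory entry: 3-char tag, 4-digit length, 5-digit start offset.
        directory += f"{tag}{len(field):04d}{len(data):05d}".encode()
        data += field
    directory += FT
    base = 24 + len(directory)          # base address of data
    length = base + len(data) + 1       # total record length incl. RT
    leader = f"{length:05d}nam  22{base:05d}   4500".encode()
    return leader + directory + data + RT

def parse(record: bytes):
    """Recover (tag, value) pairs from an ISO 2709 record."""
    base = int(record[12:17])           # base address from leader
    fields = []
    directory = record[24:record.index(FT)]
    for i in range(0, len(directory), 12):
        tag = directory[i:i + 3].decode()
        length = int(directory[i + 3:i + 7])
        start = int(directory[i + 7:i + 12])
        value = record[base + start:base + start + length - 1].decode("utf-8")
        fields.append((tag, value))
    return fields
```

Once both sides speak ISO 2709, the remaining migration work is the tag-level mapping from CCF to MARC 21 that the paper's case study walks through.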
  19. MARC and metadata : METS, MODS, and MARCXML: current and future implications (2004)
    Source
    Library hi tech. 22(2004) no.1
  20. Block, B.; Hengel, C.; Heuvelmann, R.; Katz, C.; Rusch, B.; Schmidgall, K.; Sigrist, B.: Maschinelles Austauschformat für Bibliotheken und die Functional Requirements for Bibliographic Records : Oder: Wieviel FRBR verträgt MAB? (2005)
    Abstract
    A consistent implementation of the FRBR model, OCLC writes, would mean the greatest change in cataloguing in a hundred years. But there are other voices. On the fringes of an FRBR workshop held at Die Deutsche Bibliothek in 2004, the relationship between FRBR and cataloguing practice was compared to that between football commentators and the football team: the former theorise, after the final whistle, about what the latter have just done. What, then, are the Functional Requirements for Bibliographic Records really about? Are both voices perhaps right? How does the MAB format relate to the model? How can the entities, with their respective attributes, be represented in MAB? Does MAB offer the structural prerequisites to support FRBR applications? These are the questions that occupied the MAB Committee, which since the beginning of this year has operated as the Expertengruppe Datenformate, and to which first answers are attempted below. The Functional Requirements for Bibliographic Records, FRBR for short, are a 1998 recommendation of the International Federation of Library Associations and Institutions (IFLA) for restructuring library catalogues. The FRBR are conceived as a logical reference model for bibliographic descriptions; they are expressly not a ready-to-implement data model, let alone a practical cataloguing code. The model remains at an abstract level, describing abstract entities with their attributes and relationships to one another.
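The abstract Group 1 entities of FRBR (work, expression, manifestation, item) and their whole-part chain can be made concrete in a few lines. The attributes chosen here are illustrative, not the full FRBR attribute set, and the example values are invented:

```python
from dataclasses import dataclass, field
from typing import List

# FRBR Group 1: Work -> Expression -> Manifestation -> Item.
@dataclass
class Item:
    shelfmark: str                      # one physical copy

@dataclass
class Manifestation:
    publisher: str
    year: int
    items: List[Item] = field(default_factory=list)

@dataclass
class Expression:
    language: str
    form: str                           # e.g. "text"
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:
    title: str
    expressions: List[Expression] = field(default_factory=list)

# One resource modelled through all four entities:
work = Work("Faust")
german_text = Expression(language="ger", form="text")
work.expressions.append(german_text)
printed = Manifestation(publisher="Reclam", year=2000)
german_text.manifestations.append(printed)
printed.items.append(Item(shelfmark="DD 2000/123"))
```

The question the Expert Group poses, "Wieviel FRBR verträgt MAB?", then becomes concrete: which of these entities and links can a flat MAB record carry, and which would need new structures?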

Languages

  • e 50
  • d 29

Types

  • a 70
  • el 6
  • s 4
  • x 2
  • b 1
  • m 1
  • n 1