Search (16 results, page 1 of 1)

  • theme_ss:"Datenformate"
  • year_i:[2010 TO 2020}
  1. Lee, S.; Jacob, E.K.: An integrated approach to metadata interoperability : construction of a conceptual structure between MARC and FRBR (2011) 0.04
    Abstract
    Machine-Readable Cataloging (MARC) is currently the most broadly used bibliographic standard for encoding and exchanging bibliographic data. However, MARC may not fully support representation of the dynamic nature and semantics of digital resources because of its rigid and single-layered linear structure. The Functional Requirements for Bibliographic Records (FRBR) model, which is designed to overcome the problems of MARC, does not provide sufficient data elements and adopts a predetermined hierarchy. A flexible structure for bibliographic data with detailed data elements is needed. Integrating MARC format with the hierarchical structure of FRBR is one approach to meet this need. The purpose of this research is to propose an approach that can facilitate interoperability between MARC and FRBR by providing a conceptual structure that can function as a mediator between MARC data elements and FRBR attributes.
    Date
    10. 9.2000 17:38:22
  2. Stephens, O.: Introduction to OpenRefine (2014) 0.03
    Abstract
    OpenRefine is described as a tool for working with 'messy' data - but what does this mean? It is probably easiest to describe the kinds of data OpenRefine is good at working with and the sorts of problems it can help you solve. OpenRefine is most useful where you have data in a simple tabular format but with internal inconsistencies, either in data formats, in where data appears, or in the terminology used. It can help you:
    • Get an overview of a data set
    • Resolve inconsistencies in a data set
    • Split data up into more granular parts
    • Match local data up to other data sets
    • Enhance a data set with data from other sources
    Some common scenarios might be:
    1. Where you want to know how many times a particular value appears in a column in your data.
    2. Where you want to know how values are distributed across your whole data set.
    3. Where you have a list of dates which are formatted in different ways, and want to change all the dates in the list to a single common date format.
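    The three scenarios above translate directly into a few lines of data-frame code. The following is a minimal Python/pandas sketch (pandas >= 2.0 assumed for format="mixed"; the column names and sample values are hypothetical, not taken from the article):

      import pandas as pd

      df = pd.DataFrame({
          "publisher": ["Springer", "springer", "Springer ", "Elsevier"],
          "date": ["2014-03-01", "01/03/2014", "March 1, 2014", "2014-03-02"],
      })

      # 1. How many times each value appears in a column (a text facet).
      print(df["publisher"].value_counts())

      # 2. How values are distributed once lightly normalised (trimmed,
      #    lower-cased), which exposes near-duplicate values.
      print(df["publisher"].str.strip().str.lower().value_counts())

      # 3. Normalise differently formatted dates to one common format.
      df["date"] = pd.to_datetime(df["date"], format="mixed", dayfirst=True)
      print(df["date"].dt.strftime("%Y-%m-%d"))

    OpenRefine itself does this through facets and GREL expressions rather than code; the sketch only mirrors the outcomes.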
  3. Zapounidou, S.; Sfakakis, M.; Papatheodorou, C.: Library data integration : towards BIBFRAME mapping to EDM (2014) 0.02
    Abstract
    Integration of library data into the Linked Data environment is a key issue for libraries and is approached on the basis of interoperability between library data conceptual models. Achieving interoperability between different representations of the same or related entities across the library and cultural heritage domains would enhance the reusability of rich bibliographic data and support the development of new data-driven information services. This paper aims to contribute to the desired interoperability by attempting to map core semantic paths between the BIBFRAME and EDM conceptual models. BIBFRAME is being developed by the Library of Congress to support the transformation of legacy library data in MARC format into linked data. EDM is the model developed for and used in the Europeana cultural heritage aggregation portal.
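    The paper's actual mapping of semantic paths is not reproduced in this abstract. As a rough illustration of the shape such a crosswalk takes, here is a hedged Python sketch; the class correspondences below are simplified assumptions for illustration, not the authors' published mapping:

      # Candidate correspondences between core BIBFRAME and EDM constructs.
      # These pairings are illustrative assumptions only.
      BF = "http://id.loc.gov/ontologies/bibframe/"
      EDM = "http://www.europeana.eu/schemas/edm/"

      CANDIDATE_PATHS = {
          BF + "Work": EDM + "ProvidedCHO",      # intellectual content
          BF + "Instance": EDM + "WebResource",  # a concrete representation
      }

      def map_class(bibframe_class: str) -> str | None:
          """Look up the EDM counterpart of a BIBFRAME class, if any."""
          return CANDIDATE_PATHS.get(bibframe_class)

      print(map_class(BF + "Work"))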
  4. Xu, A.; Hess, K.; Akerman, L.: From MARC to BIBFRAME 2.0 : Crosswalks (2018) 0.02
    Abstract
    One of the big challenges facing academic libraries today is to increase their relevance to their user communities. If libraries can increase the visibility of their resources on the open web, they improve their chances of reaching their user communities at the user's first search experience. BIBFRAME and library Linked Data will enable libraries to publish their resources in a way that the Web understands, to consume Linked Data to enrich their resources with material relevant to their user communities, and to visualize networks across collections. However, one of the important steps in transitioning to BIBFRAME and library Linked Data involves crosswalks: mapping MARC fields and subfields across data models and performing the data reformatting necessary to comply with the specifications of the new model, currently BIBFRAME 2.0. This article looks into how the Library of Congress has mapped library bibliographic data from the MARC format to the BIBFRAME 2.0 model and vocabulary (published and updated since April 2016, available from http://www.loc.gov/bibframe/docs/index.html), based on the recently released conversion specifications and converter developed by the Library of Congress with input from many community members. The BIBFRAME 2.0 standard and conversion tools will enable libraries to transform bibliographic data from MARC into BIBFRAME 2.0, which introduces a Linked Data model as the improved method of bibliographic control for the future, and to make bibliographic information more useful within and beyond library communities.
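    As a concrete illustration of what a crosswalk entry involves, here is a minimal Python sketch. The MARC tags and BIBFRAME properties below follow the general pattern of the LC mapping but are simplified examples; the official conversion specifications at the URL above are far more detailed and condition-dependent:

      # Simplified crosswalk: MARC field/subfield -> BIBFRAME 2.0 target.
      BF = "http://id.loc.gov/ontologies/bibframe/"

      CROSSWALK = {
          ("245", "a"): ("Instance", BF + "title"),             # title proper
          ("100", "a"): ("Work", BF + "contribution"),          # primary contributor
          ("020", "a"): ("Instance", BF + "identifiedBy"),      # ISBN
          ("264", "b"): ("Instance", BF + "provisionActivity"), # publisher name
      }

      def map_subfield(tag: str, code: str, value: str):
          """Return (target class, property, value) for one MARC subfield,
          or None when the crosswalk has no rule for it."""
          rule = CROSSWALK.get((tag, code))
          return (*rule, value) if rule else None

      print(map_subfield("245", "a", "Linked data for libraries"))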
  5. Galvão, R.M.: UNIMARC format relevance : maintenance or replacement? (2018) 0.02
    Abstract
    This article presents an empirical study focused on a qualitative analysis of the UNIMARC format. The structural quality of the data provided by the format is evaluated to determine its current suitability for meeting the requirements and trends in data architecture for the information network and the Semantic Web. Based on a set of quality characteristics that identify weaknesses in the data schema which cannot be bridged by simply converting the data to MARC XML or RDF/XML, we conclude that the UNIMARC format does not meet current metadata schema desiderata and must be replaced.
  6. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; Van de Sompel, H.: The Europeana Data Model (EDM) (2010) 0.02
    Abstract
    The Europeana Data Model (EDM) is a new approach towards structuring and representing data delivered to Europeana by the various contributing cultural heritage institutions. The model aims at greater expressivity and flexibility in comparison to the current Europeana Semantic Elements (ESE), which it is destined to replace. The design principles underlying the EDM are based on the core principles and best practices of the Semantic Web and Linked Data efforts to which Europeana wants to contribute. The model itself builds upon established standards like RDF(S), OAI-ORE, SKOS, and Dublin Core. It acts as a common top-level ontology which retains original data models and information perspectives while at the same time enabling interoperability. The paper elaborates on the aforementioned aspects and the design principles which drove the development of the EDM.
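    The core EDM pattern - an ore:Aggregation linking the provided cultural heritage object to its digital representations - can be sketched in a few triples. A minimal Python/rdflib sketch follows (the example URIs are hypothetical; only the namespaces and property names come from EDM itself):

      from rdflib import Graph, Literal, Namespace, RDF, URIRef

      EDM = Namespace("http://www.europeana.eu/schemas/edm/")
      ORE = Namespace("http://www.openarchives.org/ore/terms/")
      DC = Namespace("http://purl.org/dc/elements/1.1/")

      g = Graph()
      g.bind("edm", EDM); g.bind("ore", ORE); g.bind("dc", DC)

      cho = URIRef("http://example.org/item/42")         # the object itself
      agg = URIRef("http://example.org/aggregation/42")  # the provider's aggregation
      view = URIRef("http://example.org/image/42.jpg")   # a digital representation

      g.add((cho, RDF.type, EDM.ProvidedCHO))
      g.add((cho, DC.title, Literal("Example object")))
      g.add((agg, RDF.type, ORE.Aggregation))
      g.add((agg, EDM.aggregatedCHO, cho))
      g.add((agg, EDM.isShownBy, view))

      print(g.serialize(format="turtle"))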
  7. Miller, E.; Ogbuji, U.: Linked data design for the visible library (2015) 0.02
    Abstract
    In response to libraries' frustration over their rich resources being invisible on the web, Zepheira, at the request of the Library of Congress, created BIBFRAME, a bibliographic metadata framework for cataloging. The model replaces MARC records with linked data, promoting resource visibility through a rich network of links. In place of formal taxonomies, a small but extensible vocabulary streamlines metadata efforts. Rather than using a unique bibliographic record to describe one item, BIBFRAME draws on the Dublin Core and the Functional Requirements for Bibliographic Records (FRBR) to generate formalized descriptions of Work, Instance, Authority and Annotation as well as associations between items. Zepheira trains librarians to transform MARC records to BIBFRAME resources and adapt the vocabulary for specialized needs, while subject matter experts and technical experts manage content, site design and usability. With a different approach toward data modeling and metadata, previously invisible resources gain visibility through linking.
    Footnote
    Contribution to a special section "Linked data and the charm of weak semantics".
  8. Suominen, O.; Hyvönen, N.: From MARC silos to Linked Data silos? (2017) 0.01
    Abstract
    For some time now, libraries have increasingly been making their bibliographic metadata openly available as Linked Data. However, quite different models are used for structuring the bibliographic data. Some libraries use an FRBR-based model with several layers of entities, while others use flat, record-oriented models. This proliferation of data models makes reuse of the bibliographic data difficult. As a result, libraries have merely exchanged their former MARC silos for mutually incompatible Linked Data silos, and it is therefore often hard to combine and reuse data sets. Minor differences in data modelling can be handled with schema mappings, but it seems questionable whether interoperability has increased overall. The paper presents the results of a study of several published sets of bibliographic data. It also examines the different models for representing bibliographic data as RDF, as well as tools for generating such data from the MARC format. Finally, the approach taken by the National Library of Finland is discussed.
  9. Beall, J.; Mitchell, J.S.: History of the representation of the DDC in the MARC Classification Format (2010) 0.01
    Abstract
    This article explores the history of the representation of the Dewey Decimal Classification (DDC) in the Machine Readable Cataloging (MARC) formats, with a special emphasis on the development of the MARC classification format. Until 2009, the format used to represent the DDC was a proprietary one that predated the development of the MARC classification format. The need to replace the current editorial support system, the desire to deliver DDC data in a variety of formats to support different uses, and the increasingly global context of editorial work with translation partners around the world prompted the Dewey editorial team, along with OCLC research and development colleagues, to rethink the underlying representation of the DDC and choose the MARC 21 formats for classification and authority data. The discussion is framed with quotes from the writings of Nancy J. Williamson, whose analysis of the content of the Library of Congress Classification (LCC) schedules played a key role in shaping the original MARC classification format.
    Object
    MARC for Classification Data
  10. Boehr, D.L.; Bushman, B.: Preparing for the future : National Library of Medicine's® project to add MeSH® RDF URIs to its bibliographic and authority records (2018) 0.01
    Abstract
    Although it is not yet known for certain what will replace MARC, bibliographic data will eventually need to be transformed to move into a linked data environment. This article discusses why the National Library of Medicine chose to add Uniform Resource Identifiers for Medical Subject Headings as its starting point and details the process by which they were added to the MeSH MARC authority records, the legacy bibliographic records, and the records for newly cataloged items. The article outlines the various enhancement methods available, the decisions made, and the rationale for the selected method.
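    The kind of enhancement described can be pictured with a short pymarc sketch (pymarc >= 5 API assumed). The subfield used here ($1, for a real-world-object URI) and the sample heading are illustrative assumptions, not a statement of NLM's exact method:

      from pymarc import Field, Record, Subfield

      record = Record()
      record.add_field(
          Field(
              tag="650",
              indicators=[" ", "2"],  # second indicator 2 = MeSH heading
              subfields=[
                  Subfield(code="a", value="Neoplasms"),
                  # MeSH RDF URI pattern; D009369 is the descriptor for Neoplasms.
                  Subfield(code="1", value="https://id.nlm.nih.gov/mesh/D009369"),
              ],
          )
      )
      print(record)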
  11. Chen, Y.-N.: An FRBR-based approach for transforming MARC records into linked data (2018) 0.01
  12. BIBFRAME Model Overview (2013) 0.01
    Abstract
    The Bibliographic Framework Transition Initiative is an undertaking by the Library of Congress and the community to better accommodate future needs of the library community. A major focus of the initiative will be to determine a transition path for the MARC 21 exchange format to more Web-based, Linked Data standards. Zepheira and the Library of Congress are working together to develop a Linked Data model, vocabulary and enabling tools / services for supporting this Initiative. BIBFRAME.ORG is a central hub for this effort.
  13. Aalberg, T.; Zumer, M.: The value of MARC data, or, challenges of frbrisation (2013) 0.01
    Abstract
    Purpose - Bibliographic records should now be used in innovative end-user applications that enable users to learn about, discover and exploit available content, and this information should be interpreted and reused beyond the library domain. New conceptual models such as FRBR offer the foundation for such developments. The main motivation for this research is to contribute to the adoption of the FRBR model in future bibliographic standards and systems by analysing limitations in existing bibliographic information and looking for short- and long-term solutions that can improve data quality in terms of expressing the FRBR model.
    Design/methodology/approach - MARC records in three collections (the BIBSYS catalogue, the Slovenian National Bibliography and the BTJ catalogue) were first analysed by looking at statistics of field and subfield usage to determine common patterns that express FRBR. Based on this, different rules for interpreting the information were developed. Finally, typical problems and errors found in MARC records were analysed.
    Findings - Different types of FRBR entity-relationship structures that can typically be found in bibliographic records are identified, and problems related to interpreting these from bibliographic records are analysed. FRBRisation of consistent and complete MARC records is relatively successful, particularly if all entities are systematically described and the relationships among them are clearly indicated.
    Research limitations/implications - Advanced matching was not used for clustering identical entities.
    Practical implications - Cataloguing guidelines are proposed to enable better FRBRisation of MARC records in the interim period, before new formats are developed and implemented.
    Originality/value - This is the first in-depth analysis of manifestations embodying several expressions and of works and agents as subjects.
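    The field/subfield usage statistics in the first step can be sketched in a few lines of Python with pymarc (pymarc >= 5 assumed; "records.mrc" is a hypothetical input file, not one of the collections studied):

      from collections import Counter
      from pymarc import MARCReader

      usage = Counter()
      with open("records.mrc", "rb") as fh:
          for record in MARCReader(fh):
              if record is None:  # skip records pymarc could not parse
                  continue
              for field in record.fields:
                  if field.is_control_field():
                      usage[field.tag] += 1
                  else:
                      for sub in field.subfields:
                          usage[f"{field.tag}${sub.code}"] += 1

      # Frequent combinations hint at patterns that express FRBR entities
      # and relationships (e.g. 130, 240, 700 $t).
      for key, count in usage.most_common(20):
          print(key, count)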
  14. Manguinhas, H.; Freire, N.; Machado, J.; Borbinha, J.: Supporting multilingual bibliographic resource discovery with Functional Requirements for Bibliographic Records (2012) 0.01
    Abstract
    This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve the user experience. A specialized service was implemented that, when given a plain list of results from a regular online catalogue, was able to process, enrich and present that list in a more relevant way for the user. This service pre-processes the records of a traditional online catalogue in order to build a semantic structure following the FRBR model. The service also explores web search features that have been revolutionizing the way users conceptualize resource discovery, such as relevance ranking and metasearching. This work was developed in the context of the TELPlus project. We processed nearly one hundred thousand bibliographic and authority records, in multiple languages, and originating from twelve European national libraries. This paper describes the architecture of the service and the main challenges faced, especially concerning the extraction and linking of the relevant FRBR entities from the bibliographic metadata produced by the libraries. The service was evaluated by end users, who filled out a questionnaire after using a traditional online catalogue and the new service, both with the same bibliographic collection. The analysis of the results supports the hypothesis that FRBR can be implemented for resource discovery in a non-intrusive way, reusing the data of any existing traditional bibliographic system.
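    The FRBRisation pre-processing step - grouping catalogue records into candidate FRBR works before ranking and display - can be sketched as clustering by a normalised creator/title key. This Python/pymarc sketch (pymarc >= 5) is a deliberate simplification; the TELPlus service's actual extraction and linking logic was considerably more involved, and "records.mrc" is a hypothetical file name:

      import re
      from collections import defaultdict
      from pymarc import MARCReader

      def work_key(record):
          """Crude work-level clustering key from author and title."""
          norm = lambda s: re.sub(r"[^a-z0-9 ]", "", (s or "").lower()).strip()
          return (norm(record.author), norm(record.title))  # properties in pymarc >= 5

      clusters = defaultdict(list)
      with open("records.mrc", "rb") as fh:
          for record in MARCReader(fh):
              if record is not None:
                  clusters[work_key(record)].append(record)

      print(f"{len(clusters)} candidate FRBR works")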
  15. Boiger, W.: Entwicklung und Implementierung eines MARC21-MARCXML-Konverters in der Programmiersprache Perl (2015) 0.01
    Abstract
    The shared catalogue of the Bavarian Library Network (Bibliotheksverbund Bayern) and the Cooperative Library Network Berlin-Brandenburg (B3Kat) currently contains about 25.6 million title records. The Bavarian network's head office has published this data on its website since 2011 as part of the Bavarian open data initiative. Reusers of the data include the Deutsche Digitale Bibliothek and the DNB's Culturegraph project. The data is published in the widely used catalogue data format MARCXML. Until 2014 the head office used the Windows software MarcEdit to generate the XML files. In early 2015, as part of the Bavarian library traineeship, the author developed a simple MARC 21-to-MARCXML converter in Perl that considerably simplifies the conversion and makes the use of MarcEdit at the head office unnecessary. This thesis, written alongside the converter, first motivates the need for a Perl implementation, then discusses the bibliographic data formats MARC 21 and MARCXML and explains the properties that matter for the conversion, and finally describes the structure of the converter in detail. The Perl implementation itself is part of the thesis. Use, distribution and modification of the software are permitted under the terms of the GNU Affero General Public License, either version 3 of the license or (at your option) any later version. [The file with the Perl implementation can be found in the right-hand column under Article Tools, item Additional Files.]
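    The thesis's Perl implementation is not reproduced here. For illustration only, the equivalent MARC 21 to MARCXML conversion can be sketched in Python with pymarc ("input.mrc" and "output.xml" are hypothetical file names; this is not the author's code):

      from pymarc import MARCReader, XMLWriter

      with open("input.mrc", "rb") as infile, open("output.xml", "wb") as outfile:
          writer = XMLWriter(outfile)
          for record in MARCReader(infile):
              if record is not None:  # skip records pymarc could not parse
                  writer.write(record)
          writer.close()  # emits the closing collection element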
  16. Aslanidi, M.; Papadakis, I.; Stefanidakis, M.: Name and title authorities in the music domain : alignment of UNIMARC authorities format with RDA (2018) 0.01
    Date
    19. 3.2019 12:17:22