Search (25 results, page 1 of 2)

  • theme_ss:"Datenformate"
  • year_i:[2010 TO 2020}
  1. Suominen, O.; Hyvönen, N.: From MARC silos to Linked Data silos? (2017) 0.03
    0.026028758 = product of:
      0.052057516 = sum of:
        0.05064729 = weight(_text_:von in 3732) [ClassicSimilarity], result of:
          0.05064729 = score(doc=3732,freq=10.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.39547473 = fieldWeight in 3732, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.046875 = fieldNorm(doc=3732)
        0.001410227 = product of:
          0.004230681 = sum of:
            0.004230681 = weight(_text_:a in 3732) [ClassicSimilarity], result of:
              0.004230681 = score(doc=3732,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.07643694 = fieldWeight in 3732, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3732)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
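
     The indented breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring: each term clause contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm, and coord(m/n) scales the sum by the fraction of query clauses matched. A minimal sketch reproducing the "von" clause from the tree above (Python is used purely for illustration; Lucene itself is Java):

        import math

        def classic_term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
            """One term clause under Lucene ClassicSimilarity (TF-IDF)."""
            tf = math.sqrt(freq)                             # tf = sqrt(termFreq)
            idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
            query_weight = idf * query_norm                  # queryWeight
            field_weight = tf * idf * field_norm             # fieldWeight
            return query_weight * field_weight

        # Values taken from the explain tree above (doc 3732, term "von"):
        w_von = classic_term_weight(10.0, 8340, 44218, 0.04800207, 0.046875)
        print(f"{w_von:.8f}")  # ~0.05064729, matching weight(_text_:von in 3732)

        # The final score applies coord(2/4) = 0.5 to the sum of the clauses:
        # 0.5 * (0.05064729 + 0.001410227) ~= 0.026028758
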
    
    Abstract
     For some time now, libraries have increasingly been making their bibliographic metadata openly available as Linked Data. However, quite different models are being used to structure these bibliographic data. Some libraries use a model based on FRBR with several layers of entities, while others use flat, record-oriented models. This proliferation of data models makes the bibliographic data harder to reuse. The result is that libraries have merely traded their former MARC silos for mutually incompatible Linked Data silos, which often makes it difficult to combine and reuse data sets. Minor differences in data modelling can be handled with schema mappings, but it seems questionable whether interoperability has increased overall. The paper presents the results of a study of several published sets of bibliographic data. It also examines the different models for representing bibliographic data as RDF, as well as tools for generating such data from the MARC format. Finally, it discusses the approach taken by the National Library of Finland.
    Type
    a
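
     To make the abstract's "Linked Data silos" point concrete: the same book can be published as RDF in a flat, record-oriented shape or in an FRBR-style layered shape, and a consumer must map one onto the other before the data sets can be combined. A hypothetical sketch with rdflib (namespaces and identifiers are invented for illustration, not taken from the study):

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DC, RDF

        EX = Namespace("http://example.org/")         # hypothetical namespace
        FRBR = Namespace("http://example.org/frbr/")  # stand-in FRBR vocabulary

        # Flat, record-oriented model: one node carries everything.
        flat = Graph()
        rec = URIRef("http://example.org/records/1")
        flat.add((rec, DC.title, Literal("Der Process")))
        flat.add((rec, DC.creator, Literal("Kafka, Franz")))

        # FRBR-style layered model: work and manifestation are separate nodes.
        layered = Graph()
        work, manif = EX["work/1"], EX["manifestation/1"]
        layered.add((work, RDF.type, FRBR.Work))
        layered.add((work, DC.title, Literal("Der Process")))
        layered.add((manif, RDF.type, FRBR.Manifestation))
        layered.add((manif, FRBR.embodimentOf, work))

        # Same book, two incompatible shapes: combining the data sets first
        # requires a schema mapping from one model onto the other.
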
  2. Mensing, P.: Planung und Durchführung von Digitalisierungsprojekten am Beispiel nicht-textueller Materialien (2010) 0.02
    0.023296196 = product of:
      0.046592392 = sum of:
        0.04576976 = weight(_text_:von in 3577) [ClassicSimilarity], result of:
          0.04576976 = score(doc=3577,freq=24.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.357389 = fieldWeight in 3577, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3577)
        8.2263234E-4 = product of:
          0.002467897 = sum of:
            0.002467897 = weight(_text_:a in 3577) [ClassicSimilarity], result of:
              0.002467897 = score(doc=3577,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.044588212 = fieldWeight in 3577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3577)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
     In 2007 the GWLB, together with the HAAB, the Universitätsbibliothek Johann Christian Senckenberg and several foundations, acquired the Königliche Gartenbibliothek Herrenhausen. Besides textual materials, the library also contains many non-textual materials such as herbaria, drawings, and gouaches depicting fruit varieties from the former orchard. These gouaches are kept in portfolios at the GWLB. The individual sheets bear no titles but are numbered in pencil along the lower edge. Without the accompanying, likewise numbered, list of variety names no unambiguous identification is possible, so their use is severely limited. A digital presentation therefore suggests itself, for "unhindered access to scholarly relevant electronic publications from any place and at any time plays an ever more important role in the digital information society." Or, to put it more drastically: "What is not on the web is not in the world." Before digitisation can begin, however, several questions have to be settled in advance. The following discusses the criteria to be observed in planning and carrying out digitisation projects.
    Content
     Includes: "2.7 Cataloguing the digitised items: In Germany, descriptive cataloguing of printed holdings follows RAK-WB and RAK-ÖB. Unlike printed works, which usually contain all the essential information themselves (imprint), works of art and images rarely carry statements such as author, artist, or year of creation. Descriptive cataloguing of non-book materials in Germany follows the 'Regeln für die alphabetische Katalogisierung von Nichtbuchmaterialien' (RAK-NBM), an extension of the aforementioned RAK. For cataloguing works of art, the Marburger Index database (MIDAS), which builds on the AKL, ICONCLASS, and also RAK, has been developed since the 1970s. MIDAS is used chiefly in museums but, because its use was never binding, it failed to establish itself widely. Also from the museum sector come CIDOC CRM, ISO-certified since 2006 (ISO 21127:2006), and the Datenfeldkatalog zur Grundinventarisation. To standardise subject cataloguing of library holdings, the Schlagwortnormdatei was developed. This file is universal in scope and therefore not worked out deeply enough for special fields; in art history the AA and the AGM, among others, are therefore also important. ICONCLASS is available as a classification system. In subject cataloguing, care must be taken that irrelevant information does not needlessly inflate the catalogue. To keep cataloguing consistently user-oriented, the desired priorities should be laid down in a written policy. For the interpretation of images, Panofsky developed a three-stage model comprising pre-iconographic description, iconographic description, and iconological interpretation. In the first stage only the depicted objects or persons are sketched, without interpreting their relationships to one another; that happens only in the second stage, where the subject of the work is named, though without further interpretation. The third stage finally clarifies why the work was created this way and not otherwise."
    Type
    a
  3. Barckow, A.: Bücherhallen Hamburg stellten auf MARC21 um : ambitioniertes Projekt realisiert / Einführung von GND und RDA bereits in Arbeit (2012) 0.02
    0.020537097 = product of:
      0.041074194 = sum of:
        0.03775026 = weight(_text_:von in 148) [ClassicSimilarity], result of:
          0.03775026 = score(doc=148,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.29476947 = fieldWeight in 148, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.078125 = fieldNorm(doc=148)
        0.0033239368 = product of:
          0.0099718105 = sum of:
            0.0099718105 = weight(_text_:a in 148) [ClassicSimilarity], result of:
              0.0099718105 = score(doc=148,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.18016359 = fieldWeight in 148, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=148)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Type
    a
  4. Springer: Neues Online-Tool zum Herunterladen (2011) 0.02
    0.019508056 = product of:
      0.039016113 = sum of:
        0.03737085 = weight(_text_:von in 4716) [ClassicSimilarity], result of:
          0.03737085 = score(doc=4716,freq=4.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.29180688 = fieldWeight in 4716, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4716)
        0.0016452647 = product of:
          0.004935794 = sum of:
            0.004935794 = weight(_text_:a in 4716) [ClassicSimilarity], result of:
              0.004935794 = score(doc=4716,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.089176424 = fieldWeight in 4716, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4716)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Content
    "Mit nur wenigen Klicks können die einzelnen Listen der fachgebietsbezogenen Springer eBook-Sammlungen mit den gewünschten bibliografischen Angaben heruntergeladen werden. Das neue Online-Tool ist unter www.springer.com/marc zu finden. Neben den MARC Records können die Bibliothekare die komplette oder eine individuell zusammengestellte Liste zu den Springer-eBooks auswählen. Jeder Eintrag einer solchen Liste enthält die wesentlichen bibliografischen Angaben sowie die URL zu dem entsprechenden Buch, das sich auf der Plattform von SpringerLink befindet. Springer ist der größte eBookVerlag im wissenschaftlichen STM-Bereich. Die Inhalte von Büchern und Zeitschriften werden den Nutzern über SpringerLink zur Verfügung gestellt."
    Type
    a
  5. Boiger, W.: Entwicklung und Implementierung eines MARC21-MARCXML-Konverters in der Programmiersprache Perl (2015) 0.01
    0.010025159 = product of:
      0.020050319 = sum of:
        0.01887513 = weight(_text_:von in 2466) [ClassicSimilarity], result of:
          0.01887513 = score(doc=2466,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.14738473 = fieldWeight in 2466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2466)
        0.001175189 = product of:
          0.003525567 = sum of:
            0.003525567 = weight(_text_:a in 2466) [ClassicSimilarity], result of:
              0.003525567 = score(doc=2466,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.06369744 = fieldWeight in 2466, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2466)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
     The shared catalogue of the Bibliotheksverbund Bayern and the Kooperativer Bibliotheksverbund Berlin-Brandenburg (B3Kat) currently contains about 25.6 million title records. The Bavarian Verbundzentrale has published these data on its website since 2011 as part of the Bavarian open data initiative; reusers include the Deutsche Digitale Bibliothek and the DNB's Culturegraph project. The data are published in the widespread catalogue data format MARCXML. Until 2014 the Verbundzentrale used the Windows software MarcEdit to generate the XML files. In early 2015, during his Bavarian library traineeship, the author developed a simple MARC-21-to-MARCXML converter in Perl that considerably simplifies the conversion and makes the use of MarcEdit at the Verbundzentrale unnecessary. This thesis, written alongside the converter, first motivates the need for a Perl implementation, then examines the bibliographic data formats MARC 21 and MARCXML and explains the properties essential for the conversion, and finally describes the structure of the converter in detail. The Perl implementation itself is part of the thesis. Use, distribution, and modification of the software are permitted under the terms of the GNU Affero General Public License, either version 3 of the License or (at your option) any later version. [The file with the Perl implementation can be found in the right-hand column under Artikelwerkzeuge, item Zusatzdateien.]
    Type
    a
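
     The converter described in the abstract above is written in Perl and ships with the thesis; it is not reproduced here. Purely as an illustration of the same MARC-21-to-MARCXML conversion, a sketch in Python with the pymarc library (file names assumed):

        from pymarc import MARCReader, record_to_xml

        # Read binary MARC 21 records and write MARCXML, wrapped in the
        # <collection> element of the MARC21 slim schema.
        with open("records.mrc", "rb") as src, open("records.xml", "wb") as dst:
            dst.write(b'<?xml version="1.0" encoding="UTF-8"?>\n')
            dst.write(b'<collection xmlns="http://www.loc.gov/MARC21/slim">\n')
            for record in MARCReader(src):
                dst.write(record_to_xml(record))  # one <record> per title
                dst.write(b"\n")
            dst.write(b"</collection>\n")
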
  6. Schaffner, V.: FRBR in MAB2 und Primo - ein kafkaesker Prozess? : Möglichkeiten der FRBRisierung von MAB2-Datensätzen in Primo exemplarisch dargestellt an Datensätzen zu Franz Kafkas "Der Process" (2011) 0.01
    0.009246888 = product of:
      0.03698755 = sum of:
        0.03698755 = weight(_text_:von in 907) [ClassicSimilarity], result of:
          0.03698755 = score(doc=907,freq=12.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.28881392 = fieldWeight in 907, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.03125 = fieldNorm(doc=907)
      0.25 = coord(1/4)
    
    Abstract
     FRBR (Functional Requirements for Bibliographic Records) is a logical model for bibliographic records that can be drawn on to make browsing in online library catalogues more user-friendly. In the Austrian library network (Österreichischer Bibliothekenverbund, OBV), bibliographic records are created according to the rules for alphabetical cataloguing in academic libraries (RAK-WB) and held in the MAB2 format (Maschinelles Austauschformat für Bibliotheken). Primo, the Ex Libris software implemented in 2009, makes it possible to re-render bibliographic records for display. The central question of this master's thesis is how a display in Primo that is as FRBR-compliant as possible can be produced from MAB2 data, and which problems arise in the process. This is shown using records from the Austrian library network for Franz Kafka's "Der Process". The focus is on three aspects that prove particularly problematic and worth discussing in connection with FRBR, MAB2, and Primo: the concept of the "work", expressions as entities usable in practice, and aggregates or finite multi-volume works. After an introduction to the FRBR model, an attempt is made to draw an ideal FRBR tree for Kafka's "Der Process" in its various forms (translations, film adaptations, textual variants, aggregates, etc.); even here the first limits of the model become visible. OBV records are then analysed to assess how well MAB2 lends itself to FRBR and what the FRBR keys in Primo can achieve. The following limitations became clear: current descriptive cataloguing practice is not prepared for FRBR, and the existing metadata are too inconsistent to allow machine extraction for an FRBR-compliant display. The work clustering and faceting in Primo do add value for browsing result lists, but only to a limited extent in the FRBR sense.
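
     The "FRBR keys" mentioned above are, at bottom, normalised author/title strings used to cluster records into works. A toy illustration (not Primo's actual algorithm) of such a key, and of why inconsistent metadata defeats the clustering:

        import re
        import unicodedata

        def work_key(author: str, title: str) -> str:
            """Naive FRBR-style work key: normalised author + title."""
            def norm(s: str) -> str:
                s = unicodedata.normalize("NFKD", s)
                s = "".join(c for c in s if not unicodedata.combining(c))
                return re.sub(r"[^a-z0-9]+", " ", s.lower()).strip()
            return norm(author) + "|" + norm(title)

        print(work_key("Kafka, Franz", "Der Process"))  # kafka franz|der process
        print(work_key("Kafka, Franz", "Der Prozeß"))   # kafka franz|der proze
        # The variant title yields a different key, so the two records fail
        # to cluster as one work - exactly the inconsistency problem the
        # thesis describes.
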
  7. Aslanidi, M.; Papadakis, I.; Stefanidakis, M.: Name and title authorities in the music domain : alignment of UNIMARC authorities format with RDA (2018) 0.01
    0.008750932 = product of:
      0.03500373 = sum of:
        0.03500373 = product of:
          0.052505594 = sum of:
            0.0069802674 = weight(_text_:a in 5178) [ClassicSimilarity], result of:
              0.0069802674 = score(doc=5178,freq=4.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.12611452 = fieldWeight in 5178, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5178)
            0.045525327 = weight(_text_:22 in 5178) [ClassicSimilarity], result of:
              0.045525327 = score(doc=5178,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.2708308 = fieldWeight in 5178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5178)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    This article discusses and highlights alignment issues that arise between UNIMARC Authorities Format and Resource Description and Access (RDA) regarding the creation of name and title authorities for musical works and creators. More specifically, RDA, as an implementation of the FRAD model, is compared with the UNIMARC Authorities Format (Updates 2012 and 2016) in an effort to highlight various cases where the discovery of equivalent fields between the two standards is not obvious. The study is envisioned as a first step in an ongoing process of working with the UNIMARC community throughout RDA's advancement and progression regarding the entities [musical] Work and Names.
    Date
    19. 3.2019 12:17:22
    Type
    a
  8. Lee, S.; Jacob, E.K.: ¬An integrated approach to metadata interoperability : construction of a conceptual structure between MARC and FRBR (2011) 0.01
    0.008230787 = product of:
      0.032923147 = sum of:
        0.032923147 = product of:
          0.049384717 = sum of:
            0.010363008 = weight(_text_:a in 302) [ClassicSimilarity], result of:
              0.010363008 = score(doc=302,freq=12.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.18723148 = fieldWeight in 302, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=302)
            0.039021708 = weight(_text_:22 in 302) [ClassicSimilarity], result of:
              0.039021708 = score(doc=302,freq=2.0), product of:
                0.16809508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04800207 = queryNorm
                0.23214069 = fieldWeight in 302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=302)
          0.6666667 = coord(2/3)
      0.25 = coord(1/4)
    
    Abstract
    Machine-Readable Cataloging (MARC) is currently the most broadly used bibliographic standard for encoding and exchanging bibliographic data. However, MARC may not fully support representation of the dynamic nature and semantics of digital resources because of its rigid and single-layered linear structure. The Functional Requirements for Bibliographic Records (FRBR) model, which is designed to overcome the problems of MARC, does not provide sufficient data elements and adopts a predetermined hierarchy. A flexible structure for bibliographic data with detailed data elements is needed. Integrating MARC format with the hierarchical structure of FRBR is one approach to meet this need. The purpose of this research is to propose an approach that can facilitate interoperability between MARC and FRBR by providing a conceptual structure that can function as a mediator between MARC data elements and FRBR attributes.
    Date
    10. 9.2000 17:38:22
    Type
    a
  9. Stephens, O.: Introduction to OpenRefine (2014) 0.00
    0.0010576702 = product of:
      0.004230681 = sum of:
        0.004230681 = product of:
          0.012692042 = sum of:
            0.012692042 = weight(_text_:a in 2884) [ClassicSimilarity], result of:
              0.012692042 = score(doc=2884,freq=18.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22931081 = fieldWeight in 2884, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2884)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     OpenRefine is described as a tool for working with 'messy' data - but what does this mean? It is probably easiest to describe the kinds of data OpenRefine is good at working with and the sorts of problems it can help you solve. OpenRefine is most useful where you have data in a simple tabular format but with internal inconsistencies either in data formats, or where data appears, or in terminology used. It can help you:
     - Get an overview of a data set
     - Resolve inconsistencies in a data set
     - Help you split data up into more granular parts
     - Match local data up to other data sets
     - Enhance a data set with data from other sources
     Some common scenarios might be:
     1. Where you want to know how many times a particular value appears in a column in your data.
     2. Where you want to know how values are distributed across your whole data set.
     3. Where you have a list of dates which are formatted in different ways, and want to change all the dates in the list to a single common date format.
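
     Scenario 3 maps naturally onto code. In OpenRefine itself this would be a GREL cell transform along the lines of toString(value.toDate(), 'yyyy-MM-dd'); a rough Python equivalent, assuming the python-dateutil package:

        from dateutil import parser  # pip install python-dateutil

        messy = ["3 March 2014", "2014-03-03", "03/03/14", "Mar 3, 2014"]

        # Normalise every variant to one common ISO date format.
        clean = [parser.parse(d).date().isoformat() for d in messy]
        print(clean)  # ['2014-03-03', '2014-03-03', '2014-03-03', '2014-03-03']
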
  10. Tell, B.: On MARC and natural text searching : a review of Pauline Cochrane's Thinking grafted onto a Swedish spy on library matters (2016) 0.00
    0.0010511212 = product of:
      0.0042044846 = sum of:
        0.0042044846 = product of:
          0.012613453 = sum of:
            0.012613453 = weight(_text_:a in 2698) [ClassicSimilarity], result of:
              0.012613453 = score(doc=2698,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.22789092 = fieldWeight in 2698, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2698)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Content
     Cf.: Tell, B.: On MARC and natural text searching: a review of Pauline Cochrane's inspirational thinking grafted onto a Swedish spy on library matters. In: Saving the time of the library user through subject access innovation: Papers in honor of Pauline Atherton Cochrane. Ed.: W.J. Wheeler. Urbana-Champaign, IL: Illinois University at Urbana-Champaign, Graduate School of Library and Information Science 2000. pp.46-58. Cf. DOI: 10.1080/01639374.2015.1116359.
    Type
    a
  11. Bernstein, S.: MARC reborn : migrating MARC fixed field metadata into the variable fields (2016) 0.00
    9.401512E-4 = product of:
      0.003760605 = sum of:
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 2631) [ClassicSimilarity], result of:
              0.011281814 = score(doc=2631,freq=8.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 2631, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2631)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Despite calls over the past decade and a half for MARC to be replaced with an encoding standard that is more in keeping with current metadata practices, the current standard has evolved in such a way as to render many of the arguments by those who call for its demise moot. A further revision to the standard is proposed to address the one remaining problem with MARC so as to allow it to better serve the needs of information seekers for years to come.
    Type
    a
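
     As an illustration of the kind of migration the abstract proposes, a sketch with pymarc that copies two 008 fixed-field values into a variable field. The 909 target field is hypothetical (the article does not prescribe one), and the sketch assumes pymarc 5, where subfields are (code, value) pairs:

        from pymarc import MARCReader, Field, Subfield

        with open("records.mrc", "rb") as fh:
            for record in MARCReader(fh):
                fixed = record["008"].data   # 40-character fixed field
                date1 = fixed[7:11]          # 008/07-10: Date 1
                lang = fixed[35:38]          # 008/35-37: language code
                # Hypothetical local 909 field carrying the migrated values:
                record.add_field(Field(
                    tag="909",
                    indicators=[" ", " "],
                    subfields=[Subfield("a", date1), Subfield("b", lang)],
                ))
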
  12. Miller, E.; Ogbuji, U.: Linked data design for the visible library (2015) 0.00
    9.327775E-4 = product of:
      0.00373111 = sum of:
        0.00373111 = product of:
          0.0111933295 = sum of:
            0.0111933295 = weight(_text_:a in 2773) [ClassicSimilarity], result of:
              0.0111933295 = score(doc=2773,freq=14.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20223314 = fieldWeight in 2773, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2773)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    In response to libraries' frustration over their rich resources being invisible on the web, Zepheira, at the request of the Library of Congress, created BIBFRAME, a bibliographic metadata framework for cataloging. The model replaces MARC records with linked data, promoting resource visibility through a rich network of links. In place of formal taxonomies, a small but extensible vocabulary streamlines metadata efforts. Rather than using a unique bibliographic record to describe one item, BIBFRAME draws on the Dublin Core and the Functional Requirements for Bibliographic Records (FRBR) to generate formalized descriptions of Work, Instance, Authority and Annotation as well as associations between items. Zepheira trains librarians to transform MARC records to BIBFRAME resources and adapt the vocabulary for specialized needs, while subject matter experts and technical experts manage content, site design and usability. With a different approach toward data modeling and metadata, previously invisible resources gain visibility through linking.
    Footnote
     Contribution to a special section "Linked data and the charm of weak semantics".
    Type
    a
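
     A minimal sketch of the Work/Instance linking described above, using rdflib. The identifiers are invented, and the namespace used here is the current BIBFRAME 2.0 one from the Library of Congress; the 2015 article itself predates that vocabulary:

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS

        BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
        EX = Namespace("http://example.org/")  # hypothetical identifiers

        g = Graph()
        g.bind("bf", BF)
        work = EX["work/kafka-process"]
        inst = EX["instance/kafka-process-1925"]
        g.add((work, RDF.type, BF.Work))
        g.add((work, RDFS.label, Literal("Kafka, Franz. Der Process")))
        g.add((inst, RDF.type, BF.Instance))
        g.add((inst, BF.instanceOf, work))  # the link that replaces the
                                            # self-contained MARC record
        print(g.serialize(format="turtle"))
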
  13. Manguinhas, H.; Freire, N.; Machado, J.; Borbinha, J.: Supporting multilingual bibliographic resource discovery with Functional Requirements for Bibliographic Records (2012) 0.00
    9.2906854E-4 = product of:
      0.0037162742 = sum of:
        0.0037162742 = product of:
          0.0111488225 = sum of:
            0.0111488225 = weight(_text_:a in 133) [ClassicSimilarity], result of:
              0.0111488225 = score(doc=133,freq=20.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20142901 = fieldWeight in 133, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=133)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve the user experience. A specialized service was implemented that, when given a plain list of results from a regular online catalogue, was able to process, enrich and present that list in a more relevant way for the user. This service pre-processes the records of a traditional online catalogue in order to build a semantic structure following the FRBR model. The service also explores web search features that have been revolutionizing the way users conceptualize resource discovery, such as relevance ranking and metasearching. This work was developed in the context of the TELPlus project. We processed nearly one hundred thousand bibliographic and authority records, in multiple languages, and originating from twelve European national libraries. This paper describes the architecture of the service and the main challenges faced, especially concerning the extraction and linking of the relevant FRBR entities from the bibliographic metadata produced by the libraries. The service was evaluated by end users, who filled out a questionnaire after using a traditional online catalogue and the new service, both with the same bibliographic collection. The analysis of the results supports the hypothesis that FRBR can be implemented for resource discovery in a non-intrusive way, reusing the data of any existing traditional bibliographic system.
    Type
    a
  14. Beall, J.; Mitchell, J.S.: History of the representation of the DDC in the MARC Classification Format (2010) 0.00
    7.196534E-4 = product of:
      0.0028786135 = sum of:
        0.0028786135 = product of:
          0.00863584 = sum of:
            0.00863584 = weight(_text_:a in 3568) [ClassicSimilarity], result of:
              0.00863584 = score(doc=3568,freq=12.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.15602624 = fieldWeight in 3568, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3568)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This article explores the history of the representation of the Dewey Decimal Classification (DDC) in the Machine Readable Cataloging (MARC) formats, with a special emphasis on the development of the MARC classification format. Until 2009, the format used to represent the DDC has been a proprietary one that predated the development of the MARC classification format. The need to replace the current editorial support system, the desire to deliver DDC data in a variety of formats to support different uses, and the increasingly global context of editorial work with translation partners around the world prompted the Dewey editorial team, along with OCLC research and development colleagues, to rethink the underlying representation of the DDC and choose the MARC 21 formats for classification and authority data. The discussion is framed with quotes from the writings of Nancy J. Williamson, whose analysis of the content of the Library of Congress Classification (LCC) schedules played a key role in shaping the original MARC classification format.
    Footnote
     Contribution to a special issue: Is there a catalog in your future? Celebrating Nancy J. Williamson: Scholar, educator, colleague, mentor
    Type
    a
  15. Galvão, R.M.: UNIMARC format relevance : maintenance or replacement? (2018) 0.00
    7.1242056E-4 = product of:
      0.0028496822 = sum of:
        0.0028496822 = product of:
          0.008549047 = sum of:
            0.008549047 = weight(_text_:a in 5163) [ClassicSimilarity], result of:
              0.008549047 = score(doc=5163,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.1544581 = fieldWeight in 5163, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5163)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    This article presents an empirical study focused on a qualitative analysis of the UNIMARC format. An analysis of the structural quality of the data provided by the format is evaluated to determine its current suitability for meeting the requirements and trends in data architecture for the information network and the Semantic Web. Driven by a set of quality characteristics that identify weaknesses in the data schema that cannot be bridged by simply converting data to MARC XML or RDF/XML, we conclude that the UNIMARC format is not compliant with the current metadata schema desiderata and must be replaced.
    Type
    a
  16. Tosaka, Y.; Park, J.-r.: RDA: Resource description & access : a survey of the current state of the art (2013) 0.00
    6.569507E-4 = product of:
      0.0026278028 = sum of:
        0.0026278028 = product of:
          0.007883408 = sum of:
            0.007883408 = weight(_text_:a in 677) [ClassicSimilarity], result of:
              0.007883408 = score(doc=677,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.14243183 = fieldWeight in 677, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=677)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Resource Description & Access (RDA) is intended to provide a flexible and extensible framework that can accommodate all types of content and media within rapidly evolving digital environments while also maintaining compatibility with the Anglo-American Cataloguing Rules, 2nd edition (AACR2). The cataloging community is grappling with practical issues in navigating the transition from AACR2 to RDA; there is a definite need to evaluate major subject areas and broader themes in information organization under the new RDA paradigm. This article aims to accomplish this task through a thorough and critical review of the emerging RDA literature published from 2005 to 2011. The review mostly concerns key areas of difference between RDA and AACR2, the relationship of the new cataloging code to metadata standards, the impact on encoding standards such as Machine-Readable Cataloging (MARC), end user considerations, and practitioners' views on RDA implementation and training. Future research will require more in-depth studies of RDA's expected benefits and the manner in which the new cataloging code will improve resource retrieval and bibliographic control for users and catalogers alike over AACR2. The question as to how the cataloging community can best move forward to the post-AACR2/MARC environment must be addressed carefully so as to chart the future of bibliographic control in the evolving environment of information production, management, and use.
    Type
    a
  17. BIBFRAME Model Overview (2013) 0.00
    6.569507E-4 = product of:
      0.0026278028 = sum of:
        0.0026278028 = product of:
          0.007883408 = sum of:
            0.007883408 = weight(_text_:a in 763) [ClassicSimilarity], result of:
              0.007883408 = score(doc=763,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.14243183 = fieldWeight in 763, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=763)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     The Bibliographic Framework Transition Initiative is an undertaking by the Library of Congress and the community to better accommodate future needs of the library community. A major focus of the initiative is to determine a transition path from the MARC 21 exchange format to more Web-based, Linked Data standards. Zepheira and the Library of Congress are working together to develop a Linked Data model, vocabulary and enabling tools and services to support this initiative. BIBFRAME.ORG is a central hub for this effort.
    Content
     Cf. Eversberg's comment: Anyone who wants to keep up with the times, and especially with the quickening evolution of a new data format concept, had better hurry to get acquainted with BIBFRAME: http://bibframe.org. This start page now organises access to everything that is already available and presentable, and that is quite a lot. If you first just want to browse and see what BIBFRAME data look like, go to the "demonstration area", where among other things you will find prepared data from the DNB. There are also online tools, including a "Transformation service" to which you can submit your own MARC-XML to see what it makes of it. [Exports with our MARCXML.APR do not work out of the box; at minimum you must activate the two header lines already present in the file and append </collection> at the end. Hierarchical records still cause problems that we shall have to look into.] And if you are now thinking "What does all this have to do with us?", read the last line, which says: "BIBFRAME.ORG is a collaborative effort of US Library of Congress, Zepheira and you!"
  18. Xu, A.; Hess, K.; Akerman, L.: From MARC to BIBFRAME 2.0 : Crosswalks (2018) 0.00
    6.569507E-4 = product of:
      0.0026278028 = sum of:
        0.0026278028 = product of:
          0.007883408 = sum of:
            0.007883408 = weight(_text_:a in 5172) [ClassicSimilarity], result of:
              0.007883408 = score(doc=5172,freq=10.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.14243183 = fieldWeight in 5172, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5172)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
     One of the big challenges facing academic libraries today is to increase their relevance to their user communities. If libraries can increase the visibility of their resources on the open web, they improve their chances of reaching their user communities at the user's first search experience. BIBFRAME and library Linked Data will enable libraries to publish their resources in a way that the Web understands, to consume Linked Data to enrich their resources with material relevant to their user communities, and to visualize networks across collections. However, one of the important steps in transitioning to BIBFRAME and library Linked Data involves crosswalks: mapping MARC fields and subfields across data models and reformatting the data as necessary to comply with the specifications of the new model, currently BIBFRAME 2.0. This article looks at how the Library of Congress has mapped library bibliographic data from the MARC format to the BIBFRAME 2.0 model and vocabulary (published and updated since April 2016, available from http://www.loc.gov/bibframe/docs/index.html), based on the recently released conversion specifications and converter developed by the Library of Congress with input from many community members. The BIBFRAME 2.0 standard and conversion tools will enable libraries to transform bibliographic data from MARC into BIBFRAME 2.0, which introduces a Linked Data model as the improved method of bibliographic control for the future, and to make bibliographic information more useful within and beyond library communities.
    Footnote
     Contribution to an issue: 'Setting standards to work and live by: A memorial Festschrift for Valerie Bross'.
    Type
    a
  19. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; Van de Sompel, H.: ¬The Europeana Data Model (EDM) (2010) 0.00
    6.106462E-4 = product of:
      0.0024425848 = sum of:
        0.0024425848 = product of:
          0.007327754 = sum of:
            0.007327754 = weight(_text_:a in 3967) [ClassicSimilarity], result of:
              0.007327754 = score(doc=3967,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.13239266 = fieldWeight in 3967, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3967)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    The Europeana Data Model (EDM) is a new approach towards structuring and representing data delivered to Europeana by the various contributing cultural heritage institutions. The model aims at greater expressivity and flexibility in comparison to the current Europeana Semantic Elements (ESE), which it is destined to replace. The design principles underlying the EDM are based on the core principles and best practices of the Semantic Web and Linked Data efforts to which Europeana wants to contribute. The model itself builds upon established standards like RDF(S), OAI-ORE, SKOS, and Dublin Core. It acts as a common top-level ontology which retains original data models and information perspectives while at the same time enabling interoperability. The paper elaborates on the aforementioned aspects and the design principles which drove the development of the EDM.
  20. Boehr, D.L.; Bushman, B.: Preparing for the future : National Library of Medicine's® project to add MeSH® RDF URIs to its bibliographic and authority records (2018) 0.00
    6.106462E-4 = product of:
      0.0024425848 = sum of:
        0.0024425848 = product of:
          0.007327754 = sum of:
            0.007327754 = weight(_text_:a in 5173) [ClassicSimilarity], result of:
              0.007327754 = score(doc=5173,freq=6.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.13239266 = fieldWeight in 5173, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5173)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Although it is not yet known for certain what will replace MARC, eventually bibliographic data will need to be transformed to move into a linked data environment. This article discusses why the National Library of Medicine chose to add Uniform Resource Identifiers for Medical Subject Headings as our starting point and details the process by which they were added to the MeSH MARC authority records, the legacy bibliographic records, and the records for newly cataloged items. The article outlines the various enhancement methods available, decisions made, and the rationale for the selected method.
    Footnote
     Contribution to an issue: 'Setting standards to work and live by: A memorial Festschrift for Valerie Bross'.
    Type
    a
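
     A sketch of the kind of record enhancement the abstract describes, with pymarc: look up each MeSH heading's RDF URI and add it to the field. The use of subfield $1 (real-world-object URI) and the tiny lookup table are assumptions for illustration, not details taken from the article:

        from pymarc import MARCReader

        MESH_URIS = {  # excerpt of a heading-to-URI lookup (assumed)
            "Neoplasms": "http://id.nlm.nih.gov/mesh/D009369",
        }

        with open("records.mrc", "rb") as fh:
            for record in MARCReader(fh):
                for field in record.get_fields("650"):
                    if field.indicator2 == "2":  # 2nd indicator 2 = MeSH
                        uri = MESH_URIS.get(field["a"])
                        if uri:
                            field.add_subfield("1", uri)  # assumed subfield
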