Search (14 results, page 1 of 1)

  • × theme_ss:"Metadaten"
  • × type_ss:"el"
  • × year_i:[2010 TO 2020}
  1. Bohne-Lang, A.: Semantische Metadaten für den Webauftritt einer Bibliothek (2016) 0.02
    0.01660582 = product of:
      0.07749383 = sum of:
        0.02465703 = weight(_text_:web in 3337) [ClassicSimilarity], result of:
          0.02465703 = score(doc=3337,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 3337, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3337)
        0.047791965 = weight(_text_:bibliothek in 3337) [ClassicSimilarity], result of:
          0.047791965 = score(doc=3337,freq=6.0), product of:
            0.121660605 = queryWeight, product of:
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.029633347 = queryNorm
            0.39283025 = fieldWeight in 3337, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.1055303 = idf(docFreq=1980, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3337)
        0.0050448296 = weight(_text_:information in 3337) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=3337,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 3337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3337)
      0.21428572 = coord(3/14)
    
    Abstract
    The Semantic Web has received much attention for well over ten years and, with the availability of the Resource Description Framework (RDF) and the corresponding ontologies, has made a great leap into practice. In everyday work, however, staff of small libraries and librarians with little technical affinity face high hurdles, for instance the question of how to integrate this technology concretely into their own web presence: one feels like Don Quixote tilting at windmills. RDF with its ontologies is almost incomprehensibly complex for non-computer-scientists and therefore not directly usable for broad practical deployment on library websites. Schema.org, a simple and effective semantic description of entities, was originally developed by the world's three largest search engines, Google, Bing and Yahoo. Schema.org is currently sponsored by Google, Microsoft, Yahoo and Yandex and is understood by many other search engines. Against this background, the library of the Medical Faculty Mannheim has embedded various machine-readable semantic metadata in its homepage (http://www.umm.uni-heidelberg.de/bibl/). Particularly interesting and forward-looking is the latest development of Schema.org, which makes it possible to model a 'Library' (https://schema.org/Library) with opening hours and much more. In addition, we have embedded semantic metadata in the Open Graph and Dublin Core formats in order to make older standards and Facebook-compliant information available in machine-readable form.
    Source
    GMS Medizin - Bibliothek - Information. 16(2016) Nr.3, 11 S. [http://www.egms.de/static/pdf/journals/mbi/2017-16/mbi000372.pdf]
    Theme
    Semantic Web
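    A minimal sketch of the kind of embedding described above, with placeholder values rather than the Mannheim library's actual metadata: the Python snippet below assembles a schema.org 'Library' description with opening hours as JSON-LD and prints the script tag that would go into the page header.

```python
import json

# Illustrative schema.org/Library description; the values are
# placeholders, not the Mannheim library's actual metadata.
library = {
    "@context": "https://schema.org",
    "@type": "Library",
    "name": "Bibliothek der Medizinischen Fakultaet Mannheim",
    "url": "http://www.umm.uni-heidelberg.de/bibl/",
    "openingHours": ["Mo-Fr 08:00-22:00", "Sa 09:00-18:00"],
}

# JSON-LD is conventionally embedded in a <script> tag so that
# crawlers can read it without it being rendered for visitors.
print('<script type="application/ld+json">')
print(json.dumps(library, indent=2))
print("</script>")
```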
  2. What is Schema.org? (2011) 0.01
    0.00639995 = product of:
      0.04479965 = sum of:
        0.036238287 = weight(_text_:web in 4437) [ClassicSimilarity], result of:
          0.036238287 = score(doc=4437,freq=6.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.37471575 = fieldWeight in 4437, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4437)
        0.00856136 = weight(_text_:information in 4437) [ClassicSimilarity], result of:
          0.00856136 = score(doc=4437,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 4437, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4437)
      0.14285715 = coord(2/14)
    
    Abstract
    This site provides a collection of schemas, i.e., HTML tags, that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines including Bing, Google and Yahoo! rely on this markup to improve the display of search results, making it easier for people to find the right web pages. Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results, making it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. So, in the spirit of sitemaps.org, Bing, Google and Yahoo! have come together to provide a shared collection of schemas that webmasters can use.
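    As a hedged illustration of the round trip from structured data to marked-up HTML, the sketch below renders an invented database-style record twice: once as plain HTML, from which the field boundaries are hard to recover, and once with schema.org microdata attributes that keep the structure machine-readable.

```python
# Hypothetical record, e.g. a row from a database of events.
event = {"name": "Metadata Workshop", "startDate": "2011-06-02", "location": "Berlin"}

# Plain HTML: the field structure is lost once the page is rendered.
plain = "<p>{name}, {startDate}, {location}</p>".format(**event)

# Microdata-annotated HTML: each value carries a schema.org property,
# so a crawler can recover the original structure.
marked_up = (
    '<p itemscope itemtype="https://schema.org/Event">'
    '<span itemprop="name">{name}</span>, '
    '<time itemprop="startDate" datetime="{startDate}">{startDate}</time>, '
    '<span itemprop="location">{location}</span>'
    "</p>"
).format(**event)

print(plain)
print(marked_up)
```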
  3. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.01
    0.0062901457 = product of:
      0.044031017 = sum of:
        0.038986187 = weight(_text_:web in 3895) [ClassicSimilarity], result of:
          0.038986187 = score(doc=3895,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.40312994 = fieldWeight in 3895, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
        0.0050448296 = weight(_text_:information in 3895) [ClassicSimilarity], result of:
          0.0050448296 = score(doc=3895,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.09697737 = fieldWeight in 3895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
      0.14285715 = coord(2/14)
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-sized publishers, research institutes or funding agencies. As data curation is a task typically done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. We therefore present a web scraping tool that does not demand that digital library curators program custom web scrapers from scratch: the open-source tool OXPath, an extension of XPath, which allows the user to define the data to be extracted from websites in a declarative way. Taking one of our own use cases as an example, we walk in detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
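    OXPath is its own language, so the following Python sketch is only a rough analogue of what an OXPath wrapper does: it applies declarative XPath expressions (via lxml) to a result page to pull out title and author fields, which is the style of extraction OXPath builds on, adding form filling, clicking and pagination. The URL and the XPath expressions are invented for the sketch.

```python
import requests
from lxml import html

# Hypothetical publisher listing page; URL and markup are invented.
page = requests.get("https://publisher.example.org/articles")
tree = html.fromstring(page.content)

records = []
for item in tree.xpath('//div[@class="article"]'):
    # Each field is described declaratively by an XPath expression,
    # which is the extraction style that OXPath builds on.
    records.append({
        "title": item.xpath("string(.//h2)").strip(),
        "authors": [a.strip() for a in item.xpath('.//span[@class="author"]/text()')],
    })

print(records)
```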
  4. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.01
    0.0056086686 = product of:
      0.039260678 = sum of:
        0.03118895 = weight(_text_:web in 3965) [ClassicSimilarity], result of:
          0.03118895 = score(doc=3965,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.32250395 = fieldWeight in 3965, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3965)
        0.008071727 = weight(_text_:information in 3965) [ClassicSimilarity], result of:
          0.008071727 = score(doc=3965,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1551638 = fieldWeight in 3965, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3965)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD) along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entity-relationship model of FRBR in Resource Description Framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organisation System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA - resource description and access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community, and is now seeking extension to the archives community with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between metadata models from the information communities, including publishers. The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.
    Content
    Lecture given in session 93, Cataloguing, of the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
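    A toy sketch, using the Python rdflib package, of what 'representing a standard element set in RDF' amounts to in practice: one element is declared as an rdf:Property in its own namespace, with a label and a definition. The namespace URI and element name are placeholders, not the actual IFLA namespaces.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Placeholder namespace; the real ISBD/FRBR namespaces are managed
# by the IFLA Namespaces project.
EX = Namespace("http://example.org/isbd/elements/")

g = Graph()
g.bind("ex", EX)

# Declare one element of the set as an RDF property with a label.
g.add((EX.hasTitleProper, RDF.type, RDF.Property))
g.add((EX.hasTitleProper, RDFS.label, Literal("has title proper", lang="en")))
g.add((EX.hasTitleProper, RDFS.comment,
       Literal("Relates a resource to its title proper.", lang="en")))

print(g.serialize(format="turtle"))
```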
  5. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.00
    0.004478364 = product of:
      0.031348545 = sum of:
        0.02465703 = weight(_text_:web in 4550) [ClassicSimilarity], result of:
          0.02465703 = score(doc=4550,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 4550, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.020074548 = score(doc=4550,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods has been proposed for harvesting publication metadata en masse, but many technological solutions easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library to populate CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to further enrich our metadata. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and requires only that the user can access Web of Science and run the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
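    A minimal sketch along the lines the abstract describes, not the authors' actual template script: for each DOI in a list, metadata is fetched from the CrossRef API via the Habanero library and written to a CSV file ready for batch import. The DOIs, column choices and output path are illustrative.

```python
import csv
from habanero import Crossref

cr = Crossref()
dois = ["10.1000/example.1", "10.1000/example.2"]  # placeholder DOIs

with open("batch_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["doi", "title", "journal", "year"])  # illustrative columns
    for doi in dois:
        # CrossRef metadata for one DOI; the payload sits under "message".
        work = cr.works(ids=doi)["message"]
        writer.writerow([
            doi,
            (work.get("title") or [""])[0],
            (work.get("container-title") or [""])[0],
            work.get("issued", {}).get("date-parts", [[None]])[0][0],
        ])
```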
  6. Miller, S.: Introduction to ontology concepts and terminology : DC-2013 Tutorial, September 2, 2013. (2013) 0.00
    0.0028179463 = product of:
      0.039451245 = sum of:
        0.039451245 = weight(_text_:web in 1075) [ClassicSimilarity], result of:
          0.039451245 = score(doc=1075,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.4079388 = fieldWeight in 1075, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1075)
      0.071428575 = coord(1/14)
    
    Content
    Tutorial topics and outline 1. Tutorial Background Overview The Semantic Web, Linked Data, and the Resource Description Framework 2. Ontology Basics and RDFS Tutorial Semantic modeling, domain ontologies, and RDF Vocabulary Description Language (RDFS) concepts and terminology Examples: domain ontologies, models, and schemas Exercises 3. OWL Overview Tutorial Web Ontology Language (OWL): selected concepts and terminology Exercises
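    To make the RDFS and OWL parts of the outline concrete, here is a small hedged sketch of a domain ontology built with rdflib in Python: two classes, a subclass relation, and a property with a domain and a range. The vocabulary is invented for the example.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ontology/")
g = Graph()
g.bind("ex", EX)

# Two classes and a subclass relation (core RDFS constructs).
g.add((EX.Document, RDF.type, RDFS.Class))
g.add((EX.Article, RDF.type, OWL.Class))  # an OWL class declaration
g.add((EX.Article, RDFS.subClassOf, EX.Document))

# A property with an explicit domain and range.
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.hasAuthor, RDF.type, RDF.Property))
g.add((EX.hasAuthor, RDFS.domain, EX.Document))
g.add((EX.hasAuthor, RDFS.range, EX.Person))

print(g.serialize(format="turtle"))
```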
  7. Söhler, M.: Schluss mit Schema F (2011) 0.00
    0.0022277823 = product of:
      0.03118895 = sum of:
        0.03118895 = weight(_text_:web in 4439) [ClassicSimilarity], result of:
          0.03118895 = score(doc=4439,freq=10.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.32250395 = fieldWeight in 4439, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4439)
      0.071428575 = coord(1/14)
    
    Abstract
    With Schema.org and the Semantic Web, search engines are to learn to understand.
    Content
    "Wörter haben oft mehrere Bedeutungen. Einige kennen den "Kanal" als künstliche Wasserstraße, andere vom Fernsehen. Die Waage kann zum Erfassen des Gewichts nützlich sein oder zur Orientierung auf der Horoskopseite. Casablanca ist eine Stadt und ein Film zugleich. Wo Menschen mit der Zeit Bedeutungen unterscheiden und verarbeiten lernen, können dies Suchmaschinen von selbst nicht. Stets listen sie dumpf hintereinander weg alles auf, was sie zu einem Thema finden. Damit das nicht so bleibt, haben sich nun Google, Yahoo und die zu Microsoft gehörende Suchmaschine Bing zusammengetan, um der Suche im Netz mehr Verständnis zu verpassen. Man spricht dabei auch von einer "semantischen Suche". Das Ergebnis heißt Schema.org. Wer die Webseite einmal besucht, sich ein wenig in die Unterstrukturen hereinklickt und weder Vorkenntnisse im Programmieren noch im Bereich des semantischen Webs hat, wird sich überfordert und gelangweilt wieder abwenden. Doch was hier entstehen könnte, hat das Zeug dazu, Teile des Netzes und speziell die Funktionen von Suchmaschinen mittel- oder langfristig zu verändern. "Große Player sind dabei, sich auf Standards zu einigen", sagt Daniel Bahls, Spezialist für Semantische Technologien beim ZBW Leibniz-Informationszentrum Wirtschaft in Hamburg. "Die semantischen Technologien stehen schon seit Jahren im Raum und wurden bisher nur im kleineren Kontext verwendet." Denn Schema.org lädt Entwickler, Forscher, die Semantic-Web-Community und am Ende auch alle Betreiber von Websites dazu ein, an der Umgestaltung der Suche im Netz mitzuwirken. Inhalte von Websites sollen mit einem speziellen, aber einheitlichen Vokabular für die Crawler - die Analyseprogramme der Suchmaschinen - gekennzeichnet und aufbereitet werden.
    Indem Schlagworte, sogenannte Tags, in den für Normal-User nicht sichtbaren Teil des Codes von Websites eingebettet werden, sind Suchmachinen nicht mehr so sehr auf die Analyse der natürlichen Sprache angewiesen, um Texte inhaltlich zu erfassen. Im Blog ZBW Mediatalk wird dies als "Semantic Web light" bezeichnet - ein semantisches Web auf niedrigster Ebene. Aber selbst das werde "schon viel bewirken", meint Bahls. "Das semantische Web wird sich über die nächsten Jahrzehnte evolutionär weiterentwickeln." Einen "Abschluss" werde es nie geben, "da eine einheitliche Formalisierung von Begrifflichkeiten auf feiner Stufe kaum möglich ist". Die Ergebnisse aus Schema.org würden "zeitnah" in die Suchmaschine integriert, "denn einen Zeitplan" gebe es nicht, so Stefan Keuchel, Pressesprecher von Google Deutschland. Bis das so weit ist, hilft der Verweis von Daniel Bahns auf die bereits existierende semantische Suchmaschine Sig.ma. Geschwindigkeit und Menge der Ergebnisse nach einer Suchanfrage spielen hier keine Rolle. Sig.ma sammelt seine Informationen allein im Bereich des semantischen Webs und listet nach einer Anfrage alles Bekannte strukturiert auf.
  8. Wallis, R.; Isaac, A.; Charles, V.; Manguinhas, H.: Recommendations for the application of Schema.org to aggregated cultural heritage metadata to increase relevance and visibility to search engines : the case of Europeana (2017) 0.00
    0.0017612164 = product of:
      0.02465703 = sum of:
        0.02465703 = weight(_text_:web in 3372) [ClassicSimilarity], result of:
          0.02465703 = score(doc=3372,freq=4.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25496176 = fieldWeight in 3372, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3372)
      0.071428575 = coord(1/14)
    
    Abstract
    Europeana provides access to more than 54 million cultural heritage objects through its portal Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted authoritative repository of cultural heritage objects. Indeed, even though its portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the Schema.org vocabulary and best practices. Schema.org metadata embedded in HTML is consumed by search engines to power rich services (such as the Google Knowledge Graph). Schema.org is an open and widely adopted initiative (used by over 12 million domains) backed by Google, Bing, Yahoo!, and Yandex for sharing metadata across the web. It underpins the emergence of new web techniques, such as so-called Semantic SEO. Our research addressed the representation of the embedded metadata as part of the Europeana HTML pages and sitemaps so that the re-use of this data can be optimized. The practical objective of our work is to produce a Schema.org representation of Europeana resources described in EDM that is as rich as possible and tailored to Europeana's realities and user needs as well as to the search engines and their users.
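    A heavily simplified, hedged sketch of the kind of mapping the paper works out: a few EDM-style properties are translated into schema.org properties on a CreativeWork, which is then serialized as JSON-LD for embedding. The property correspondences shown are illustrative choices, not Europeana's published recommendations.

```python
import json

# Illustrative correspondence between a few EDM properties and
# schema.org properties; not Europeana's published mapping.
EDM_TO_SCHEMA = {
    "dc:title": "name",
    "dc:creator": "creator",
    "dcterms:created": "dateCreated",
    "edm:rights": "license",
}

edm_record = {  # placeholder EDM-style record
    "dc:title": "View of Delft",
    "dc:creator": "Johannes Vermeer",
    "dcterms:created": "1661",
    "edm:rights": "http://creativecommons.org/publicdomain/mark/1.0/",
}

schema_org = {"@context": "https://schema.org", "@type": "CreativeWork"}
for edm_prop, schema_prop in EDM_TO_SCHEMA.items():
    if edm_prop in edm_record:
        schema_org[schema_prop] = edm_record[edm_prop]

print(json.dumps(schema_org, indent=2))
```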
  9. Söhler, M.: "Dumm wie Google" war gestern : semantische Suche im Netz (2011) 0.00
    0.0017435154 = product of:
      0.024409214 = sum of:
        0.024409214 = weight(_text_:web in 4440) [ClassicSimilarity], result of:
          0.024409214 = score(doc=4440,freq=8.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 4440, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4440)
      0.071428575 = coord(1/14)
    
    Content
    - New standards: Yet what could emerge here has the potential to change parts of the web, and the functions of search engines in particular, in the medium or long term. "Big players are in the process of agreeing on standards," says Daniel Bahls, specialist for semantic technologies at the ZBW Leibniz-Informationszentrum Wirtschaft in Hamburg. "Semantic technologies have been in the air for years and have so far only been used in smaller contexts." For Schema.org invites developers, researchers, the Semantic Web community and, in the end, all operators of websites to take part in reshaping web search. "With this, Google, Bing and Yahoo! want to put an end to the info chaos of the WWW," writes André Vatter in the blog ZBW Mediatalk. Content on websites is to be tagged and prepared for the search engines' crawlers with a special but uniform vocabulary. By embedding keywords, so-called tags, in the code of websites, search engines are no longer so dependent on analysing natural language in order to grasp the content of texts. In the blog this is called "Semantic Web light" - a semantic web at the lowest level. But even that will "already achieve a lot", says Bahls. "The Semantic Web will keep evolving over the coming decades." There will never be a "conclusion", "since a uniform formalisation of concepts at a fine-grained level is hardly possible."
  10. Edmunds, J.: Roadmap to nowhere : BIBFLOW, BIBFRAME, and linked data for libraries (2017) 0.00
    0.0014944416 = product of:
      0.020922182 = sum of:
        0.020922182 = weight(_text_:web in 3523) [ClassicSimilarity], result of:
          0.020922182 = score(doc=3523,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 3523, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3523)
      0.071428575 = coord(1/14)
    
    Abstract
    On December 12, 2016, Carl Stahmer and MacKenzie Smith presented at the CNI Members Fall Meeting about the BIBFLOW project, self-described on Twitter as "a two-year project of the UC Davis University Library and Zepheira investigating the future of library technical services." In her opening remarks, Ms. Smith, University Librarian at UC Davis, stated that one of the goals of the project was to devise a roadmap "to get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon." The notion that where libraries are today is somehow behind the times is one of the commonly heard rationales behind a move to linked data. Stated more precisely:
    - Libraries devote considerable time and resources to producing high-quality bibliographic metadata
    - This metadata is stored in unconnected silos
    - This metadata is in a format (MARC) that is incompatible with technologies of the emerging Semantic Web
    - The visibility of library metadata is diminished as a result of the two points above
    Are these assertions true? If yes, is linked data the solution?
  11. Hardesty, J.L.; Young, J.B.: ¬The semantics of metadata : Avalon Media System and the move to RDF (2017) 0.00
    0.0014944416 = product of:
      0.020922182 = sum of:
        0.020922182 = weight(_text_:web in 3896) [ClassicSimilarity], result of:
          0.020922182 = score(doc=3896,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.21634221 = fieldWeight in 3896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
      0.071428575 = coord(1/14)
    
    Abstract
    The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
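    As a hedged illustration of the MODS-to-RDF direction of the migration (not Avalon's actual mapping), the sketch below pulls a title out of a minimal MODS XML fragment and restates it as an RDF triple using a Dublin Core term. The fragment and the item URI are invented for the example.

```python
from lxml import etree
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

MODS_NS = {"mods": "http://www.loc.gov/mods/v3"}

# Minimal, invented MODS fragment for the example.
mods_xml = b"""
<mods xmlns="http://www.loc.gov/mods/v3">
  <titleInfo><title>Oral History Interview</title></titleInfo>
</mods>
"""

tree = etree.fromstring(mods_xml)
title = tree.findtext("mods:titleInfo/mods:title", namespaces=MODS_NS)

# Restate the descriptive metadata as an RDF statement about the object.
g = Graph()
item = URIRef("http://example.org/repository/item/1")  # placeholder URI
g.add((item, DCTERMS.title, Literal(title)))
print(g.serialize(format="turtle"))
```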
  12. ¬The Dublin Core Metadata Element Set (2012) 0.00
    0.0010192095 = product of:
      0.014268933 = sum of:
        0.014268933 = weight(_text_:information in 4790) [ClassicSimilarity], result of:
          0.014268933 = score(doc=4790,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.27429342 = fieldWeight in 4790, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=4790)
      0.071428575 = coord(1/14)
    
    Abstract
    Defines fifteen metadata elements for resource description in a cross-disciplinary information environment.
    Editor
    National Information Standards Organization
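    As a small, hedged illustration of the element set in use, the snippet below emits a few of the fifteen elements as HTML meta tags, one common way of embedding simple Dublin Core in a web page; the values are placeholders.

```python
# A few of the fifteen DCMES elements, with placeholder values.
dc = {
    "DC.title": "Understanding the Dublin Core",
    "DC.creator": "Example Author",
    "DC.date": "2012",
    "DC.language": "en",
    "DC.format": "text/html",
}

# Conventional schema declaration followed by one meta tag per element.
print('<link rel="schema.DC" href="http://purl.org/dc/elements/1.1/">')
for name, content in dc.items():
    print(f'<meta name="{name}" content="{content}">')
```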
  13. Riley, J.: Understanding metadata : what is metadata, and what is it for? (2017) 0.00
    7.2068995E-4 = product of:
      0.010089659 = sum of:
        0.010089659 = weight(_text_:information in 2005) [ClassicSimilarity], result of:
          0.010089659 = score(doc=2005,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 2005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=2005)
      0.071428575 = coord(1/14)
    
    Imprint
    Baltimore : National Information Standards Organization
  14. Suranofsky, M.; McColl, L.: a Google Sheets add-on that uses the WorldCat Search API : MatchMarc (2019) 0.00
    6.115257E-4 = product of:
      0.00856136 = sum of:
        0.00856136 = weight(_text_:information in 5442) [ClassicSimilarity], result of:
          0.00856136 = score(doc=5442,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 5442, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5442)
      0.071428575 = coord(1/14)
    
    Abstract
    Lehigh University Libraries has developed a new tool for querying WorldCat using the WorldCat Search API. The tool is a Google Sheets add-on and is available now via the Google Sheets Add-ons menu under the name "MatchMarc." The add-on is easily customizable, with no knowledge of coding needed. For a given ISBN or LCCN, the tool returns a single "best" OCLC record number and its bibliographic information, allowing the user to set up and define what "best" means. Because all of the information - the input, the criteria, and the results - exists in the Google Sheets environment, efficient workflows can be developed from this flexible starting point. This article discusses the development of the add-on, how it works, and future plans for its development.
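    The add-on itself runs as a script inside Google Sheets; the following is only a hedged Python sketch of the underlying lookup. The endpoint shape follows the classic WorldCat Search API content service, and the wskey, ISBN and field location are placeholders and assumptions, not details taken from the article.

```python
import requests
from lxml import etree

WSKEY = "YOUR_WSKEY"      # placeholder API key
isbn = "9780262035613"    # example ISBN

# Assumed endpoint of the classic WorldCat Search API content service.
url = f"http://www.worldcat.org/webservices/catalog/content/isbn/{isbn}"
resp = requests.get(url, params={"wskey": WSKEY})

# The service returns MARCXML; the OCLC number conventionally sits in
# control field 001.
record = etree.fromstring(resp.content)
ns = {"marc": "http://www.loc.gov/MARC21/slim"}
oclc_number = record.findtext('marc:controlfield[@tag="001"]', namespaces=ns)
print(oclc_number)
```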