Search (37 results, page 1 of 2)

  • × theme_ss:"Metadaten"
  • × type_ss:"el"
  1. Godby, C.J.; Young, J.A.; Childress, E.: A repository of metadata crosswalks (2004) 0.04
    0.037206076 = product of:
      0.18603037 = sum of:
        0.18603037 = weight(_text_:readable in 1155) [ClassicSimilarity], result of:
          0.18603037 = score(doc=1155,freq=4.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.67199206 = fieldWeight in 1155, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
      0.2 = coord(1/5)
    
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
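    The three-part model is easy to picture in code. Below is a minimal sketch (plain Python, not the paper's METS schema) of a crosswalk record that associates the crosswalk with its source and target standards, each optionally carrying a machine-readable encoding and a human-readable description; all names and URLs are illustrative.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class StandardRef:
            name: str                           # e.g. "MARC21" or "Dublin Core"
            encoding_url: Optional[str] = None  # machine-readable schema, if any
            description: Optional[str] = None   # human-readable documentation

        @dataclass
        class Crosswalk:
            source: StandardRef
            target: StandardRef
            encoding_url: Optional[str] = None  # e.g. an XSLT that executes the mapping
            description: Optional[str] = None

        walk = Crosswalk(
            source=StandardRef("MARC21"),
            target=StandardRef("Dublin Core"),
            description="Field-level mapping from MARC21 bibliographic to simple DC",
        )
        print(walk.source.name, "->", walk.target.name)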
  2. Edmunds, J.: Roadmap to nowhere : BIBFLOW, BIBFRAME, and linked data for libraries (2017) 0.03
    0.028453577 = product of:
      0.07113394 = sum of:
        0.045269795 = weight(_text_:bibliographic in 3523) [ClassicSimilarity], result of:
          0.045269795 = score(doc=3523,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.2580748 = fieldWeight in 3523, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=3523)
        0.025864149 = product of:
          0.051728297 = sum of:
            0.051728297 = weight(_text_:data in 3523) [ClassicSimilarity], result of:
              0.051728297 = score(doc=3523,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.3630661 = fieldWeight in 3523, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3523)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    On December 12, 2016, Carl Stahmer and MacKenzie Smith presented at the CNI Members Fall Meeting about the BIBFLOW project, self-described on Twitter as "a two-year project of the UC Davis University Library and Zepheira investigating the future of library technical services." In her opening remarks, Ms. Smith, University Librarian at UC Davis, stated that one of the goals of the project was to devise a roadmap "to get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon." The notion that where libraries are today is somehow behind the times is one of the commonly heard rationales behind a move to linked data. Stated more precisely:
    - Libraries devote considerable time and resources to producing high-quality bibliographic metadata
    - This metadata is stored in unconnected silos
    - This metadata is in a format (MARC) that is incompatible with technologies of the emerging Semantic Web
    - The visibility of library metadata is diminished as a result of the two points above
    Are these assertions true? If yes, is linked data the solution?
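    For readers weighing the third assertion, the contrast is between MARC's field/subfield layout and the triple statements of the Semantic Web. The sketch below (plain Python; record identifier and field content invented for illustration) shows the kind of lift from a MARC-style title field to a single N-Triples statement that linked-data proposals presuppose.

        # One MARC-like title field rewritten as one N-Triples statement.
        marc_field = ("245", {"a": "Roadmap to nowhere"})  # invented record content
        subject = "<http://example.org/work/3523>"         # hypothetical work URI
        predicate = "<http://purl.org/dc/elements/1.1/title>"
        print(f'{subject} {predicate} "{marc_field[1]["a"]}" .')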
  3. Miller, E.: An introduction to the Resource Description Framework (1998) 0.03
    0.026308669 = product of:
      0.13154334 = sum of:
        0.13154334 = weight(_text_:readable in 1231) [ClassicSimilarity], result of:
          0.13154334 = score(doc=1231,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.47517014 = fieldWeight in 1231, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1231)
      0.2 = coord(1/5)
    
    Abstract
    The Resource Description Framework (RDF) is an infrastructure that enables the encoding, exchange and reuse of structured metadata. RDF is an application of XML that imposes needed structural constraints to provide unambiguous methods of expressing semantics. RDF additionally provides a means for publishing both human-readable and machine-processable vocabularies designed to encourage the reuse and extension of metadata semantics among disparate information communities. The structural constraints RDF imposes to support the consistent encoding and exchange of standardized metadata provide for the interchangeability of separate packages of metadata defined by different resource description communities.
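    A minimal sketch of such structured metadata, assuming the third-party Python library rdflib (not mentioned in the article itself): one resource is described with Dublin Core properties, and the same graph is then serialized in two interchangeable encodings. The resource URI is invented.

        from rdflib import Graph, URIRef, Literal
        from rdflib.namespace import DC

        g = Graph()
        doc = URIRef("http://www.example.org/intro-to-rdf")  # hypothetical resource
        g.add((doc, DC.title, Literal("An introduction to the Resource Description Framework")))
        g.add((doc, DC.creator, Literal("Miller, E.")))

        # The same structured metadata can be exchanged in more than one encoding.
        print(g.serialize(format="turtle"))
        print(g.serialize(format="xml"))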
  4. McCallum, S.M.: Extending MARC for bibliographic control in the Web environment : Challenges and alternatives (2000) 0.03
    0.025608465 = product of:
      0.12804233 = sum of:
        0.12804233 = weight(_text_:bibliographic in 6803) [ClassicSimilarity], result of:
          0.12804233 = score(doc=6803,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.7299458 = fieldWeight in 6803, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=6803)
      0.2 = coord(1/5)
    
    Footnote
    Paper for the conference 'Bibliographic control for the new millennium' held in Washington, DC at the Library of Congress, November 2000
  5. Caplan, P.: International metadata initiatives : lessons in bibliographic control (2000) 0.03
    0.025608465 = product of:
      0.12804233 = sum of:
        0.12804233 = weight(_text_:bibliographic in 6804) [ClassicSimilarity], result of:
          0.12804233 = score(doc=6804,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.7299458 = fieldWeight in 6804, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=6804)
      0.2 = coord(1/5)
    
    Footnote
    Paper for the conference 'Bibliographic control for the new millennium' held in Washington, DC at the Library of Congress, November 2000
  6. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.02
    0.023969416 = product of:
      0.059923537 = sum of:
        0.042680774 = weight(_text_:bibliographic in 3965) [ClassicSimilarity], result of:
          0.042680774 = score(doc=3965,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.24331525 = fieldWeight in 3965, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=3965)
        0.017242765 = product of:
          0.03448553 = sum of:
            0.03448553 = weight(_text_:data in 3965) [ClassicSimilarity], result of:
              0.03448553 = score(doc=3965,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24204408 = fieldWeight in 3965, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3965)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD) along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entity-relationship model of FRBR in resource description framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organisation System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA - resource description and access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community, and is now seeking extension to the archives community with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between metadata models from the information communities, including publishers. The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.
    The paper discusses the importance of these initiatives in releasing as linked data the very large quantities of rich, professionally-generated metadata stored in formats based on these standards, such as UNIMARC and MARC21, addressing such issues as critical mass for semantic and statistical inferencing, integration with user- and machine-generated metadata, and authenticity, veracity and trust. The paper also discusses related initiatives to release controlled vocabularies, including the Dewey Decimal Classification (DDC), ISBD, Library of Congress Name Authority File (LCNAF), Library of Congress Subject Headings (LCSH), Rameau (French subject headings), Universal Decimal Classification (UDC), and the Virtual International Authority File (VIAF) as linked data. Finally, the paper discusses the potential collective impact of these initiatives on metadata workflows and management systems.
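    As a minimal sketch of the vocabulary-publishing pattern the paper describes, the following assumes the Python library rdflib and models one controlled-vocabulary term as a SKOS concept ready to be released as linked data; the concept URIs and labels are invented for illustration.

        from rdflib import Graph, URIRef, Literal
        from rdflib.namespace import SKOS

        g = Graph()
        concept = URIRef("http://example.org/vocab/metadata")  # invented concept URI
        g.add((concept, SKOS.prefLabel, Literal("Metadata", lang="en")))
        g.add((concept, SKOS.prefLabel, Literal("Metadaten", lang="de")))
        g.add((concept, SKOS.broader, URIRef("http://example.org/vocab/information")))
        print(g.serialize(format="turtle"))  # ready to publish as linked data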
  7. DC-2013: International Conference on Dublin Core and Metadata Applications : Online Proceedings (2013) 0.02
    0.02260745 = product of:
      0.05651862 = sum of:
        0.030179864 = weight(_text_:bibliographic in 1076) [ClassicSimilarity], result of:
          0.030179864 = score(doc=1076,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.17204987 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=1076)
        0.02633876 = product of:
          0.05267752 = sum of:
            0.05267752 = weight(_text_:data in 1076) [ClassicSimilarity], result of:
              0.05267752 = score(doc=1076,freq=14.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.36972845 = fieldWeight in 1076, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1076)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The collocated conferences for DC-2013 and iPRES-2013 in Lisbon attracted 392 participants from over 37 countries. In addition to the Tuesday through Thursday conference days, comprising peer-reviewed papers and special sessions, 223 participants attended pre-conference tutorials and 246 participated in post-conference workshops for the collocated events. The peer-reviewed papers and presentations are available on the conference website Presentation page (URLs above). In sum, it was a great conference. In addition to links to PDFs of papers, project reports and posters (and their associated presentations), the published proceedings include presentation PDFs for the following:
    KEYNOTES
    - Darling, we need to talk - Gildas Illien
    TUTORIALS
    - Ivan Herman: "Introduction to Linked Open Data (LOD)"
    - Steven Miller: "Introduction to Ontology Concepts and Terminology"
    - Kai Eckert: "Metadata Provenance"
    - Daniel Garijo: "The W3C Provenance Ontology"
    SPECIAL SESSIONS
    - "Application Profiles as an Alternative to OWL Ontologies"
    - "Long-term Preservation and Governance of RDF Vocabularies (W3C Sponsored)"
    - "Data Enrichment and Transformation in the LOD Context: Poor & Popular vs Rich & Lonely--Can't we achieve both?"
    - "Why Schema.org?"
    Content
    FULL PAPERS
    - Provenance and Annotations for Linked Data - Kai Eckert
    - How Portable Are the Metadata Standards for Scientific Data? A Proposal for a Metadata Infrastructure - Jian Qin, Kai Li
    - Lessons Learned in Implementing the Extended Date/Time Format in a Large Digital Library - Hannah Tarver, Mark Phillips
    - Towards the Representation of Chinese Traditional Music: A State of the Art Review of Music Metadata Standards - Mi Tian, György Fazekas, Dawn Black, Mark Sandler
    - Maps and Gaps: Strategies for Vocabulary Design and Development - Diane Ileana Hillmann, Gordon Dunsire, Jon Phipps
    - A Method for the Development of Dublin Core Application Profiles (Me4DCAP V0.1): A Description - Mariana Curado Malta, Ana Alice Baptista
    - Find and Combine Vocabularies to Design Metadata Application Profiles using Schema Registries and LOD Resources - Tsunagu Honma, Mitsuharu Nagamori, Shigeo Sugimoto
    - Achieving Interoperability between the CARARE Schema for Monuments and Sites and the Europeana Data Model - Antoine Isaac, Valentine Charles, Kate Fernie, Costis Dallas, Dimitris Gavrilis, Stavros Angelis
    - With a Focused Intent: Evolution of DCMI as a Research Community - Jihee Beak, Richard P. Smiraglia
    - Metadata Capital in a Data Repository - Jane Greenberg, Shea Swauger, Elena Feinstein
    - DC Metadata is Alive and Well - A New Standard for Education - Liddy Nevile
    - Representation of the UNIMARC Bibliographic Data Format in Resource Description Framework - Gordon Dunsire, Mirna Willer, Predrag Perozic
  8. Bearman, D.; Miller, E.; Rust, G.; Trant, J.; Weibel, S.: A common model to support interoperable metadata : progress report on reconciling metadata requirements from the Dublin Core and INDECS/DOI communities (1999) 0.02
    0.020067489 = product of:
      0.050168723 = sum of:
        0.03772483 = weight(_text_:bibliographic in 1249) [ClassicSimilarity], result of:
          0.03772483 = score(doc=1249,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.21506234 = fieldWeight in 1249, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1249)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 1249) [ClassicSimilarity], result of:
              0.024887787 = score(doc=1249,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 1249, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1249)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Dublin Core metadata community and the INDECS/DOI community of authors, rights holders, and publishers are seeking common ground in the expression of metadata for information resources. Recent meetings at the 6th Dublin Core Workshop in Washington DC sketched out common models for semantics (informed by the requirements articulated in the IFLA Functional Requirements for the Bibliographic Record) and conventions for knowledge representation (based on the Resource Description Framework under development by the W3C). Further development of detailed requirements is planned by both communities in the coming months with the aim of fully representing the metadata needs of each. An open "Schema Harmonization" working group has been established to identify a common framework to support interoperability among these communities. The present document represents a starting point identifying historical developments and common requirements of these perspectives on metadata and charts a path for harmonizing their respective conceptual models. It is hoped that collaboration over the coming year will result in agreed semantic and syntactic conventions that will support a high degree of interoperability among these communities, ideally expressed in a single data model and using common, standard tools.
  9. Kaparova, N.; Shwartsman, M.: Creation of the electronic resources metadatabase in Russia : problems and prospects (2000) 0.02
    0.018107919 = product of:
      0.09053959 = sum of:
        0.09053959 = weight(_text_:bibliographic in 5405) [ClassicSimilarity], result of:
          0.09053959 = score(doc=5405,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.5161496 = fieldWeight in 5405, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=5405)
      0.2 = coord(1/5)
    
    Footnote
    Paper presented at the IFLA General Conference, Division IV Bibliographic Control, Jerusalem, 2000
  10. Dillon, M.: Metadata for Web resources : how metadata works on the Web (2000) 0.02
    0.018107919 = product of:
      0.09053959 = sum of:
        0.09053959 = weight(_text_:bibliographic in 6798) [ClassicSimilarity], result of:
          0.09053959 = score(doc=6798,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.5161496 = fieldWeight in 6798, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=6798)
      0.2 = coord(1/5)
    
    Footnote
    Paper for the conference 'Bibliographic control for the new millennium' held in Washington, DC at the Library of Congress, November 2000
  11. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden [Preservation and archiving: coordination on the basis of a common metadata format in the German and Austrian library networks] (2021) 0.01
    0.0132987825 = product of:
      0.06649391 = sum of:
        0.06649391 = sum of:
          0.029865343 = weight(_text_:data in 266) [ClassicSimilarity], result of:
            0.029865343 = score(doc=266,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.2096163 = fieldWeight in 266, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.046875 = fieldNorm(doc=266)
          0.036628567 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
            0.036628567 = score(doc=266,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.23214069 = fieldWeight in 266, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=266)
      0.2 = coord(1/5)
    
    Date
    22. 5.2021 12:43:05
    Source
    Open Password. 2021, Nr.928 vom 31.05.2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzI5OSwiMjc2N2ZlZjQwMDUwIiwwLDAsMjY4LDFd]
  12. Baker, T.: Languages for Dublin Core (1998) 0.01
    0.013154334 = product of:
      0.06577167 = sum of:
        0.06577167 = weight(_text_:readable in 1257) [ClassicSimilarity], result of:
          0.06577167 = score(doc=1257,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.23758507 = fieldWeight in 1257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1257)
      0.2 = coord(1/5)
    
    Abstract
    Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is being adapted now for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs versus general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, will be to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins. The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi. Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, will be to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages. In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores. As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressible in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
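    The token/label split Baker describes can be pictured as a small lookup table: the machine-readable indexing token is shared by every Dublin Core translation, while names and definitions are localized. A sketch in plain Python; the translated labels are illustrative, not official DCMI translations.

        # Shared machine-readable tokens with localized human-readable names.
        DC_LABELS = {
            "Creator": {"en": "Creator", "de": "Urheber", "fr": "Créateur"},
            "Subject": {"en": "Subject", "de": "Thema", "fr": "Sujet"},
        }

        def display_label(token: str, lang: str) -> str:
            """Localized name for a token; the token itself never changes."""
            return DC_LABELS.get(token, {}).get(lang, token)

        print(display_label("Creator", "de"))  # Urheber - indexed as "Creator" everywhere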
  13. Suranofsky, M.; McColl, L.: A Google Sheets add-on that uses the WorldCat Search API : MatchMarc (2019) 0.01
    0.0090539595 = product of:
      0.045269795 = sum of:
        0.045269795 = weight(_text_:bibliographic in 5442) [ClassicSimilarity], result of:
          0.045269795 = score(doc=5442,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.2580748 = fieldWeight in 5442, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=5442)
      0.2 = coord(1/5)
    
    Abstract
    Lehigh University Libraries has developed a new tool for querying WorldCat using the WorldCat Search API. The tool is a Google Sheets add-on and is available now via the Google Sheets Add-ons menu under the name "MatchMarc." The add-on is easily customizable, with no knowledge of coding needed. The tool will return a single "best" OCLC record number and its bibliographic information for a given ISBN or LCCN, allowing the user to set up and define what counts as "best." Because all of the information - the input, the criteria, and the results - exists in the Google Sheets environment, efficient workflows can be developed from this flexible starting point. This article will discuss the development of the add-on, how it works, and future plans for development.
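    A hedged sketch of the kind of lookup the add-on performs, in Python rather than Apps Script. The URL shown follows the classic WorldCat Search API pattern, but the exact path, parameters, and response format should be treated as assumptions; a wskey (API key) is required in any case.

        import requests

        def lookup_isbn(isbn: str, wskey: str) -> str:
            # Classic WorldCat Search API URL pattern (assumed, not verified here).
            url = f"http://www.worldcat.org/webservices/catalog/content/isbn/{isbn}"
            resp = requests.get(url, params={"wskey": wskey})
            resp.raise_for_status()
            return resp.text  # MARCXML to be ranked by the caller's "best" criteria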
  14. Baker, T.: A grammar of Dublin Core (2000) 0.01
    0.008865855 = product of:
      0.044329274 = sum of:
        0.044329274 = sum of:
          0.01991023 = weight(_text_:data in 1236) [ClassicSimilarity], result of:
            0.01991023 = score(doc=1236,freq=2.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.1397442 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
          0.024419045 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
            0.024419045 = score(doc=1236,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.15476047 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
      0.2 = coord(1/5)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
    Date
    26.12.2011 14:01:22
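    Baker's statement pattern - resource, element, optional qualifier, value - is compact enough to model directly. A minimal plain-Python sketch; the tuple layout is an illustration, not a DCMI syntax:

        from typing import NamedTuple, Optional

        class Statement(NamedTuple):
            resource: str             # what the statement is about
            element: str              # the "noun", e.g. Date
            qualifier: Optional[str]  # the "adjective", e.g. Created
            value: str

        stmt = Statement("http://example.org/doc1", "Date", "Created", "2000-10-16")
        print(f"<{stmt.resource}> {stmt.element}.{stmt.qualifier} = {stmt.value}")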
  15. Daniel Jr., R.; Lagoze, C.: Extending the Warwick framework : from metadata containers to active digital objects (1997) 0.01
    0.007983512 = product of:
      0.039917562 = sum of:
        0.039917562 = product of:
          0.079835124 = sum of:
            0.079835124 = weight(_text_:data in 1264) [ClassicSimilarity], result of:
              0.079835124 = score(doc=1264,freq=42.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.56033987 = fieldWeight in 1264, product of:
                  6.4807405 = tf(freq=42.0), with freq of:
                    42.0 = termFreq=42.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1264)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Defining metadata as "data about data" provokes more questions than it answers. What are the forms of the data and metadata? Can we be more specific about the manner in which the metadata is "about" the data? Are data and metadata distinguished only in the context of their relationship? Is the nature of the relationship between the datasets declarative or procedural? Can the metadata itself be described by other data? Over the past several years, we have been engaged in a number of efforts examining the role, format, composition, and architecture of metadata for networked resources. During this time, we have noticed the tendency to be led astray by comfortable, but somewhat inappropriate, models in the non-digital information environment. Rather than pursuing familiar models, there is the need for a new model that fully exploits the unique combination of computation and connectivity that characterizes the digital library. In this paper, we describe an extension of the Warwick Framework that we call Distributed Active Relationships (DARs). DARs provide a powerful model for representing data and metadata in digital library objects. They explicitly express the relationships between networked resources, and even allow those relationships to be dynamically downloadable and executable. The DAR model is based on the following principles, which our examination of the "data about data" definition has led us to regard as axiomatic:
    * There is no essential distinction between data and metadata. We can only make such a distinction in terms of a particular "about" relationship. As a result, what is metadata in the context of one "about" relationship may be data in another.
    * There is no single "about" relationship. There are many different and important relationships between data resources.
    * Resources can be related without regard for their location. The connectivity in networked information architectures makes it possible to have data in one repository describe data in another repository.
    * The computational power of the networked information environment makes it possible to consider active or dynamic relationships between data sets. This adds considerable power to the "data about data" definition. First, data about another data set may not physically exist, but may be automatically derived. Second, the "about" relationship may be an executable object -- in a sense interpretable metadata. As will be shown, this provides useful mechanisms for handling complex metadata problems such as rights management of digital objects.
    The remainder of this paper describes the development and consequences of the DAR model. Section 2 reviews the Warwick Framework, which is the basis for the model described in this paper. Section 3 examines the concept of the Warwick Framework Catalog, which provides a mechanism for expressing the relationships between the packages in a Warwick Framework container. With that background established, section 4 generalizes the Warwick Framework by removing the restriction that it only contains "metadata". This allows us to consider digital library objects that are aggregations of (possibly distributed) data sets, with the relationships between the data sets expressed using a Warwick Framework Catalog. Section 5 further extends the model by describing Distributed Active Relationships (DARs). DARs are the explicit relationships that have the potential to be executable, as alluded to earlier. Finally, section 6 describes two possible implementations of these concepts.
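    The fourth principle - that an "about" relationship may itself be executable - is the heart of the DAR model, and a toy sketch makes it concrete. Plain Python; the class and method names are invented, not the Warwick Framework's own vocabulary. The derived metadata does not exist until the relationship is applied:

        from typing import Callable

        class ActiveRelationship:
            """An executable "about" relationship between two data sets."""
            def __init__(self, name: str, derive: Callable[[bytes], dict]):
                self.name = name
                self.derive = derive

            def apply(self, data: bytes) -> dict:
                # The metadata is computed on demand, not stored.
                return self.derive(data)

        word_count = ActiveRelationship("hasWordCount",
                                        lambda b: {"words": len(b.split())})
        print(word_count.apply(b"data about data"))  # {'words': 3}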
  16. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.01
    0.0073257135 = product of:
      0.036628567 = sum of:
        0.036628567 = product of:
          0.07325713 = sum of:
            0.07325713 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.07325713 = score(doc=6048,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 9.2007 15:41:14
  17. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.01
    0.006096238 = product of:
      0.03048119 = sum of:
        0.03048119 = product of:
          0.06096238 = sum of:
            0.06096238 = weight(_text_:data in 3895) [ClassicSimilarity], result of:
              0.06096238 = score(doc=3895,freq=12.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.4278775 = fieldWeight in 3895, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes or funding agencies. As data curation is a task typically done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. We therefore present a web scraping tool that does not require digital library curators to program custom web scrapers from scratch: the open-source tool OXPath, an extension of XPath that allows the user to define, in a declarative way, the data to be extracted from websites. Taking one of our own use cases as an example, we guide you in detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we present a syntax highlighting plugin for the popular text editor Atom, which we developed to further support OXPath users and to simplify the authoring process.
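    OXPath itself extends XPath with actions and extraction markers; as a rough analogue of the declarative style (an assumption for illustration, not code from the paper), the following uses plain XPath via the Python library lxml to pull metadata fields from a static page. The HTML structure is invented.

        from lxml import html

        page = html.fromstring("""
        <div class="record">
          <h2 class="title">Web scraping for non-programmers</h2>
          <span class="author">Neumann, M.</span>
        </div>""")

        # Declare what to extract; no custom parsing code.
        record = {
            "title": page.xpath("string(//h2[@class='title'])"),
            "author": page.xpath("string(//span[@class='author'])"),
        }
        print(record)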
  18. Baca, M.; O'Keefe, E.: Sharing standards and expertise in the early 21st century : Moving toward a collaborative, "cross-community" model for metadata creation (2008) 0.01
    0.0059730685 = product of:
      0.029865343 = sum of:
        0.029865343 = product of:
          0.059730686 = sum of:
            0.059730686 = weight(_text_:data in 2321) [ClassicSimilarity], result of:
              0.059730686 = score(doc=2321,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.4192326 = fieldWeight in 2321, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2321)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This paper provides a brief overview of the evolving descriptive metadata landscape, one phenomenon of which can be characterized as "cross-community" metadata, manifested in records that combine carefully considered data value and data content standards. The online catalog of the Morgan Library & Museum provides a real-life illustration of how diverse data content standards and vocabulary tools can be integrated within the classic data structure/technical interchange format of MARC21 to better describe unique, museum-type objects, and to provide better end-user access and understanding. The Morgan experience also shows the value of developing a collaborative model for metadata creation that combines the subject expertise of curators and scholars with the cataloging expertise and knowledge of standards possessed by librarians.
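    One concrete form of the "cross-community" pattern is a MARC21-style subject field whose value comes from a museum vocabulary such as the AAT, with the source vocabulary flagged in subfield $2. A sketch with invented record content, using plain Python dictionaries rather than real MARC:

        # A MARC21-style 650 subject field with an AAT term, source flagged in $2.
        museum_object_record = {
            "245": {"a": "Book of Hours"},                         # invented object
            "650": [{"a": "illuminated manuscripts", "2": "aat"}],
        }
        for heading in museum_object_record["650"]:
            print(f"Subject: {heading['a']} (vocabulary: {heading['2']})")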
  19. What is Schema.org? (2011) 0.01
    0.0059730685 = product of:
      0.029865343 = sum of:
        0.029865343 = product of:
          0.059730686 = sum of:
            0.059730686 = weight(_text_:data in 4437) [ClassicSimilarity], result of:
              0.059730686 = score(doc=4437,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.4192326 = fieldWeight in 4437, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4437)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This site provides a collection of schemas, i.e., HTML tags, that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines including Bing, Google and Yahoo! rely on this markup to improve the display of search results, making it easier for people to find the right web pages. Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results, making it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. So, in the spirit of sitemaps.org, Bing, Google and Yahoo! have come together to provide a shared collection of schemas that webmasters can use.
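    A minimal sketch of what such on-page markup looks like when generated from structured data: the microdata attributes (itemscope, itemtype, itemprop) follow schema.org's published pattern, while the book data and the generating Python code are invented for illustration.

        book = {"name": "Metadata Fundamentals", "author": "Example, A."}  # invented

        markup = (
            f'<div itemscope itemtype="https://schema.org/Book">\n'
            f'  <span itemprop="name">{book["name"]}</span>\n'
            f'  by <span itemprop="author">{book["author"]}</span>\n'
            f'</div>'
        )
        print(markup)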
  20. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.01
    0.005565079 = product of:
      0.027825395 = sum of:
        0.027825395 = product of:
          0.05565079 = sum of:
            0.05565079 = weight(_text_:data in 3870) [ClassicSimilarity], result of:
              0.05565079 = score(doc=3870,freq=10.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.39059696 = fieldWeight in 3870, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3870)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan, upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with meaningful thematic overlay, library holding count data was also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
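    The aggregation step described above - counting subject headings across books to weight the map - is straightforward to sketch in Python. The titles and headings below are invented stand-ins for the actual thirty-nine books and 328 headings:

        from collections import Counter

        book_subjects = {
            "Book A": ["Intelligence service", "United States"],
            "Book B": ["Intelligence service", "International relations"],
        }

        # Aggregate heading frequencies to weight the book-subject map.
        heading_counts = Counter(h for subs in book_subjects.values() for h in subs)
        for heading, count in heading_counts.most_common():
            print(f"{heading}: {count} book(s)")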