Search (59 results, page 2 of 3)

  • × theme_ss:"Metadaten"
  • × year_i:[2010 TO 2020}
  1. Hardesty, J.L.; Young, J.B.: ¬The semantics of metadata : Avalon Media System and the move to RDF (2017) 0.01
    0.007908144 = product of:
      0.03954072 = sum of:
        0.03954072 = weight(_text_:system in 3896) [ClassicSimilarity], result of:
          0.03954072 = score(doc=3896,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.29527056 = fieldWeight in 3896, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
      0.2 = coord(1/5)
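    Each score above is a Lucene ClassicSimilarity (TF-IDF) explain tree: queryWeight = idf × queryNorm, fieldWeight = sqrt(termFreq) × idf × fieldNorm, the term weight is their product, and the displayed document score multiplies the summed term weights by the coordination factor. Below is a minimal Python sketch reproducing the arithmetic of the first result; the function and variable names are illustrative, not part of the search system.

```python
import math

def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
    """Reproduce a Lucene ClassicSimilarity explain tree for a single query term."""
    tf = math.sqrt(freq)                  # tf(freq=4.0) = 2.0
    query_weight = idf * query_norm       # 3.1495528 * 0.04251826 = 0.13391352
    field_weight = tf * idf * field_norm  # 2.0 * 3.1495528 * 0.046875 = 0.29527056
    weight = query_weight * field_weight  # 0.03954072
    return coord * weight                 # 0.2 * 0.03954072 = 0.007908144

score = classic_similarity_score(freq=4.0, idf=3.1495528,
                                 query_norm=0.04251826,
                                 field_norm=0.046875, coord=0.2)
print(round(score, 9))  # ~0.007908144, matching the score shown for result 1
```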
    
    Abstract
    The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
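    As a toy illustration of the kind of migration the abstract describes, moving descriptive metadata from MODS XML into RDF statements, the sketch below parses one MODS title and emits Dublin Core terms with rdflib. This is not the Avalon project's actual mapping; the object URI and the choice of target vocabulary are assumptions made for the example.

```python
from xml.etree import ElementTree as ET
from rdflib import Graph, Literal, Namespace, URIRef

MODS_NS = {"mods": "http://www.loc.gov/mods/v3"}
DCTERMS = Namespace("http://purl.org/dc/terms/")

mods_xml = """
<mods xmlns="http://www.loc.gov/mods/v3">
  <titleInfo><title>Lecture recording, spring 2016</title></titleInfo>
  <typeOfResource>moving image</typeOfResource>
</mods>
"""

root = ET.fromstring(mods_xml)
title = root.find("mods:titleInfo/mods:title", MODS_NS).text
rtype = root.find("mods:typeOfResource", MODS_NS).text

g = Graph()
item = URIRef("https://repository.example.edu/objects/demo-1")  # hypothetical object URI
g.add((item, DCTERMS.title, Literal(title)))
g.add((item, DCTERMS.type, Literal(rtype)))
print(g.serialize(format="turtle"))  # RDF statements instead of a sidecar XML file
```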
  2. Liu, X.; Qin, J.: ¬An interactive metadata model for structural, descriptive, and referential representation of scholarly output (2014) 0.01
    0.0065901205 = product of:
      0.032950602 = sum of:
        0.032950602 = weight(_text_:system in 1253) [ClassicSimilarity], result of:
          0.032950602 = score(doc=1253,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 1253, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1253)
      0.2 = coord(1/5)
    
    Abstract
    The scientific metadata model proposed in this article encompasses both classical descriptive metadata, such as those defined in the Dublin Core Metadata Element Set (DC), and the innovative structural and referential metadata properties that go beyond the classical model. Structural metadata capture the structural vocabulary in research publications; referential metadata include not only citations but also data about other types of scholarly output that are based on or related to the same publication. The article describes the structural, descriptive, and referential (SDR) elements of the metadata model and explains the underlying assumptions and justifications for each major component. ScholarWiki, an experimental system developed as a proof of concept, was built on the wiki platform to let users interact with the metadata and edit, delete, and add metadata. By allowing and encouraging scholars (both as authors and as users) to participate in editing and enhancing the knowledge and metadata, the larger community will benefit from more accurate and effective information retrieval. The ScholarWiki system uses machine-learning techniques that learn from the structural metadata scholars contribute to produce self-enhanced metadata, adding intelligence that automatically enhances and updates the publication metadata wiki pages.
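    As a rough illustration of how the structural, descriptive, and referential (SDR) facets could coexist in a single record, the sketch below uses a plain Python data structure; the field names and identifiers are hypothetical and are not taken from ScholarWiki.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SDRRecord:
    # Descriptive metadata (classical DC-style elements)
    title: str
    creators: List[str]
    date: str
    # Structural metadata (vocabulary describing the publication's structure)
    sections: List[str] = field(default_factory=list)
    # Referential metadata (citations plus other related scholarly output)
    cites: List[str] = field(default_factory=list)
    related_outputs: List[str] = field(default_factory=list)  # datasets, slides, code

record = SDRRecord(
    title="An interactive metadata model for scholarly output",
    creators=["Liu, X.", "Qin, J."],
    date="2014",
    sections=["Introduction", "Methods", "Results"],
    cites=["doi:10.1000/example"],            # hypothetical identifier
    related_outputs=["dataset:example-123"],  # hypothetical identifier
)
print(record.title, len(record.related_outputs))
```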
  3. Sturmane, A.; Eglite, E.; Jankevica-Balode, M.: Subject metadata development for digital resources in Latvia (2014) 0.01
    0.006523886 = product of:
      0.03261943 = sum of:
        0.03261943 = weight(_text_:system in 1963) [ClassicSimilarity], result of:
          0.03261943 = score(doc=1963,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 1963, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1963)
      0.2 = coord(1/5)
    
    Abstract
    The National Library of Latvia (NLL) decided to use the Library of Congress Subject Headings (LCSH) in 2000. At present the NLL Subject Headings Database in Latvian holds approximately 34,000 subject headings and is used for subject cataloging of textual resources, including articles from serials. For digital objects, NLL uses a system similar to the Faceted Application of Subject Terminology (FAST). We successfully use it in the project "In Search of Lost Latvia," one of the milestones in the development of subject cataloging of digital resources in Latvia.
  4. Moulaison Sandy, H.L.; Dykas, F.: High-quality metadata and repository staffing : perceptions of United States-based OpenDOAR participants (2016) 0.01
    0.006523886 = product of:
      0.03261943 = sum of:
        0.03261943 = weight(_text_:system in 2806) [ClassicSimilarity], result of:
          0.03261943 = score(doc=2806,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 2806, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2806)
      0.2 = coord(1/5)
    
    Abstract
    Digital repositories require good metadata, created according to community-based principles that include provisions for interoperability. When metadata is of high quality, digital objects become sharable and metadata can be harvested and reused outside of the local system. A sample of U.S.-based repository administrators from the OpenDOAR initiative were surveyed to understand aspects of the quality and creation of their metadata, and how their metadata could improve. Most respondents (65%) thought their metadata was of average quality; none thought their metadata was high quality or poor quality. The discussion argues that increased strategic staffing will alleviate many perceived issues with metadata quality.
  5. Holzhause, R.; Krömker, H.; Schnöll, M.: Vernetzung von audiovisuellen Inhalten und Metadaten : Metadatengestütztes System zur Generierung und Erschließung von Medienfragmenten (Teil 1) (2016) 0.01
    0.006523886 = product of:
      0.03261943 = sum of:
        0.03261943 = weight(_text_:system in 5636) [ClassicSimilarity], result of:
          0.03261943 = score(doc=5636,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 5636, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5636)
      0.2 = coord(1/5)
    
  6. Holzhause, R.; Krömker, H.; Schnöll, M.: Vernetzung von audiovisuellen Inhalten und Metadaten : Metadatengestütztes System zur Generierung und Erschließung von Medienfragmenten (Teil 2) (2016) 0.01
    0.006523886 = product of:
      0.03261943 = sum of:
        0.03261943 = weight(_text_:system in 5861) [ClassicSimilarity], result of:
          0.03261943 = score(doc=5861,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 5861, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5861)
      0.2 = coord(1/5)
    
  7. DC-2013: International Conference on Dublin Core and Metadata Applications : Online Proceedings (2013) 0.01
    0.0064557428 = product of:
      0.032278713 = sum of:
        0.032278713 = weight(_text_:context in 1076) [ClassicSimilarity], result of:
          0.032278713 = score(doc=1076,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.18316938 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.03125 = fieldNorm(doc=1076)
      0.2 = coord(1/5)
    
    Abstract
    The collocated conferences for DC-2013 and iPRES-2013 in Lisbon attracted 392 participants from over 37 countries. In addition to the Tuesday through Thursday conference days comprising peer-reviewed paper and special sessions, 223 participants attended pre-conference tutorials and 246 participated in post-conference workshops for the collocated events. The peer-reviewed papers and presentations are available on the conference website Presentation page (URLs above). In sum, it was a great conference. In addition to links to PDFs of papers, project reports and posters (and their associated presentations), the published proceedings include presentation PDFs for the following: KEYNOTES -- Gildas Illien: "Darling, we need to talk" TUTORIALS -- Ivan Herman: "Introduction to Linked Open Data (LOD)" -- Steven Miller: "Introduction to Ontology Concepts and Terminology" -- Kai Eckert: "Metadata Provenance" -- Daniel Garijo: "The W3C Provenance Ontology" SPECIAL SESSIONS -- "Application Profiles as an Alternative to OWL Ontologies" -- "Long-term Preservation and Governance of RDF Vocabularies (W3C Sponsored)" -- "Data Enrichment and Transformation in the LOD Context: Poor & Popular vs Rich & Lonely--Can't we achieve both?" -- "Why Schema.org?"
  8. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.01
    0.0064557428 = product of:
      0.032278713 = sum of:
        0.032278713 = weight(_text_:context in 2320) [ClassicSimilarity], result of:
          0.032278713 = score(doc=2320,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.18316938 = fieldWeight in 2320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.03125 = fieldNorm(doc=2320)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The purpose of this paper is to describe a new approach to a well-known problem for digital libraries: how to search across multiple unrelated libraries with a single query. Design/methodology/approach - The approach involves creating new Dewey Decimal Classification terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records. Findings - The best results combined weighted key terms from the title, description and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies. Research limitations/implications - The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity. Practical implications - The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example to enhance recall or precision. Social implications - The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries. Originality/value - The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower-level matches to broader parents and thus approximating the practices of a human cataloger.
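    The evaluation described above relies on mean reciprocal rank: for each record, the reciprocal of the rank at which the cataloger-assigned DDC class appears in the automatically generated ranking, averaged over the sample. A minimal sketch of that metric follows; the DDC numbers in the example are invented.

```python
def mean_reciprocal_rank(ranked_classes, gold_classes):
    """MRR over a sample: 1/rank of the first correct DDC class, 0 if it is absent."""
    total = 0.0
    for ranked, gold in zip(ranked_classes, gold_classes):
        reciprocal = 0.0
        for rank, ddc in enumerate(ranked, start=1):
            if ddc == gold:
                reciprocal = 1.0 / rank
                break
        total += reciprocal
    return total / len(gold_classes)

# Invented example: generated rankings vs. cataloger-assigned classes
generated = [["020", "025", "028"], ["610", "025", "620"]]
assigned = ["025", "620"]
print(mean_reciprocal_rank(generated, assigned))  # (1/2 + 1/3) / 2 ≈ 0.417
```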
  9. Birkner, M.; Gonter, G.; Lackner, K.; Kann, B.; Kranewitter, M.; Mayer, A.; Parschalk, A.: Guideline zur Langzeitarchivierung (2016) 0.01
    0.0055919024 = product of:
      0.027959513 = sum of:
        0.027959513 = weight(_text_:system in 3139) [ClassicSimilarity], result of:
          0.027959513 = score(doc=3139,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 3139, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3139)
      0.2 = coord(1/5)
    
    Abstract
    A guideline offering assistance with the long-term preservation of data and objects in the context of publishing and research. This guideline is intended to support the long-term preservation of data and objects in the context of publishing and research; it is explicitly not applicable to the compliance context. It is meant to enable institutions to ask the right questions when selecting the long-term preservation solution best suited to them and to assist in deciding on a system solution. Long-term preservation systems are understood here as systems that sit in the workflow behind a repository in which digital objects and their metadata are stored, displayed and made searchable. Definitions of terms can be found in the glossary of Cluster C (Aufbau eines Wissensnetzwerks: Erarbeitung eines Referenzmodells für den Aufbau von Repositorien / Themenbereich Terminologie und Standards).
  10. Farney, T.: Using Google Tag Manager to share code : designing shareable tags (2019) 0.00
    0.0046599186 = product of:
      0.023299592 = sum of:
        0.023299592 = weight(_text_:system in 5443) [ClassicSimilarity], result of:
          0.023299592 = score(doc=5443,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 5443, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5443)
      0.2 = coord(1/5)
    
    Abstract
    Sharing code between libraries is not a new phenomenon, and neither is Google Tag Manager (GTM). GTM launched in 2012 as a JavaScript and HTML manager intended to ease the implementation of different analytics trackers and marketing scripts on a website; however, it can also be used to load other code onto a website through its tag system. Exporting and importing tags is a simple process that facilitates code sharing without requiring a high degree of coding experience. The entire process involves creating the script tag in GTM, exporting the GTM content into a sharable export file for someone else to import into their library's GTM container, and finally publishing that imported file to push the code to the website it was designed for. This case study provides an example of designing and sharing a GTM container loaded with advanced Google Analytics configurations, such as event tracking and custom dimensions, for other libraries using the Summon discovery service. It also discusses processes for designing GTM tags for export and best practices for importing and testing GTM content created by other libraries, and concludes by evaluating the pros and cons of encouraging GTM use.
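    As a rough sketch of the export/import workflow described above: a container exported from GTM is a JSON file that the receiving library can inspect before importing it into its own container. The sketch below assumes the export's usual top-level layout (a containerVersion object holding a list of tags); the file name and field names are assumptions and should be checked against a real export.

```python
import json

# Hypothetical path to a container export downloaded from GTM's admin screen
with open("gtm-summon-analytics-export.json", encoding="utf-8") as fh:
    export = json.load(fh)

container = export.get("containerVersion", {})
for tag in container.get("tag", []):
    # List the shared tags (e.g. event tracking, custom dimensions) before importing
    print(tag.get("name"), "-", tag.get("type"))
```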
  11. Hodges, D.W.; Schlottmann, K.: Better archival migration outcomes with Python and the Google Sheets API : reporting from the archives (2019) 0.00
    0.0046599186 = product of:
      0.023299592 = sum of:
        0.023299592 = weight(_text_:system in 5444) [ClassicSimilarity], result of:
          0.023299592 = score(doc=5444,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 5444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5444)
      0.2 = coord(1/5)
    
    Abstract
    Columbia University Libraries recently embarked on a multi-phase project to migrate nearly 4,000 records describing over 70,000 linear feet of archival material from disparate sources and formats into ArchivesSpace. This paper discusses tools and methods brought to bear in Phase 2 of this project, which required us to look closely at how to integrate a large number of legacy finding aids into the new system and merge descriptive data that had diverged in myriad ways. Using Python, XSLT, and a widely available if underappreciated resource, the Google Sheets API, archival and technical library staff devised ways to efficiently report data from different sources and present it in an accessible, user-friendly way. Responses were then fed back into automated data remediation processes to keep the migration project on track and minimize manual intervention. The scripts and processes developed proved very effective and, moreover, show promise well beyond the ArchivesSpace migration. This paper describes the Python/XSLT/Sheets API processes developed and how they opened a path to move beyond CSV-based reporting with flexible, ad hoc data interfaces easily adaptable to meet a variety of purposes.
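    The reporting pattern described above, pushing remediation data from migration scripts into a shared spreadsheet via the Google Sheets API, might look roughly like the sketch below. The spreadsheet ID, sheet range, sample rows, and credentials file are placeholders, and a service account authorized for the Sheets scope is assumed.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)  # hypothetical credentials file
sheets = build("sheets", "v4", credentials=creds)

# Invented sample rows: one per legacy finding aid flagged for remediation
rows = [
    ["bib_id", "title", "issue"],
    ["4078432", "Example Papers, 1901-1950", "missing <unitdate>"],
]
sheets.spreadsheets().values().update(
    spreadsheetId="YOUR_SPREADSHEET_ID",   # placeholder
    range="Remediation!A1",
    valueInputOption="RAW",
    body={"values": rows},
).execute()
```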
  12. Hajra, A. et al.: Enriching scientific publications from LOD repositories through word embeddings approach (2016) 0.00
    0.0038404248 = product of:
      0.019202124 = sum of:
        0.019202124 = product of:
          0.057606373 = sum of:
            0.057606373 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.057606373 = score(doc=3281,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.38690117 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3281)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  13. Mora-Mcginity, M. et al.: MusicWeb: music discovery with open linked semantic metadata (2016) 0.00
    0.0038404248 = product of:
      0.019202124 = sum of:
        0.019202124 = product of:
          0.057606373 = sum of:
            0.057606373 = weight(_text_:22 in 3282) [ClassicSimilarity], result of:
              0.057606373 = score(doc=3282,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.38690117 = fieldWeight in 3282, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3282)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  14. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.00
    0.003727935 = product of:
      0.018639674 = sum of:
        0.018639674 = weight(_text_:system in 3965) [ClassicSimilarity], result of:
          0.018639674 = score(doc=3965,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 3965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3965)
      0.2 = coord(1/5)
    
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD), along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entity-relationship model of FRBR in Resource Description Framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organization System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA - Resource Description and Access, REICAT (the new Italian cataloguing rules) and the CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community and is now seeking extension to the archives community, with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between metadata models from the information communities, including publishers. The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.
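    To make the idea of publishing such element sets and value vocabularies on the Semantic Web concrete, here is a small rdflib sketch that declares one element as an RDF property and one controlled value as a SKOS concept, using RDFS and SKOS as the abstract mentions. The namespace and labels are illustrative placeholders, not the published IFLA namespaces or URIs.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/isbd-like/")  # placeholder, not IFLA's namespace

g = Graph()
g.bind("skos", SKOS)

# An element declared as an RDF property, in the spirit of ISBD/FRBR element sets
g.add((EX.hasTitleProper, RDF.type, RDF.Property))
g.add((EX.hasTitleProper, RDFS.label, Literal("has title proper", lang="en")))

# A controlled value published as a SKOS concept
g.add((EX.printedText, RDF.type, SKOS.Concept))
g.add((EX.printedText, SKOS.prefLabel, Literal("printed text", lang="en")))
print(g.serialize(format="turtle"))
```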
  15. Razum, M.; Schwichtenberg, F.: Metadatenkonzept für dynamische Daten : BW-eLabs Report (2012) 0.00
    0.003727935 = product of:
      0.018639674 = sum of:
        0.018639674 = weight(_text_:system in 994) [ClassicSimilarity], result of:
          0.018639674 = score(doc=994,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 994, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=994)
      0.2 = coord(1/5)
    
    Abstract
    1.1. Dynamic data - In the BW-eLabs project, "dynamic data" means the research data produced in the course of (virtual or remotely conducted) experiments, mostly in the form of measurement values. For the FMF (Freiburger Materialforschungsinstitut) these are predominantly spectra; for the ITO (Institut für technische Optik an der Universität Stuttgart), holograms. An important aspect of research data, and thus also of dynamic data, is the linking of measurement values with calibration and configuration data; only in this overall view can dynamic data be interpreted meaningfully. Furthermore, and this is precisely the dynamic aspect of this kind of data, the view of the data changes over the course of the scientific work process. The raw data captured by the system directly from the instrument in the lab can be visualized and/or aggregated in subsequent analysis steps. These steps produce new versions of the data, or derived data objects, which are in turn linked to the source data. One goal of BW-eLabs is not only to make this dynamic data available to the researchers within a project (e.g. the locally working scientist and his or her remotely working colleague), but also to assign selected data objects a persistent identifier so that they can subsequently be published and thus made citable.
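    A minimal sketch of the data model implied above, linking raw measurements to calibration data, deriving new data objects from them, and attaching a persistent identifier only to selected objects, could look like the following; all identifiers and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataObject:
    object_id: str
    kind: str                              # e.g. "raw", "visualization", "aggregate"
    calibration_id: Optional[str] = None   # link to calibration/configuration data
    derived_from: List[str] = field(default_factory=list)
    pid: Optional[str] = None              # persistent identifier, assigned on publication

raw = DataObject("spec-001-raw", "raw", calibration_id="cal-2012-03")
plot = DataObject("spec-001-plot", "visualization", derived_from=[raw.object_id])

# Only the selected, publishable object receives a PID and becomes citable
plot.pid = "hdl:00000/example-spec-001"    # hypothetical handle
print(plot.derived_from, plot.pid)
```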
  16. Managing metadata in web-scale discovery systems (2016) 0.00
    0.003727935 = product of:
      0.018639674 = sum of:
        0.018639674 = weight(_text_:system in 3336) [ClassicSimilarity], result of:
          0.018639674 = score(doc=3336,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 3336, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
      0.2 = coord(1/5)
    
    Abstract
    This book shows you how to harness the power of linked data and web-scale discovery systems to manage and link widely varied content across your library collection. Libraries are increasingly using web-scale discovery systems to help clients find a wide assortment of library materials, including books, journal articles, special collections, archival collections, videos, music and open access collections. Depending on the library material catalogued, the discovery system might need to negotiate different metadata standards, such as AACR, RDA, RAD, FOAF, VRA Core, METS, MODS, RDF and more. In Managing Metadata in Web-Scale Discovery Systems, editor Louise Spiteri and a range of international experts show you how to:
      • maximize the effectiveness of web-scale discovery systems
      • provide a smooth and seamless discovery experience to your users
      • help users conduct searches that yield relevant results
      • manage the sheer volume of items to which you can provide access, so your users can actually find what they need
      • maintain shared records that reflect the needs, languages, and identities of culturally and ethnically varied communities
      • manage metadata both within, across, and outside library discovery tools by converting your library metadata to linked open data that all systems can access
      • manage user-generated metadata from external services such as Goodreads and LibraryThing
      • mine user-generated metadata to better serve your users in areas such as collection development or readers' advisory.
    The book will be essential reading for cataloguers, technical services and systems librarians, and library and information science students studying modules on metadata, cataloguing, systems design, data management, and digital libraries. The book will also be of interest to those managing metadata in archives, museums and other cultural heritage institutions.
  17. Pomerantz, J.: Metadata (2015) 0.00
    0.003727935 = product of:
      0.018639674 = sum of:
        0.018639674 = weight(_text_:system in 3800) [ClassicSimilarity], result of:
          0.018639674 = score(doc=3800,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 3800, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3800)
      0.2 = coord(1/5)
    
    Abstract
    When "metadata" became breaking news, appearing in stories about surveillance by the National Security Agency, many members of the public encountered this once-obscure term from information science for the first time. Should people be reassured that the NSA was "only" collecting metadata about phone calls -- information about the caller, the recipient, the time, the duration, the location -- and not recordings of the conversations themselves? Or does phone call metadata reveal more than it seems? In this book, Jeffrey Pomerantz offers an accessible and concise introduction to metadata. In the era of ubiquitous computing, metadata has become infrastructural, like the electrical grid or the highway system. We interact with it or generate it every day. It is not, Pomerantz tell us, just "data about data." It is a means by which the complexity of an object is represented in a simpler form. For example, the title, the author, and the cover art are metadata about a book. When metadata does its job well, it fades into the background; everyone (except perhaps the NSA) takes it for granted. Pomerantz explains what metadata is, and why it exists. He distinguishes among different types of metadata -- descriptive, administrative, structural, preservation, and use -- and examines different users and uses of each type. He discusses the technologies that make modern metadata possible, and he speculates about metadata's future. By the end of the book, readers will see metadata everywhere. Because, Pomerantz warns us, it's metadata's world, and we are just living in it.
  18. Maron, D.; Feinberg, M.: What does it mean to adopt a metadata standard? : a case study of Omeka and the Dublin Core (2018) 0.00
    0.003727935 = product of:
      0.018639674 = sum of:
        0.018639674 = weight(_text_:system in 4248) [ClassicSimilarity], result of:
          0.018639674 = score(doc=4248,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 4248, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=4248)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The purpose of this paper is to employ a case study of the Omeka content management system to demonstrate how the adoption and implementation of a metadata standard (in this case, Dublin Core) can result in contrasting rhetorical arguments regarding metadata utility, quality, and reliability. In the Omeka example, the authors illustrate a conceptual disconnect in how two metadata stakeholders - standards creators and standards users - operationalize metadata quality. For standards creators such as the Dublin Core community, metadata quality involves implementing a standard properly, according to established usage principles; in contrast, for standards users like Omeka, metadata quality involves mere adoption of the standard, with little consideration of proper usage and accompanying principles. Design/methodology/approach - The paper uses an approach based on rhetorical criticism. It aims to establish whether Omeka's given ends (the position that Omeka claims to take regarding Dublin Core) align with Omeka's guiding ends (Omeka's actual argument regarding Dublin Core). To make this assessment, the paper examines both textual evidence (what Omeka says) and material-discursive evidence (what Omeka does). Findings - The evidence shows that, while Omeka appears to argue that adopting the Dublin Core is an integral part of Omeka's mission, the platform's lack of support for Dublin Core implementation makes an opposing argument. Ultimately, Omeka argues that the appearance of adopting a standard is more important than its careful implementation. Originality/value - This study contributes to our understanding of how metadata standards are understood and used in practice. The misalignment between Omeka's position and the goals of the Dublin Core community suggests that Omeka, and some portion of its users, do not value metadata interoperability and aggregation in the same way that the Dublin Core community does. This indicates that, although certain values regarding standards adoption may be pervasive in the metadata community, these values are not equally shared among all stakeholders in a digital library ecosystem. The way that standards creators (Dublin Core) understand what it means to "adopt a standard" is different from the way that standards users (Omeka) understand what it means to "adopt a standard."
  19. Taniguchi, S.: Understanding RDA as a DC application profile (2013) 0.00
    0.0031002287 = product of:
      0.015501143 = sum of:
        0.015501143 = product of:
          0.04650343 = sum of:
            0.04650343 = weight(_text_:29 in 1906) [ClassicSimilarity], result of:
              0.04650343 = score(doc=1906,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.31092256 = fieldWeight in 1906, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1906)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 5.2015 19:05:09
  20. Alves dos Santos, E.; Mucheroni, M.L.: VIAF and OpenCitations : cooperative work as a strategy for information organization in the linked data era (2018) 0.00
    0.0030723398 = product of:
      0.015361699 = sum of:
        0.015361699 = product of:
          0.046085097 = sum of:
            0.046085097 = weight(_text_:22 in 4826) [ClassicSimilarity], result of:
              0.046085097 = score(doc=4826,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.30952093 = fieldWeight in 4826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4826)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    18. 1.2019 19:13:22

Languages

  • e 53
  • d 6

Types

  • a 51
  • el 9
  • m 5
  • s 3
  • r 1