Search (50 results, page 1 of 3)

  • × language_ss:"e"
  • × theme_ss:"Metadaten"
  • × year_i:[2010 TO 2020}
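The active filters above are Solr filter queries; the year range uses Solr's mixed-bracket syntax, i.e. 2010 inclusive to 2020 exclusive. Below is a minimal sketch of an equivalent request, assuming a standard Solr select handler; the endpoint URL and core name are hypothetical, and only the field names and values are taken from the filters shown.

```python
import requests

# Hypothetical endpoint; field names and values follow the facet filters above.
SOLR_SELECT = "http://localhost:8983/solr/literature/select"

params = {
    "q": "*:*",
    "fq": [                          # each active filter becomes its own fq parameter
        'language_ss:"e"',
        'theme_ss:"Metadaten"',
        "year_i:[2010 TO 2020}",     # "[" = inclusive lower bound, "}" = exclusive upper bound
    ],
    "rows": 20,
    "debugQuery": "true",            # asks Solr for the per-document score explanations
}

response = requests.get(SOLR_SELECT, params=params)
print(response.json()["response"]["numFound"])   # total hits, reported as 50 above
```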
  1. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.07
    0.072361186 = product of:
      0.14472237 = sum of:
        0.11292135 = weight(_text_:term in 3280) [ClassicSimilarity], result of:
          0.11292135 = score(doc=3280,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.5155283 = fieldWeight in 3280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.078125 = fieldNorm(doc=3280)
        0.031801023 = product of:
          0.063602045 = sum of:
            0.063602045 = weight(_text_:22 in 3280) [ClassicSimilarity], result of:
              0.063602045 = score(doc=3280,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.38690117 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3280)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
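The indented breakdown shown with each hit is a Lucene "explain" tree for the ClassicSimilarity (TF-IDF) ranking: for every matching query term, a query weight (idf x queryNorm) is multiplied by a field weight (tf x idf x fieldNorm), coord factors scale the result by the fraction of query clauses that matched, and the parts are summed. The following minimal sketch re-derives the score of result 1 from the numbers printed in its tree; tiny deviations come from the rounding of the displayed intermediates.

```python
from math import sqrt

# Figures copied from the explain tree of result 1 (doc 3280).
QUERY_NORM = 0.04694356
FIELD_NORM = 0.078125
TERMS = {
    "term": {"idf": 4.66603,   "freq": 2.0},
    "22":   {"idf": 3.5018296, "freq": 2.0},
}

def term_score(idf, freq):
    query_weight = idf * QUERY_NORM                 # idf(t) * queryNorm
    field_weight = sqrt(freq) * idf * FIELD_NORM    # tf(t in d) * idf(t) * fieldNorm(d)
    return query_weight * field_weight

# The "22" clause sits behind a coord(1/2); the overall sum is scaled by coord(2/4).
score = (term_score(**TERMS["term"]) + term_score(**TERMS["22"]) * 0.5) * 0.5
print(score)   # ~0.0723612, i.e. the displayed 0.072361186 up to rounding
```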
  2. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.03
    0.032109328 = product of:
      0.064218655 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 2192) [ClassicSimilarity], result of:
              0.033293735 = score(doc=2192,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 2192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.25 = coord(1/4)
        0.055895224 = product of:
          0.11179045 = sum of:
            0.11179045 = weight(_text_:assessment in 2192) [ClassicSimilarity], result of:
              0.11179045 = score(doc=2192,freq=4.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.43132967 = fieldWeight in 2192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
  3. Stiller, J.; Olensky, M.; Petras, V.: ¬A framework for the evaluation of automatic metadata enrichments (2014) 0.03
    0.031474918 = product of:
      0.12589967 = sum of:
        0.12589967 = weight(_text_:frequency in 1587) [ClassicSimilarity], result of:
          0.12589967 = score(doc=1587,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.45543438 = fieldWeight in 1587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1587)
      0.25 = coord(1/4)
    
    Abstract
    Automatic enrichment of collections connects data to vocabularies, which supports the contextualization of content and adds searchable text to metadata. The paper introduces a framework of four dimensions (frequency, coverage, relevance and error rate) that measure both the suitability of the enrichment for the object and the enrichments' contribution to search success. To verify the framework, it is applied to the evaluation of automatic enrichments in the digital library Europeana. The analysis of 100 result sets and their corresponding queries (1,121 documents total) shows the framework is a valuable tool for guiding enrichments and determining the value of enrichment efforts.
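The framework's four dimensions are only named in this abstract, not operationalized. The sketch below shows one plausible way to compute them over a set of enrichment records; the concrete definitions of frequency, coverage, relevance and error rate are assumptions for illustration, not the authors' measures.

```python
from dataclasses import dataclass

@dataclass
class Enrichment:
    document_id: str
    correct: bool     # judged against some gold standard (assumed setup)
    relevant: bool    # judged as contributing to search success (assumed setup)

def evaluate(enrichments, total_documents):
    """Toy operationalization of the four dimensions; not the paper's formulas."""
    n = len(enrichments)
    enriched_docs = {e.document_id for e in enrichments}
    return {
        "frequency": n / total_documents,                  # enrichments per document in the collection
        "coverage": len(enriched_docs) / total_documents,  # share of documents that received enrichments
        "relevance": sum(e.relevant for e in enrichments) / n if n else 0.0,
        "error_rate": sum(not e.correct for e in enrichments) / n if n else 0.0,
    }

sample = [Enrichment("d1", True, True), Enrichment("d1", True, False), Enrichment("d2", False, False)]
print(evaluate(sample, total_documents=10))
# {'frequency': 0.3, 'coverage': 0.2, 'relevance': 0.33..., 'error_rate': 0.33...}
```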
  4. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.03
    0.031173116 = product of:
      0.06234623 = sum of:
        0.005885557 = product of:
          0.023542227 = sum of:
            0.023542227 = weight(_text_:based in 4048) [ClassicSimilarity], result of:
              0.023542227 = score(doc=4048,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16644597 = fieldWeight in 4048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4048)
          0.25 = coord(1/4)
        0.056460675 = weight(_text_:term in 4048) [ClassicSimilarity], result of:
          0.056460675 = score(doc=4048,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
    Design/methodology/approach: The DSP-PROV model is developed by applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study for applying the DSP-PROV model. Finally, the paper evaluates the proposed model by comparing the formal provenance description in DSP-PROV with a semi-formal change log description in English.
    Findings: Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time.
    Research limitations/implications: The DSP-PROV model is applicable to keeping track of the structural changes of a metadata schema over time. Provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered.
    Originality/value: This study proposes a simple model for provenance description of structural features of metadata schemas, based on a few standards widely accepted on the Web, and shows the advantage of the proposed model over conventional semi-formal provenance description.
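The abstract describes DSP-PROV only as an application of W3C PROV to Dublin Core Application Profiles. The snippet below is a generic PROV-O sketch of recording one structural revision of a schema with rdflib; the namespace, resource names and the particular PROV properties are illustrative assumptions, not the DSP-PROV vocabulary itself.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/profile/")   # hypothetical namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

v1, v2 = EX["dsp-v1"], EX["dsp-v2"]             # two versions of an application profile
revision = EX["revision-2018-03"]               # the structural change between them

g.add((v1, RDF.type, PROV.Entity))
g.add((v2, RDF.type, PROV.Entity))
g.add((revision, RDF.type, PROV.Activity))

g.add((v2, PROV.wasRevisionOf, v1))             # v2 supersedes v1
g.add((v2, PROV.wasGeneratedBy, revision))
g.add((revision, PROV.used, v1))
g.add((revision, PROV.endedAtTime,
       Literal("2018-03-01T00:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```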
  5. Kopácsi, S.; Hudak, R.; Ganguly, R.: Implementation of a classification server to support metadata organization for long term preservation systems (2017) 0.02
    0.019761236 = product of:
      0.079044946 = sum of:
        0.079044946 = weight(_text_:term in 3915) [ClassicSimilarity], result of:
          0.079044946 = score(doc=3915,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.36086982 = fieldWeight in 3915, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3915)
      0.25 = coord(1/4)
    
  6. Maron, D.; Feinberg, M.: What does it mean to adopt a metadata standard? : a case study of Omeka and the Dublin Core (2018) 0.02
    0.01816378 = product of:
      0.03632756 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 4248) [ClassicSimilarity], result of:
              0.018833783 = score(doc=4248,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 4248, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4248)
          0.25 = coord(1/4)
        0.031619113 = product of:
          0.063238226 = sum of:
            0.063238226 = weight(_text_:assessment in 4248) [ClassicSimilarity], result of:
              0.063238226 = score(doc=4248,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.2439969 = fieldWeight in 4248, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4248)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: The purpose of this paper is to employ a case study of the Omeka content management system to demonstrate how the adoption and implementation of a metadata standard (in this case, Dublin Core) can result in contrasting rhetorical arguments regarding metadata utility, quality, and reliability. In the Omeka example, the authors illustrate a conceptual disconnect in how two metadata stakeholders - standards creators and standards users - operationalize metadata quality. For standards creators such as the Dublin Core community, metadata quality involves implementing a standard properly, according to established usage principles; in contrast, for standards users like Omeka, metadata quality involves mere adoption of the standard, with little consideration of proper usage and accompanying principles.
    Design/methodology/approach: The paper uses an approach based on rhetorical criticism. It aims to establish whether Omeka's given ends (the position that Omeka claims to take regarding Dublin Core) align with Omeka's guiding ends (Omeka's actual argument regarding Dublin Core). To make this assessment, the paper examines both textual evidence (what Omeka says) and material-discursive evidence (what Omeka does).
    Findings: The evidence shows that, while Omeka appears to argue that adopting the Dublin Core is an integral part of Omeka's mission, the platform's lack of support for Dublin Core implementation makes an opposing argument. Ultimately, Omeka argues that the appearance of adopting a standard is more important than its careful implementation.
    Originality/value: This study contributes to our understanding of how metadata standards are understood and used in practice. The misalignment between Omeka's position and the goals of the Dublin Core community suggests that Omeka, and some portion of its users, do not value metadata interoperability and aggregation in the same way that the Dublin Core community does. This indicates that, although certain values regarding standards adoption may be pervasive in the metadata community, these values are not equally shared amongst all stakeholders in a digital library ecosystem. The way that standards creators (Dublin Core) understand what it means to "adopt a standard" differs from the way that standards users (Omeka) understand it.
  7. White, H.: Examining scientific vocabulary : mapping controlled vocabularies with free text keywords (2013) 0.02
    0.017428853 = product of:
      0.034857705 = sum of:
        0.009416891 = product of:
          0.037667565 = sum of:
            0.037667565 = weight(_text_:based in 1953) [ClassicSimilarity], result of:
              0.037667565 = score(doc=1953,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.26631355 = fieldWeight in 1953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1953)
          0.25 = coord(1/4)
        0.025440816 = product of:
          0.05088163 = sum of:
            0.05088163 = weight(_text_:22 in 1953) [ClassicSimilarity], result of:
              0.05088163 = score(doc=1953,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.30952093 = fieldWeight in 1953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1953)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Scientific repositories create a new environment for studying traditional information science issues. The interaction between indexing terms provided by users and controlled vocabularies continues to be an area of debate and study. This article reports and analyzes findings from a study that mapped the relationships between free text keywords and controlled vocabulary terms used in the sciences. Based on this study's findings, recommendations are made about which vocabularies may be better to use in scientific data repositories.
    Date
    29. 5.2015 19:09:22
  8. Mayernik, M.S.; Acker, A.: Tracing the traces : the critical role of metadata within networked communications (2018) 0.02
    0.016938202 = product of:
      0.06775281 = sum of:
        0.06775281 = weight(_text_:term in 4013) [ClassicSimilarity], result of:
          0.06775281 = score(doc=4013,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.309317 = fieldWeight in 4013, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.046875 = fieldNorm(doc=4013)
      0.25 = coord(1/4)
    
    Abstract
    The information sciences have traditionally been at the center of metadata-focused research. The US National Security Agency (NSA) intelligence documents revealed by Edward Snowden in June of 2013 brought the term "metadata" into the public consciousness. Surprisingly little discussion in the information sciences has since occurred on the nature and importance of metadata within networked communication systems. The collection of digital metadata impacts the ways that people experience social and technical communication. Without such metadata, networked communication cannot exist. The NSA leaks, and numerous recent hacks of corporate and government communications, point to metadata as objects of new scholarly inquiry. If we are to engage in meaningful discussions about our digital traces, or make informed decisions about new policies and technologies, it is essential to develop theoretical and empirical frameworks that account for digital metadata. This opinion paper presents 5 key sociotechnical characteristics of metadata within digital networks that would benefit from stronger engagement by the information sciences.
  9. Baker, T.: Dublin Core Application Profiles : current approaches (2010) 0.01
    0.014534365 = product of:
      0.02906873 = sum of:
        0.009988121 = product of:
          0.039952483 = sum of:
            0.039952483 = weight(_text_:based in 3737) [ClassicSimilarity], result of:
              0.039952483 = score(doc=3737,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.28246817 = fieldWeight in 3737, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3737)
          0.25 = coord(1/4)
        0.019080611 = product of:
          0.038161222 = sum of:
            0.038161222 = weight(_text_:22 in 3737) [ClassicSimilarity], result of:
              0.038161222 = score(doc=3737,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23214069 = fieldWeight in 3737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3737)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Dublin Core Metadata Initiative currently defines a Dublin Core Application Profile as a set of specifications about the metadata design of a particular application or for a particular domain or community of users. The current approach to application profiles is summarized in the Singapore Framework for Application Profiles [SINGAPORE-FRAMEWORK] (see Figure 1). While the approach originally developed as a means of specifying customized applications based on the fifteen elements of the Dublin Core Element Set (e.g., Title, Date, Subject), it has evolved into a generic approach to creating metadata that meets specific local requirements while integrating coherently with other RDF-based metadata.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
  10. Syn, S.Y.; Spring, M.B.: Finding subject terms for classificatory metadata from user-generated social tags (2013) 0.01
    0.014115169 = product of:
      0.056460675 = sum of:
        0.056460675 = weight(_text_:term in 745) [ClassicSimilarity], result of:
          0.056460675 = score(doc=745,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 745, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=745)
      0.25 = coord(1/4)
    
    Abstract
    With the increasing popularity of social tagging systems, the potential for using social tags as a source of metadata is being explored. Social tagging systems can simplify the involvement of a large number of users and improve the metadata-generation process. Current research is exploring social tagging systems as a mechanism to allow nonprofessional catalogers to participate in metadata generation. Because social tags are not from controlled vocabularies, there are issues that have to be addressed in finding quality terms to represent the content of a resource. This research explores ways to obtain a set of tags representing the resource from the tags provided by users. Two metrics are introduced. Annotation Dominance (AD) is a measure of the extent to which a tag term is agreed to by users. Cross Resources Annotation Discrimination (CRAD) is a measure of a tag's potential to classify a collection. It is designed to remove tags that are used too broadly or narrowly. Using the proposed measurements, the research selects important tags (meta-terms) and removes meaningless ones (tag noise) from the tags provided by users. To evaluate the proposed approach to find classificatory metadata candidates, we rely on expert users' relevance judgments comparing suggested tag terms and expert metadata terms. The results suggest that processing of user tags using the two measurements successfully identifies the terms that represent the topic categories of web resource content. The suggested tag terms can be further examined in various usages as semantic metadata for the resources.
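Annotation Dominance and CRAD are described only verbally in this abstract. The toy sketch below shows one way such measures could be computed from (user, resource, tag) triples; the formulas, in particular the IDF-style stand-in for CRAD, are assumptions for illustration rather than the measures defined in the paper.

```python
from math import log

# Toy annotation data: (user, resource, tag) triples.
annotations = [
    ("u1", "r1", "metadata"), ("u2", "r1", "metadata"), ("u3", "r1", "web"),
    ("u1", "r2", "metadata"), ("u2", "r2", "linkeddata"),
]

def annotation_dominance(tag, resource):
    """Share of a resource's annotators who used the tag (assumed reading of AD)."""
    users_on_resource = {u for u, r, _ in annotations if r == resource}
    users_with_tag = {u for u, r, t in annotations if r == resource and t == tag}
    return len(users_with_tag) / len(users_on_resource) if users_on_resource else 0.0

def crad(tag):
    """IDF-style discrimination score, used here as an illustrative stand-in for CRAD."""
    resources = {r for _, r, _ in annotations}
    tagged = {r for _, r, t in annotations if t == tag}
    return log(len(resources) / len(tagged)) if tagged else 0.0

print(annotation_dominance("metadata", "r1"))   # 2 of 3 annotators -> 0.67
print(crad("metadata"))                         # used on every resource -> 0.0 (no discrimination)
```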
  11. Kleeck, D. Van; Langford, G.; Lundgren, J.; Nakano, H.; O'Dell, A.J.; Shelton, T.: Managing bibliographic data quality in a consortial academic library : a case study (2016) 0.01
    0.013833362 = product of:
      0.055333447 = sum of:
        0.055333447 = product of:
          0.11066689 = sum of:
            0.11066689 = weight(_text_:assessment in 5133) [ClassicSimilarity], result of:
              0.11066689 = score(doc=5133,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.4269946 = fieldWeight in 5133, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5133)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This article presents a case study of quality management for print and electronic resource metadata, summarizing problems and solutions encountered by the Cataloging and Discovery Services Department in the George A. Smathers Libraries at the University of Florida. The authors discuss national, state, and local standards for cataloging, automated and manual record enhancements for data, user feedback, and statewide consortial factors. Findings show that adherence to standards, proactive cleanup of data via manual processes and automated tools, collaboration with vendors and stakeholders, and continual assessment of workflows are key to the management of bibliographic data quality in consortial academic libraries.
  12. Häusner, E.-M.; Sommerland, Y.: Assessment of metadata quality of the Swedish National Bibliography through mapping user awareness (2018) 0.01
    0.013833362 = product of:
      0.055333447 = sum of:
        0.055333447 = product of:
          0.11066689 = sum of:
            0.11066689 = weight(_text_:assessment in 5169) [ClassicSimilarity], result of:
              0.11066689 = score(doc=5169,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.4269946 = fieldWeight in 5169, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5169)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  13. Chou, C.: Purpose-driven assessment of cataloging and metadata services : transforming broken links into linked data (2019) 0.01
    0.013833362 = product of:
      0.055333447 = sum of:
        0.055333447 = product of:
          0.11066689 = sum of:
            0.11066689 = weight(_text_:assessment in 5280) [ClassicSimilarity], result of:
              0.11066689 = score(doc=5280,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.4269946 = fieldWeight in 5280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5280)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  14. Mi, X.M.; Pollock, B.M.: Metadata schema to facilitate linked data for 3D digital models of cultural heritage collections : a University of South Florida Libraries case study (2018) 0.01
    0.011857167 = product of:
      0.047428668 = sum of:
        0.047428668 = product of:
          0.094857335 = sum of:
            0.094857335 = weight(_text_:assessment in 5171) [ClassicSimilarity], result of:
              0.094857335 = score(doc=5171,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.36599535 = fieldWeight in 5171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5171)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The University of South Florida Libraries house and provide access to a collection of cultural heritage and 3D digital models. In an effort to provide greater access to these collections, a linked data project has been implemented. A metadata schema for the 3D cultural heritage objects which uses linked data is an excellent way to share these collections with other repositories, thus gaining global exposure and access to these valuable resources. This article will share the process of building the 3D cultural heritage metadata model as well as an assessment of the model and recommendations for future linked data projects.
  15. DC-2013: International Conference on Dublin Core and Metadata Applications : Online Proceedings (2013) 0.01
    0.011292135 = product of:
      0.04516854 = sum of:
        0.04516854 = weight(_text_:term in 1076) [ClassicSimilarity], result of:
          0.04516854 = score(doc=1076,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.20621133 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=1076)
      0.25 = coord(1/4)
    
    Abstract
    The collocated conferences for DC-2013 and iPRES-2013 in Lisbon attracted 392 participants from over 37 countries. In addition to the Tuesday through Thursday conference days comprising peer-reviewed paper and special sessions, 223 participants attended pre-conference tutorials and 246 participated in post-conference workshops for the collocated events. The peer-reviewed papers and presentations are available on the conference website Presentation page (URLs above). In sum, it was a great conference. In addition to links to PDFs of papers, project reports and posters (and their associated presentations), the published proceedings include presentation PDFs for the following:
    KEYNOTES
    -- Darling, we need to talk - Gildas Illien
    TUTORIALS
    -- Ivan Herman: "Introduction to Linked Open Data (LOD)"
    -- Steven Miller: "Introduction to Ontology Concepts and Terminology"
    -- Kai Eckert: "Metadata Provenance"
    -- Daniel Garjio: "The W3C Provenance Ontology"
    SPECIAL SESSIONS
    -- "Application Profiles as an Alternative to OWL Ontologies"
    -- "Long-term Preservation and Governance of RDF Vocabularies (W3C Sponsored)"
    -- "Data Enrichment and Transformation in the LOD Context: Poor & Popular vs Rich & Lonely--Can't we achieve both?"
    -- "Why Schema.org?"
  16. Pomerantz, J.: Metadata (2015) 0.01
    0.011292135 = product of:
      0.04516854 = sum of:
        0.04516854 = weight(_text_:term in 3800) [ClassicSimilarity], result of:
          0.04516854 = score(doc=3800,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.20621133 = fieldWeight in 3800, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.03125 = fieldNorm(doc=3800)
      0.25 = coord(1/4)
    
    Abstract
    When "metadata" became breaking news, appearing in stories about surveillance by the National Security Agency, many members of the public encountered this once-obscure term from information science for the first time. Should people be reassured that the NSA was "only" collecting metadata about phone calls -- information about the caller, the recipient, the time, the duration, the location -- and not recordings of the conversations themselves? Or does phone call metadata reveal more than it seems? In this book, Jeffrey Pomerantz offers an accessible and concise introduction to metadata. In the era of ubiquitous computing, metadata has become infrastructural, like the electrical grid or the highway system. We interact with it or generate it every day. It is not, Pomerantz tell us, just "data about data." It is a means by which the complexity of an object is represented in a simpler form. For example, the title, the author, and the cover art are metadata about a book. When metadata does its job well, it fades into the background; everyone (except perhaps the NSA) takes it for granted. Pomerantz explains what metadata is, and why it exists. He distinguishes among different types of metadata -- descriptive, administrative, structural, preservation, and use -- and examines different users and uses of each type. He discusses the technologies that make modern metadata possible, and he speculates about metadata's future. By the end of the book, readers will see metadata everywhere. Because, Pomerantz warns us, it's metadata's world, and we are just living in it.
  17. Hajra, A. et al.: Enriching scientific publications from LOD repositories through word embeddings approach (2016) 0.01
    0.007950256 = product of:
      0.031801023 = sum of:
        0.031801023 = product of:
          0.063602045 = sum of:
            0.063602045 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.063602045 = score(doc=3281,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.38690117 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  18. Mora-Mcginity, M. et al.: MusicWeb: music discovery with open linked semantic metadata (2016) 0.01
    0.007950256 = product of:
      0.031801023 = sum of:
        0.031801023 = product of:
          0.063602045 = sum of:
            0.063602045 = weight(_text_:22 in 3282) [ClassicSimilarity], result of:
              0.063602045 = score(doc=3282,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.38690117 = fieldWeight in 3282, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3282)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  19. Alves dos Santos, E.; Mucheroni, M.L.: VIAF and OpenCitations : cooperative work as a strategy for information organization in the linked data era (2018) 0.01
    0.006360204 = product of:
      0.025440816 = sum of:
        0.025440816 = product of:
          0.05088163 = sum of:
            0.05088163 = weight(_text_:22 in 4826) [ClassicSimilarity], result of:
              0.05088163 = score(doc=4826,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.30952093 = fieldWeight in 4826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4826)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    18. 1.2019 19:13:22
  20. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.01
    0.0055651786 = product of:
      0.022260714 = sum of:
        0.022260714 = product of:
          0.04452143 = sum of:
            0.04452143 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
              0.04452143 = score(doc=2606,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.2708308 = fieldWeight in 2606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    10. 9.2000 17:38:22

Types

  • a 44
  • el 7
  • m 4
  • s 3