Search (12 results, page 1 of 1)

  • theme_ss:"Metadaten"
  • year_i:[2020 TO 2030}
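  The two facet filters above use Lucene/Solr query syntax: "_ss" and "_i" are typical Solr dynamic-field suffixes (multi-valued string and integer), and year_i:[2020 TO 2030} is a range whose lower bound is inclusive and upper bound exclusive. A minimal sketch of how such filters could be sent to a Solr backend; the endpoint, core name, and row count are assumptions, since the page does not reveal them:

      import requests  # third-party HTTP client

      # Hypothetical Solr endpoint; the catalogue's real host and core are not shown.
      SOLR_URL = "http://localhost:8983/solr/catalogue/select"

      params = {
          "q": "*:*",
          # Each active facet filter becomes one Solr filter query (fq).
          # Mixed brackets: [2020 TO 2030} includes 2020 but excludes 2030.
          "fq": ['theme_ss:"Metadaten"', "year_i:[2020 TO 2030}"],
          "rows": 12,
          "debugQuery": "true",  # asks Solr for the "explain" scoring trees shown below
          "wt": "json",
      }

      docs = requests.get(SOLR_URL, params=params).json()["response"]["docs"]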
  1. Sewing, S.: Bestandserhaltung und Archivierung : Koordinierung auf der Basis eines gemeinsamen Metadatenformates in den deutschen und österreichischen Bibliotheksverbünden (2021) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 266) [ClassicSimilarity], result of:
          0.031038022 = score(doc=266,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 266, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=266)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 266) [ClassicSimilarity], result of:
              0.038066804 = score(doc=266,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 266, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=266)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22.05.2021 12:43:05
    Source
    Open Password. 2021, Nr.928 vom 31.05.2021 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzI5OSwiMjc2N2ZlZjQwMDUwIiwwLDAsMjY4LDFd]
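    The indented tree under each hit is Lucene's ClassicSimilarity "explain" output: one TF-IDF weight per matching query term, scaled by coordination factors for the fraction of query clauses that matched (the 0.03 in the heading is the tree's root value, 0.025035713, rounded). A minimal sketch reproducing hit 1's arithmetic; the formulas are ClassicSimilarity's, and every statistic is copied from the tree above:

        import math

        def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
            """One weight(...) node: queryWeight x fieldWeight."""
            tf = math.sqrt(freq)                             # 1.4142135 for freq=2
            idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.1620505 for "data"
            query_weight = idf * query_norm                  # 0.14807065
            field_weight = tf * idf * field_norm             # 0.2096163
            return query_weight * field_weight               # 0.031038022

        # Statistics copied from the explain tree of hit 1 (doc 266).
        data_term = term_score(2.0, 5088, 44218, 0.046827413, 0.046875)
        term_22 = term_score(2.0, 3622, 44218, 0.046827413, 0.046875)

        # coord(1/2) halves the "22" clause; coord(2/4) scales the whole sum,
        # because only 2 of the 4 query clauses matched this document.
        score = (data_term + term_22 * 0.5) * (2 / 4)
        print(score)  # ~0.025035713, the root value of the tree above
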
  2. Laparra, E.; Binford-Walsh, A.; Emerson, K.; Miller, M.L.; López-Hoffman, L.; Currim, F.; Bethard, S.: Addressing structural hurdles for metadata extraction from environmental impact statements (2023) 0.02
    0.023530604 = product of:
      0.04706121 = sum of:
        0.02586502 = weight(_text_:data in 1042) [ClassicSimilarity], result of:
          0.02586502 = score(doc=1042,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 1042, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1042)
        0.021196188 = product of:
          0.042392377 = sum of:
            0.042392377 = weight(_text_:processing in 1042) [ClassicSimilarity], result of:
              0.042392377 = score(doc=1042,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.22363065 = fieldWeight in 1042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1042)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Natural language processing techniques can be used to analyze the linguistic content of a document and extract missing pieces of metadata. However, accurate metadata extraction may depend not only on the linguistic content but also on structural problems such as extremely large documents, unordered multi-file documents, and inconsistency in manually labeled metadata. In this work, we start from two standard machine learning solutions for extracting pieces of metadata from Environmental Impact Statements, environmental policy documents that are regularly produced under the US National Environmental Policy Act of 1969. We present a series of experiments evaluating how these standard approaches are affected by different issues derived from real-world data. We find that metadata extraction can be strongly influenced by nonlinguistic factors such as document length and volume ordering, and that standard machine learning solutions often do not scale well to long documents. We demonstrate how such solutions can be better adapted to these scenarios and conclude with suggestions for other NLP practitioners cataloging large document collections.
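    The abstract notes that standard classifiers do not scale well to long documents. One common adaptation (an assumption here, not necessarily the authors' method) is to classify fixed-size windows of the text and aggregate the per-window predictions; in this sketch, classify_window is a hypothetical stand-in for a trained model's predict function:

        from collections import Counter

        def classify_long_document(text, classify_window, window_words=400, stride=200):
            # Vote over overlapping word windows instead of truncating the document.
            words = text.split()
            if not words:
                return None
            votes = Counter()
            for start in range(0, max(len(words) - window_words, 0) + 1, stride):
                window = " ".join(words[start:start + window_words])
                votes[classify_window(window)] += 1
            return votes.most_common(1)[0][0]  # majority label across windows
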
  3. Wu, M.; Liu, Y.-H.; Brownlee, R.; Zhang, X.: Evaluating utility and automatic classification of subject metadata from Research Data Australia (2021) 0.02
    0.021947198 = product of:
      0.08778879 = sum of:
        0.08778879 = weight(_text_:data in 453) [ClassicSimilarity], result of:
          0.08778879 = score(doc=453,freq=16.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5928845 = fieldWeight in 453, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=453)
      0.25 = coord(1/4)
    
    Abstract
    In this paper, we present a case study of how well subject metadata (comprising headings from an international classification scheme) has been deployed in a national data catalogue, and how often data seekers use subject metadata when searching for data. Through an analysis of user search behaviour as recorded in search logs, we find evidence that users utilise the subject metadata for data discovery. Since approximately half of the records ingested by the catalogue did not include subject metadata at the time of harvest, we experimented with automatic subject classification approaches in order to enrich these records and to provide additional support for user search and data discovery. Our results show that automatic methods work well for well-represented categories of subject metadata, and that these categories tend to have features that distinguish them from the other categories. Our findings have implications for data catalogue providers: they should invest more effort in enhancing the quality of data records by providing adequate descriptions for records in under-represented subject categories.
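    The abstract does not specify which classifiers were tested. As a hedged illustration of the kind of automatic subject classification it describes, a TF-IDF text classifier can be trained on records that already carry subject metadata and applied to those that do not; all data below is invented:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Invented training data: record descriptions paired with subject categories.
        descriptions = [
            "rainfall and temperature measurements from coastal stations",
            "genome sequences of wheat cultivars under drought stress",
            "census microdata on household income and employment",
        ]
        subjects = ["earth sciences", "biological sciences", "economics"]

        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit(descriptions, subjects)

        # Suggest subject metadata for a record harvested without any.
        print(model.predict(["soil moisture observations from farm sensors"]))
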
  4. Koho, M.; Burrows, T.; Hyvönen, E.; Ikkala, E.; Page, K.; Ransom, L.; Tuominen, J.; Emery, D.; Fraas, M.; Heller, B.; Lewis, D.; Morrison, A.; Porte, G.; Thomson, E.; Velios, A.; Wijsman, H.: Harmonizing and publishing heterogeneous premodern manuscript metadata as Linked Open Data (2022) 0.02
    0.01828933 = product of:
      0.07315732 = sum of:
        0.07315732 = weight(_text_:data in 466) [ClassicSimilarity], result of:
          0.07315732 = score(doc=466,freq=16.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.49407038 = fieldWeight in 466, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=466)
      0.25 = coord(1/4)
    
    Abstract
    Manuscripts are a crucial form of evidence for research into all aspects of premodern European history and culture, and there are numerous databases devoted to describing them in detail. This descriptive information, however, is typically available only in separate data silos based on incompatible data models and user interfaces. As a result, it has been difficult to study manuscripts comprehensively across these various platforms. To address this challenge, a team of manuscript scholars and computer scientists worked to create "Mapping Manuscript Migrations" (MMM), a semantic portal, and a Linked Open Data service. MMM stands as a successful proof of concept for integrating distinct manuscript datasets into a shared platform for research and discovery with the potential for future expansion. This paper will discuss the major products of the MMM project: a unified data model, a repeatable data transformation pipeline, a Linked Open Data knowledge graph, and a Semantic Web portal. It will also examine the crucial importance of an iterative process of multidisciplinary collaboration embedded throughout the project, enabling humanities researchers to shape the development of a digital platform and tools, while also enabling the same researchers to ask more sophisticated and comprehensive research questions of the aggregated data.
  5. Hansson, K.; Dahlgren, A.: Open research data repositories : practices, norms, and metadata for sharing images (2022) 0.02
    0.015839024 = product of:
      0.063356094 = sum of:
        0.063356094 = weight(_text_:data in 472) [ClassicSimilarity], result of:
          0.063356094 = score(doc=472,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4278775 = fieldWeight in 472, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=472)
      0.25 = coord(1/4)
    
    Abstract
    Open research data repositories are promoted as one of the cornerstones of the open research paradigm, fostering collaboration, interoperability, and large-scale sharing and reuse. There is, however, a lack of research investigating what these sharing platforms actually share, and a more critical interface analysis of the norms and practices embedded in this datafication of academic practice is needed. This article takes image data sharing in the humanities as a case study for investigating the possibilities and constraints of five open research data repositories. By analyzing the visual and textual content of the interfaces along with their technical means for metadata, the study shows that the platforms are differentiated in terms of signifiers of research paradigms, but that beneath the rhetoric of the interface they are designed in a similar way, one that does not correspond well with image researchers' need for detailed metadata. Combined with the problem of copyright limitations, these data-sharing tools are simply not sophisticated enough when it comes to sharing and reusing images. The results also corroborate previous research showing that these tools are used not so much for sharing research data as for promoting researcher personas.
  6. Baroncini, S.; Sartini, B.; Erp, M. Van; Tomasi, F.; Gangemi, A.: Is dc:subject enough? : A landscape on iconography and iconology statements of knowledge graphs in the semantic web (2023) 0.01
    0.012671219 = product of:
      0.050684877 = sum of:
        0.050684877 = weight(_text_:data in 1030) [ClassicSimilarity], result of:
          0.050684877 = score(doc=1030,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.342302 = fieldWeight in 1030, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=1030)
      0.25 = coord(1/4)
    
    Abstract
    In the last few years, the size of Linked Open Data (LOD) describing artworks, in general or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art) historians and cultural heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts, and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs, with a focus on icon aspects.
    Design/methodology/approach: This study's analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians' theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) the suitability of the KGs' structures for describing icon information, through quantitative and qualitative assessment, and (2) their content, qualitatively assessed in terms of correctness and completeness.
    Findings: This study's results reveal several issues in the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often incomplete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas for describing icon information with the required granularity.
    Originality/value: The main contribution of this work is an overview of the actual landscape of icon information expressed in LOD. It is therefore valuable to cultural institutions, providing them with a first domain-specific data quality evaluation. Since this study's results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need to create and foster such information in order to give LOD a more thorough art-historical dimension.
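    As a small, hedged illustration of probing icon statements in a public KG (Wikidata is used here only as an example endpoint, not as a claim about the study's own KG selection), one can count how many paintings carry at least one "depicts" statement:

        from SPARQLWrapper import SPARQLWrapper, JSON  # third-party: pip install sparqlwrapper

        sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                               agent="icon-metadata-probe/0.1 (example)")
        sparql.setQuery("""
            SELECT (COUNT(DISTINCT ?painting) AS ?withDepicts) WHERE {
              ?painting wdt:P31 wd:Q3305213 .  # instance of: painting
              ?painting wdt:P180 ?subject .    # depicts: an icon-level subject
            }
        """)
        sparql.setReturnFormat(JSON)
        bindings = sparql.query().convert()["results"]["bindings"]
        print(bindings[0]["withDepicts"]["value"])
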
  7. Assfalg, R.: Metadaten (2023) 0.01
    0.0103460075 = product of:
      0.04138403 = sum of:
        0.04138403 = weight(_text_:data in 787) [ClassicSimilarity], result of:
          0.04138403 = score(doc=787,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2794884 = fieldWeight in 787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=787)
      0.25 = coord(1/4)
    
    Abstract
    Looking at records in relational database systems, at data volumes in the context of big data, at instances of common XML applications, or at reference data holdings in the field of information and documentation (IuD), one important commonality stands out: these collections all require a description of their inner structure. Such structural descriptions are, so to speak, "data about data", and in short they can be called metadata. They comprise syntax elements and, where applicable, a specification of how these syntax elements are applied.
  8. Lynch, J.D.; Gibson, J.; Han, M.-J.: Analyzing and normalizing type metadata for a large aggregated digital library (2020) 0.01
    0.009052756 = product of:
      0.036211025 = sum of:
        0.036211025 = weight(_text_:data in 5720) [ClassicSimilarity], result of:
          0.036211025 = score(doc=5720,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 5720, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5720)
      0.25 = coord(1/4)
    
    Abstract
    The Illinois Digital Heritage Hub (IDHH) gathers and enhances metadata from contributing institutions around the state of Illinois and provides this metadata to the Digital Public Library of America (DPLA) for greater access. The IDHH helps contributors shape their metadata to the standards recommended and required by the DPLA, in part by analyzing and enhancing aggregated metadata. In late 2018, the IDHH undertook a project to address a particularly problematic field, Type metadata. This paper walks through the project, detailing the process of gathering and analyzing metadata using the DPLA API and OpenRefine, data remediation through XSL transformations in conjunction with local improvements by contributing institutions, and the DPLA ingestion system's quality controls.
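    The workflow described above begins with harvesting records through the DPLA API. A minimal sketch of that first step, assuming a registered API key; the provider label and the field inspected are illustrative, not the IDHH's actual queries:

        import requests

        API_KEY = "YOUR_DPLA_API_KEY"  # obtained by registering with the DPLA

        # Fetch a page of items and inspect the often-inconsistent Type metadata.
        response = requests.get(
            "https://api.dp.la/v2/items",
            params={
                "provider.name": "Illinois Digital Heritage Hub",  # assumed label
                "page_size": 100,
                "api_key": API_KEY,
            },
        )
        for doc in response.json().get("docs", []):
            print(doc.get("sourceResource", {}).get("type"))
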
  9. Hauff-Hartig, S.: "Im Dickicht der Einzelheiten" : Herausforderungen und Lösungen für die Erschließung (2022) 0.01
    0.009052756 = product of:
      0.036211025 = sum of:
        0.036211025 = weight(_text_:data in 498) [ClassicSimilarity], result of:
          0.036211025 = score(doc=498,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 498, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=498)
      0.25 = coord(1/4)
    
    Source
    Open Password. 2022, Nr. 1026 vom 07.02.2022 [https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzQwNiwiMTY2ZjQ0NjVkNzJhIiwwLDAsMzY4LDFd]
  10. Heng, G.; Cole, T.W.; Tian, T.(C.); Han, M.-J.: Rethinking authority reconciliation process (2022) 0.01
    0.009052756 = product of:
      0.036211025 = sum of:
        0.036211025 = weight(_text_:data in 727) [ClassicSimilarity], result of:
          0.036211025 = score(doc=727,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 727, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=727)
      0.25 = coord(1/4)
    
    Abstract
    Entity identity management and name reconciliation are intrinsic to both Linked Open Data (LOD) and traditional library authority control. Does this mean that LOD sources can facilitate authority control? This Emblematica Online case study examines the utility of five LOD sources for name reconciliation, comparing design differences regarding ontologies, linking models, and entity properties. It explores the challenges of name reconciliation in the LOD environment and provides lessons learned during a semi-automated name reconciliation process. It also briefly discusses the potential values and benefits of LOD authorities to the authority reconciliation process itself and library services in general.
  11. Guerrini, M.: Metadata: the dimension of cataloging in the digital age (2022) 0.01
    0.009052756 = product of:
      0.036211025 = sum of:
        0.036211025 = weight(_text_:data in 735) [ClassicSimilarity], result of:
          0.036211025 = score(doc=735,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=735)
      0.25 = coord(1/4)
    
    Abstract
    Metadata creation is the process of recording metadata, that is, data essential to the identification and retrieval of any type of resource, including bibliographic resources. Metadata capable of identifying the characteristics of an entity have always existed. However, the triggering event that has rewritten and enhanced their value is the digital revolution. Cataloging is thus configured as an act of creating metadata. Whereas cataloging produces a catalog, that is, an ordered and searchable list of records relating to various types of resources according to a defined criterion, the metadata creation process produces the metadata of the resources themselves.
  12. Zavalin, V.: Exploration of subject and genre representation in bibliographic metadata representing works of fiction for children and young adults (2024) 0.01
    0.0077595054 = product of:
      0.031038022 = sum of:
        0.031038022 = weight(_text_:data in 1152) [ClassicSimilarity], result of:
          0.031038022 = score(doc=1152,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 1152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1152)
      0.25 = coord(1/4)
    
    Abstract
    This study examines subject and genre representation in metadata describing information resources created for children and young adult audiences. Quantitative and limited qualitative analyses were applied to WorldCat records collected in 2021 and contributed by the Children's and Young Adults' Cataloging Program at the US Library of Congress. This dataset contains records created several years prior to the point of data collection and edited by various OCLC member institutions. The findings show the level and patterns of application of these kinds of metadata, which are important for information access, with a focus on the fields, subfields, and controlled vocabularies used. The discussion of results includes a detailed evaluation of the quality (accuracy, completeness, and consistency) of genre and subject metadata.
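    A minimal sketch of the simplest measure such a study reports, the level of application of subject and genre fields, using pymarc over a file of MARC records; the filename is hypothetical, and 650 and 655 are the MARC 21 topical-subject and genre/form fields:

        from pymarc import MARCReader  # third-party: pip install pymarc

        subject_count = genre_count = total = 0
        with open("worldcat_sample.mrc", "rb") as fh:  # hypothetical sample file
            for record in MARCReader(fh):
                if record is None:  # skip records pymarc could not parse
                    continue
                total += 1
                if record.get_fields("650"):  # topical subject headings
                    subject_count += 1
                if record.get_fields("655"):  # genre/form terms
                    genre_count += 1

        print(f"{subject_count}/{total} records have 650 fields; "
              f"{genre_count}/{total} have 655 fields")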