Search (89 results, page 2 of 5)

  • × theme_ss:"Metadaten"
  • × year_i:[2010 TO 2020}
  1. Bogaard, T.; Hollink, L.; Wielemaker, J.; Ossenbruggen, J. van; Hardman, L.: Metadata categorization for identifying search patterns in a digital library (2019) 0.04
    0.040034845 = product of:
      0.08006969 = sum of:
        0.060926907 = weight(_text_:digital in 5281) [ClassicSimilarity], result of:
          0.060926907 = score(doc=5281,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 5281, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5281)
        0.01914278 = weight(_text_:library in 5281) [ClassicSimilarity], result of:
          0.01914278 = score(doc=5281,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 5281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5281)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: For digital libraries, it is useful to understand how users search in a collection. Investigating search patterns can help them to improve the user interface, collection management and search algorithms. However, search patterns may vary widely in different parts of a collection. The purpose of this paper is to demonstrate how to identify these search patterns within a well-curated historical newspaper collection using the existing metadata.
    Design/methodology/approach: The authors analyzed search logs combined with metadata records describing the content of the collection, using this metadata to create subsets in the logs corresponding to different parts of the collection.
    Findings: The study shows that faceted search is more prevalent than non-faceted search in terms of the number of unique queries, time spent, clicks and downloads. Distinct search patterns are observed in different parts of the collection, corresponding to historical periods, geographical regions or subject matter.
    Originality/value: First, this study provides deeper insights into search behavior at a fine granularity in a historical newspaper collection, by the inclusion of the metadata in the analysis. Second, it demonstrates how to use metadata categorization as a way to analyze distinct search patterns in a collection.
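The score breakdowns shown with each hit follow Lucene's ClassicSimilarity (TF-IDF) formula: each matching term contributes queryWeight × fieldWeight, and the per-term scores are summed and scaled by the coordination factor coord(matched/total clauses). A minimal Python sketch reproducing the 0.04 score of the first result from the numbers in its explain tree (values copied from the tree; Lucene computes in 32-bit floats, so the last decimals can differ slightly):

```python
import math

def term_score(freq, idf, field_norm, query_norm):
    """ClassicSimilarity per-term score = queryWeight * fieldWeight."""
    query_weight = idf * query_norm        # queryWeight = idf * queryNorm
    tf = math.sqrt(freq)                   # tf(freq) = sqrt(freq)
    field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.050121464  # queryNorm from the explain tree
FIELD_NORM = 0.0390625    # fieldNorm for doc 5281

digital = term_score(4.0, 3.944552, FIELD_NORM, QUERY_NORM)   # ~0.0609269
library = term_score(2.0, 2.6293786, FIELD_NORM, QUERY_NORM)  # ~0.0191428

# coord(2/4) = 0.5: two of the four query clauses matched this document
total = (digital + library) * (2 / 4)
print(total)  # ~0.040034845, the displayed score
```

The same arithmetic applies to every explain tree on this page; only freq, idf, fieldNorm and the coord fraction change per record.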
  2. Alemu, G.: ¬A theory of metadata enriching and filtering (2016) 0.04
    0.03763327 = product of:
      0.07526654 = sum of:
        0.048741527 = weight(_text_:digital in 5068) [ClassicSimilarity], result of:
          0.048741527 = score(doc=5068,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.2465345 = fieldWeight in 5068, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=5068)
        0.026525015 = weight(_text_:library in 5068) [ClassicSimilarity], result of:
          0.026525015 = score(doc=5068,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20126988 = fieldWeight in 5068, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=5068)
      0.5 = coord(2/4)
    
    Abstract
    This paper presents a new theory of metadata enriching and filtering. The theory emerged from a rigorous grounded theory analysis of 57 in-depth interviews with metadata experts, library and information science researchers, librarians and academic library users (G. Alemu, A Theory of Digital Library Metadata: The Emergence of Enriching and Filtering, University of Portsmouth PhD thesis, Portsmouth, 2014). Partly due to the novelty of Web 2.0 approaches, and mainly due to the absence of foundational theories to underpin socially constructed metadata approaches, this research adopted a social constructivist philosophical stance and a constructivist grounded theory method (K. Charmaz, Constructing Grounded Theory: A Practical Guide through Qualitative Analysis, SAGE Publications, London, 2006). The theory espouses the importance of enriching information objects with descriptions pertaining to their about-ness. Such richness and diversity of descriptions, it is argued, could chiefly be achieved by involving users in the metadata creation process. The theory comprises four overarching metadata principles: enriching, linking, openness and filtering. It proposes a mixed metadata approach in which metadata experts provide the requisite basic descriptive metadata, structure and interoperability (a priori metadata) while users continually enrich it with their own interpretations (post-hoc metadata). Enriched metadata is inter- and cross-linked (the principle of linking), made openly accessible (the principle of openness) and presented according to user needs (the principle of filtering). It is argued that enriched, interlinked and open metadata effectively rises to, and scales with, the challenges presented by growing digital collections and changing user expectations. This metadata approach allows users to proactively engage in co-creating metadata, thus enhancing the findability, discoverability and subsequent usage of information resources. The paper concludes by indicating current challenges and opportunities for implementing the theory of metadata enriching and filtering.
  3. Raja, N.A.: Digitized content and index pages as alternative subject access fields (2012) 0.04
    0.037334766 = product of:
      0.07466953 = sum of:
        0.051698197 = weight(_text_:digital in 870) [ClassicSimilarity], result of:
          0.051698197 = score(doc=870,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.26148933 = fieldWeight in 870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=870)
        0.022971334 = weight(_text_:library in 870) [ClassicSimilarity], result of:
          0.022971334 = score(doc=870,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=870)
      0.5 = coord(2/4)
    
    Abstract
    This article describes a pilot study undertaken to test the benefits of the digitized content and index pages of books, and the content pages of journal issues, in providing subject access to documents in a collection. A partial digitization strategy is used to fossick specific information using the alternative subject access fields in bibliographic records. The pilot study searched for books and journal articles containing information on "Leadership", "Women Entrepreneurs", "Disinvestment" and "Digital Preservation", both through the normal procedure and based on information stored in MARC 21 fields 653, 505 and 520 of the bibliographic records in the University of Mumbai Library. The results are compared to draw conclusions.
  4. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.04
    0.0350769 = product of:
      0.0701538 = sum of:
        0.043081827 = weight(_text_:digital in 4048) [ClassicSimilarity], result of:
          0.043081827 = score(doc=4048,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.21790776 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.027071979 = weight(_text_:library in 4048) [ClassicSimilarity], result of:
          0.027071979 = score(doc=4048,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 4048, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
    Design/methodology/approach: The DSP-PROV model is developed by applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study to apply the DSP-PROV model. Finally, this paper evaluates the proposed model by comparing formal provenance description in DSP-PROV with semi-formal change-log description in English.
    Findings: Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time.
    Research limitations/implications: The DSP-PROV model is applicable for keeping track of the structural changes of a metadata schema over time. Provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered.
    Originality/value: This study proposes a simple model for provenance description of structural features of metadata schemas based on a few standards widely accepted on the Web, and shows the advantage of the proposed model over conventional semi-formal provenance description.
  5. DeZelar-Tiedman, C.: Exploring user-contributed metadata's potential to enhance access to literary works (2011) 0.03
    0.033157483 = product of:
      0.066314965 = sum of:
        0.045942668 = weight(_text_:library in 2595) [ClassicSimilarity], result of:
          0.045942668 = score(doc=2595,freq=8.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.34860963 = fieldWeight in 2595, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=2595)
        0.0203723 = product of:
          0.0407446 = sum of:
            0.0407446 = weight(_text_:22 in 2595) [ClassicSimilarity], result of:
              0.0407446 = score(doc=2595,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23214069 = fieldWeight in 2595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2595)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Academic libraries have moved toward providing social networking features, such as tagging, in their library catalogs. To explore whether user tags can enhance access to individual literary works, the author obtained a sample of individual works of English and American literature from the twentieth and twenty-first centuries from a large academic library catalog and searched them in LibraryThing. The author compared match rates, the availability of subject headings and tags across various literary forms, and the terminology used in tags versus controlled-vocabulary headings on a subset of records. In addition, she evaluated the usefulness of available LibraryThing tags for the library catalog records that lacked subject headings. Options for utilizing the subject terms available in sources outside the local catalog also are discussed.
    Date
    10. 9.2000 17:38:22
    Source
    Library resources and technical services. 55(2011) no.4, S.221-233
  6. Zavalina, O.L.: Complementarity in subject metadata in large-scale digital libraries : a comparative analysis (2014) 0.03
    0.03165855 = product of:
      0.1266342 = sum of:
        0.1266342 = weight(_text_:digital in 1972) [ClassicSimilarity], result of:
          0.1266342 = score(doc=1972,freq=12.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.6405154 = fieldWeight in 1972, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=1972)
      0.25 = coord(1/4)
    
    Abstract
    Provision of high-quality subject metadata is crucial for organizing adequate subject access to rich content aggregated by digital libraries. A number of large-scale digital libraries worldwide are now generating subject metadata to describe not only individual objects but entire digital collections as an integral whole. However, little research to date has been conducted to empirically evaluate the quality of this collection-level subject metadata. The study presented in this article compares free-text and controlled-vocabulary collection-level subject metadata in three large-scale cultural heritage digital libraries in the United States and the European Union. As revealed by this study, the emerging best practices for creating rich collection-level subject metadata includes describing a collection's subject matter with mutually complementary data values in controlled-vocabulary and free-text subject metadata elements. Three kinds of complementarity were observed in this study: one-way complementarity, two-way complementarity, and multiple complementarity.
    Footnote
    Contribution in a special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web" - contains the papers of the IFLA Satellite Post-Conference of the same name, 17-18 August 2012, Tallinn.
  7. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.03
    0.031074919 = product of:
      0.062149838 = sum of:
        0.034243647 = weight(_text_:library in 3965) [ClassicSimilarity], result of:
          0.034243647 = score(doc=3965,freq=10.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.25983828 = fieldWeight in 3965, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=3965)
        0.027906192 = product of:
          0.055812385 = sum of:
            0.055812385 = weight(_text_:project in 3965) [ClassicSimilarity], result of:
              0.055812385 = score(doc=3965,freq=4.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.26381132 = fieldWeight in 3965, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3965)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD) along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entity-relationship model of FRBR in resource description framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organisation System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA - resource description and access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community, and is now seeking extension to the archives community with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between metadata models from the information communities, including publishers. The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.
    The paper discusses the importance of these initiatives in releasing as linked data the very large quantities of rich, professionally-generated metadata stored in formats based on these standards, such as UNIMARC and MARC21, addressing such issues as critical mass for semantic and statistical inferencing, integration with user- and machine-generated metadata, and authenticity, veracity and trust. The paper also discusses related initiatives to release controlled vocabularies, including the Dewey Decimal Classification (DDC), ISBD, Library of Congress Name Authority File (LCNAF), Library of Congress Subject Headings (LCSH), Rameau (French subject headings), Universal Decimal Classification (UDC), and the Virtual International Authority File (VIAF) as linked data. Finally, the paper discusses the potential collective impact of these initiatives on metadata workflows and management systems.
    Content
    Paper presented in Session 93, Cataloguing, at the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
  8. Hodges, D.W.; Schlottmann, K.: Better archival migration outcomes with Python and the Google Sheets API : Reporting from the archives (2019) 0.03
    0.03093262 = product of:
      0.06186524 = sum of:
        0.01914278 = weight(_text_:library in 5444) [ClassicSimilarity], result of:
          0.01914278 = score(doc=5444,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 5444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5444)
        0.04272246 = product of:
          0.08544492 = sum of:
            0.08544492 = weight(_text_:project in 5444) [ClassicSimilarity], result of:
              0.08544492 = score(doc=5444,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.40387696 = fieldWeight in 5444, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5444)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Columbia University Libraries recently embarked on a multi-phase project to migrate nearly 4,000 records describing over 70,000 linear feet of archival material from disparate sources and formats into ArchivesSpace. This paper discusses tools and methods brought to bear in Phase 2 of this project, which required us to look closely at how to integrate a large number of legacy finding aids into the new system and merge descriptive data that had diverged in myriad ways. Using Python, XSLT, and a widely available if underappreciated resource, the Google Sheets API, archival and technical library staff devised ways to efficiently report data from different sources and present it in an accessible, user-friendly way. Responses were then fed back into automated data remediation processes to keep the migration project on track and minimize manual intervention. The scripts and processes developed proved very effective and, moreover, show promise well beyond the ArchivesSpace migration. This paper describes the Python/XSLT/Sheets API processes developed and how they opened a path to move beyond CSV-based reporting with flexible, ad hoc data interfaces easily adaptable to meet a variety of purposes.
  9. Wartburg, K. von; Sibille, C.; Aliverti, C.: Metadata collaboration between the Swiss National Library and research institutions in the field of Swiss historiography (2019) 0.03
    0.026429337 = product of:
      0.052858673 = sum of:
        0.032486375 = weight(_text_:library in 5272) [ClassicSimilarity], result of:
          0.032486375 = score(doc=5272,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.24650425 = fieldWeight in 5272, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=5272)
        0.0203723 = product of:
          0.0407446 = sum of:
            0.0407446 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
              0.0407446 = score(doc=5272,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23214069 = fieldWeight in 5272, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5272)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article presents examples of metadata collaborations between the Swiss National Library (NL) and research institutions in the field of Swiss historiography. The NL publishes the Bibliography on Swiss History (BSH). In order to meet the demands of its research community, the NL has improved the accessibility and interoperability of the BSH database. Moreover, the BSH takes part in metadata projects such as Metagrid, a web service linking different historical databases. Other metadata collaborations with partners in the historical field such as the Law Sources Foundation (LSF) will position the BSH as an indispensable literature hub for publications on Swiss history.
    Date
    30. 5.2019 19:22:49
  10. Tallerås, K.; Massey, D.; Husevåg, A.-S.R.; Preminger, M.; Pharo, N.: Evaluating (linked) metadata transformations across cultural heritage domains (2014) 0.03
    0.02628516 = product of:
      0.05257032 = sum of:
        0.022971334 = weight(_text_:library in 1588) [ClassicSimilarity], result of:
          0.022971334 = score(doc=1588,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 1588, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=1588)
        0.029598987 = product of:
          0.059197973 = sum of:
            0.059197973 = weight(_text_:project in 1588) [ClassicSimilarity], result of:
              0.059197973 = score(doc=1588,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.27981415 = fieldWeight in 1588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1588)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper describes an approach to the evaluation of different aspects in the transformation of existing metadata into Linked Data-compliant knowledge bases. At Oslo and Akershus University College of Applied Sciences, in the TORCH project, we are working on three different experimental case studies on the extraction and mapping of broadcasting data and the interlinking of these with transformed library data. The case studies investigate problems of heterogeneity and ambiguity in and between the domains, as well as problems arising in the interlinking process. The proposed approach makes it possible to collaborate on evaluation across different experiments, and to rationalize and streamline the process.
  11. Ruhl, M.: Do we need metadata? : an on-line survey in German archives (2012) 0.03
    0.025849098 = product of:
      0.10339639 = sum of:
        0.10339639 = weight(_text_:digital in 471) [ClassicSimilarity], result of:
          0.10339639 = score(doc=471,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.52297866 = fieldWeight in 471, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=471)
      0.25 = coord(1/4)
    
    Abstract
    The paper summarizes the results of an online survey conducted in 2010 across German archives of all branches. The survey focused on metadata and the metadata standards used for the annotation of audiovisual media such as pictures, audio and video files (analog and digital). The findings raise the question of whether archives can collaborate in projects such as Europeana if they do not orient themselves to accepted standards. Archives need more resources, and archival staff need more training, to carry out more complex tasks in a digital and semantic environment.
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschick et al.
  12. Mayernik, M.S.; Acker, A.: Tracing the traces : the critical role of metadata within networked communications (2018) 0.03
    0.025849098 = product of:
      0.10339639 = sum of:
        0.10339639 = weight(_text_:digital in 4013) [ClassicSimilarity], result of:
          0.10339639 = score(doc=4013,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.52297866 = fieldWeight in 4013, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=4013)
      0.25 = coord(1/4)
    
    Abstract
    The information sciences have traditionally been at the center of metadata-focused research. The US National Security Agency (NSA) intelligence documents revealed by Edward Snowden in June of 2013 brought the term "metadata" into the public consciousness. Surprisingly little discussion in the information sciences has since occurred on the nature and importance of metadata within networked communication systems. The collection of digital metadata impacts the ways that people experience social and technical communication. Without such metadata, networked communication cannot exist. The NSA leaks, and numerous recent hacks of corporate and government communications, point to metadata as objects of new scholarly inquiry. If we are to engage in meaningful discussions about our digital traces, or make informed decisions about new policies and technologies, it is essential to develop theoretical and empirical frameworks that account for digital metadata. This opinion paper presents 5 key sociotechnical characteristics of metadata within digital networks that would benefit from stronger engagement by the information sciences.
  13. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.03
    0.025283787 = product of:
      0.050567575 = sum of:
        0.026799891 = weight(_text_:library in 2606) [ClassicSimilarity], result of:
          0.026799891 = score(doc=2606,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 2606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2606)
        0.023767682 = product of:
          0.047535364 = sum of:
            0.047535364 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
              0.047535364 = score(doc=2606,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.2708308 = fieldWeight in 2606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    10. 9.2000 17:38:22
    Source
    Library resources and technical services. 58(2014) no.3, S.187-208
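The indented score breakdowns throughout this listing are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) ranking. As a minimal sketch (the function names are my own; the formula is Lucene's documented ClassicSimilarity), entry 13's score can be reproduced from the factors printed above:

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def clause_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    # weight = queryWeight * fieldWeight, where
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    i = idf(doc_freq, max_docs)
    return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

# Factors as printed in entry 13's explain tree
w_library = clause_weight(freq=2.0, doc_freq=8668, max_docs=44218,
                          query_norm=0.050121464, field_norm=0.0546875)
w_22 = clause_weight(freq=2.0, doc_freq=3622, max_docs=44218,
                     query_norm=0.050121464, field_norm=0.0546875)

# Document score: coord(2/4) * (w_library + coord(1/2) * w_22)
score = 0.5 * (w_library + 0.5 * w_22)   # ~0.0252838, as shown for entry 13
```

The outer coord(2/4) reflects that two of four query clauses matched this document; the inner coord(1/2) applies to the nested "22" clause.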
  14. Maron, D.; Feinberg, M.: What does it mean to adopt a metadata standard? : a case study of Omeka and the Dublin Core (2018) 0.02
    0.024889842 = product of:
      0.049779683 = sum of:
        0.034465462 = weight(_text_:digital in 4248) [ClassicSimilarity], result of:
          0.034465462 = score(doc=4248,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.17432621 = fieldWeight in 4248, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=4248)
        0.015314223 = weight(_text_:library in 4248) [ClassicSimilarity], result of:
          0.015314223 = score(doc=4248,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.11620321 = fieldWeight in 4248, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=4248)
      0.5 = coord(2/4)
    
    Abstract
    Purpose The purpose of this paper is to employ a case study of the Omeka content management system to demonstrate how the adoption and implementation of a metadata standard (in this case, Dublin Core) can result in contrasting rhetorical arguments regarding metadata utility, quality, and reliability. In the Omeka example, the authors illustrate a conceptual disconnect in how two metadata stakeholders - standards creators and standards users - operationalize metadata quality. For standards creators such as the Dublin Core community, metadata quality involves implementing a standard properly, according to established usage principles; in contrast, for standards users like Omeka, metadata quality involves mere adoption of the standard, with little consideration of proper usage and accompanying principles. Design/methodology/approach The paper uses an approach based on rhetorical criticism. The paper aims to establish whether Omeka's given ends (the position that Omeka claims to take regarding Dublin Core) align with Omeka's guiding ends (Omeka's actual argument regarding Dublin Core). To make this assessment, the paper examines both textual evidence (what Omeka says) and material-discursive evidence (what Omeka does). Findings The evidence shows that, while Omeka appears to argue that adopting the Dublin Core is an integral part of Omeka's mission, the platform's lack of support for Dublin Core implementation makes an opposing argument. Ultimately, Omeka argues that the appearance of adopting a standard is more important than its careful implementation. Originality/value This study contributes to our understanding of how metadata standards are understood and used in practice. The misalignment between Omeka's position and the goals of the Dublin Core community suggests that Omeka, and some portion of its users, do not value metadata interoperability and aggregation in the same way that the Dublin Core community does. 
This indicates that, although certain values regarding standards adoption may be pervasive in the metadata community, these values are not equally shared amongst all stakeholders in a digital library ecosystem. The way that standards creators (Dublin Core) understand what it means to "adopt a standard" is different from the way that standards users (Omeka) understand what it means to "adopt a standard."
  15. Handbook of metadata, semantics and ontologies (2014) 0.02
    0.024889842 = product of:
      0.049779683 = sum of:
        0.034465462 = weight(_text_:digital in 5134) [ClassicSimilarity], result of:
          0.034465462 = score(doc=5134,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.17432621 = fieldWeight in 5134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
        0.015314223 = weight(_text_:library in 5134) [ClassicSimilarity], result of:
          0.015314223 = score(doc=5134,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.11620321 = fieldWeight in 5134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
      0.5 = coord(2/4)
    
    Abstract
    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and their interrelations, and large databases with biological information use complex and detailed metadata schemas for more precise and informed search strategies. There is a wide diversity in the languages and idioms used for providing meta-descriptions, from simple structured text in metadata schemas to formal annotations using ontologies, and the technologies for storing, sharing and exploiting meta-descriptions are also diverse and evolve rapidly. In addition, there is a proliferation of schemas and standards related to metadata, resulting in a complex and moving technological landscape - hence, the need for specialized knowledge and skills in this area. The Handbook of Metadata, Semantics and Ontologies is intended as an authoritative reference for students, practitioners and researchers, serving as a roadmap for the variety of metadata schemas and ontologies available in a number of key domain areas, including culture, biology, education, healthcare, engineering and library science.
    Signature
    Digital
  16. Hooland, S. van; Verborgh, R.: Linked data for libraries, archives and museums : how to clean, link, and publish your metadata (2014) 0.02
    0.024889842 = product of:
      0.049779683 = sum of:
        0.034465462 = weight(_text_:digital in 5153) [ClassicSimilarity], result of:
          0.034465462 = score(doc=5153,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.17432621 = fieldWeight in 5153, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=5153)
        0.015314223 = weight(_text_:library in 5153) [ClassicSimilarity], result of:
          0.015314223 = score(doc=5153,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.11620321 = fieldWeight in 5153, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=5153)
      0.5 = coord(2/4)
    
    Abstract
    This highly practical handbook teaches you how to unlock the value of your existing metadata through cleaning, reconciliation, enrichment and linking, and how to streamline the process of new metadata creation. Libraries, archives and museums are facing up to the challenge of providing access to fast-growing collections whilst managing cuts to budgets. Key to this is the creation, linking and publishing of good quality metadata as Linked Data that will allow their collections to be discovered, accessed and disseminated in a sustainable manner. Metadata experts Seth van Hooland and Ruben Verborgh introduce the key concepts of metadata standards and Linked Data and how they can be practically applied to existing metadata, giving readers the tools and understanding to achieve maximum results with limited resources. Readers will learn how to critically assess and use (semi-)automated methods of managing metadata through hands-on exercises within the book and on the accompanying website. Each chapter is built around a case study from institutions around the world, demonstrating how freely available tools are being successfully used in different metadata contexts. This handbook delivers the necessary conceptual and practical understanding to empower practitioners to make the right decisions when making their organisations' resources accessible on the Web. Key topics include: the value of metadata; metadata creation - architecture, data models and standards; metadata cleaning; metadata reconciliation; metadata enrichment through Linked Data and named-entity recognition; importing and exporting metadata; ensuring a sustainable publishing model. 
This will be an invaluable guide for metadata practitioners and researchers within all cultural heritage contexts, from library cataloguers and archivists to museum curatorial staff. It will also be of interest to students and academics within information science and digital humanities fields. IT managers with responsibility for information systems, as well as strategy heads and budget holders, at cultural heritage organisations, will find this a valuable decision-making aid.
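The cleaning and reconciliation steps the abstract lists can be illustrated with a toy sketch (my own illustration, not code from the book): variant spellings of a metadata label are normalized and grouped under a single cleaned form.

```python
import re

def clean_label(raw):
    """Trim, collapse internal whitespace, and lower-case a metadata label."""
    return re.sub(r"\s+", " ", raw.strip()).lower()

def reconcile(labels):
    """Group variant spellings of a label under its cleaned form (naive exact match)."""
    groups = {}
    for raw in labels:
        groups.setdefault(clean_label(raw), []).append(raw)
    return groups

variants = ["Libraries ", "libraries", "  LIBRARIES"]
groups = reconcile(variants)   # all three variants share the key "libraries"
```

Real reconciliation, as the book describes, goes further by matching cleaned labels against external authorities (e.g. Linked Data vocabularies) rather than only against each other.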
  17. Salaba, A.; Tennis, J.T.: Solid foundations and some secondary assumptions in the design of bibliographic metadata : toward a typology of complementary uses of metadata (2018) 0.02
    0.024370763 = product of:
      0.097483054 = sum of:
        0.097483054 = weight(_text_:digital in 4779) [ClassicSimilarity], result of:
          0.097483054 = score(doc=4779,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.493069 = fieldWeight in 4779, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0625 = fieldNorm(doc=4779)
      0.25 = coord(1/4)
    
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro u. M.E. Cerveira
  18. Simionato, A.C.; Arakaki, F.A.; Costa Santos, P.L.V.A. da: Integrating libraries, archives, museums and art galleries with Linked Data : initiatives study (2018) 0.02
    0.024370763 = product of:
      0.097483054 = sum of:
        0.097483054 = weight(_text_:digital in 4807) [ClassicSimilarity], result of:
          0.097483054 = score(doc=4807,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.493069 = fieldWeight in 4807, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0625 = fieldNorm(doc=4807)
      0.25 = coord(1/4)
    
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro u. M.E. Cerveira
  19. Pattuelli, M.C.: From uniform identifiers to graphs, from individuals to communities : what we talk about when we talk about linked person data (2018) 0.02
    0.024370763 = product of:
      0.097483054 = sum of:
        0.097483054 = weight(_text_:digital in 4816) [ClassicSimilarity], result of:
          0.097483054 = score(doc=4816,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.493069 = fieldWeight in 4816, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0625 = fieldNorm(doc=4816)
      0.25 = coord(1/4)
    
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro u. M.E. Cerveira
  20. Adland, M.K.; Lykke, M.: Tags on healthcare information websites (2018) 0.02
    0.024370763 = product of:
      0.097483054 = sum of:
        0.097483054 = weight(_text_:digital in 4823) [ClassicSimilarity], result of:
          0.097483054 = score(doc=4823,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.493069 = fieldWeight in 4823, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0625 = fieldNorm(doc=4823)
      0.25 = coord(1/4)
    
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro u. M.E. Cerveira
