Search (50 results, page 1 of 3)

  • × language_ss:"e"
  • × theme_ss:"Metadaten"
  • × year_i:[2010 TO 2020}
  1. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.09
    0.08661706 = product of:
      0.20788094 = sum of:
        0.029650755 = weight(_text_:web in 4048) [ClassicSimilarity], result of:
          0.029650755 = score(doc=4048,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 4048, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.08084996 = weight(_text_:log in 4048) [ClassicSimilarity], result of:
          0.08084996 = score(doc=4048,freq=2.0), product of:
            0.22837062 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.035634913 = queryNorm
            0.3540296 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.029083263 = weight(_text_:world in 4048) [ClassicSimilarity], result of:
          0.029083263 = score(doc=4048,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.21233483 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.038646206 = weight(_text_:wide in 4048) [ClassicSimilarity], result of:
          0.038646206 = score(doc=4048,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.24476713 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.029650755 = weight(_text_:web in 4048) [ClassicSimilarity], result of:
          0.029650755 = score(doc=4048,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 4048, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
      0.41666666 = coord(5/12)
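The indented breakdown above is Lucene's "explain" output for its classic tf-idf similarity. As a minimal sketch (constants copied from the explanation; tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)) are Lucene's ClassicSimilarity definitions), the final score of result 1 can be reproduced like this:

```python
import math

QUERY_NORM = 0.035634913   # queryNorm from the explanation
FIELD_NORM = 0.0390625     # fieldNorm for doc 4048
MAX_DOCS = 44218

def idf(doc_freq):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq, doc_freq):
    # score = queryWeight * fieldWeight, where
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    i = idf(doc_freq)
    return (i * QUERY_NORM) * (math.sqrt(freq) * i * FIELD_NORM)

# (term, freq in doc, docFreq in index) for the five matching clauses;
# "web" is listed twice in the explanation, so it is kept twice here
clauses = [("web", 4, 4597), ("log", 2, 197), ("world", 2, 2573),
           ("wide", 2, 1430), ("web", 4, 4597)]

total = sum(term_score(f, df) for _, f, df in clauses)
score = total * 5 / 12   # coord(5/12): 5 of 12 query clauses matched
# score comes out ≈ 0.08661706, matching the explanation up to float rounding
```

The per-clause products fall out of term_score the same way, e.g. term_score(2, 197) reproduces the 0.08084996 weight reported for "log".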
    
    Abstract
    Purpose: Provenance information is crucial for the consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
    Design/methodology/approach: The DSP-PROV model is developed by applying the World Wide Web Consortium's general provenance description standard PROV to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study for applying the DSP-PROV model. Finally, this paper evaluates the proposed model by comparing formal provenance description in DSP-PROV with semi-formal change-log description in English.
    Findings: Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time.
    Research limitations/implications: The DSP-PROV model is applicable for keeping track of the structural changes of a metadata schema over time. Provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered.
    Originality/value: This study proposes a simple model for provenance description of the structural features of metadata schemas based on a few standards widely accepted on the Web, and shows the advantage of the proposed model over conventional semi-formal provenance description.
  2. Belém, F.M.; Almeida, J.M.; Gonçalves, M.A.: ¬A survey on tag recommendation methods : a review (2017) 0.07
    0.07275806 = product of:
      0.21827418 = sum of:
        0.068615906 = weight(_text_:tagging in 3524) [ClassicSimilarity], result of:
          0.068615906 = score(doc=3524,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.326146 = fieldWeight in 3524, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.029650755 = weight(_text_:web in 3524) [ClassicSimilarity], result of:
          0.029650755 = score(doc=3524,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 3524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.029650755 = weight(_text_:web in 3524) [ClassicSimilarity], result of:
          0.029650755 = score(doc=3524,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 3524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.09035677 = sum of:
          0.06621657 = weight(_text_:2.0 in 3524) [ClassicSimilarity], result of:
            0.06621657 = score(doc=3524,freq=2.0), product of:
              0.20667298 = queryWeight, product of:
                5.799733 = idf(docFreq=363, maxDocs=44218)
                0.035634913 = queryNorm
              0.320393 = fieldWeight in 3524, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.799733 = idf(docFreq=363, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3524)
          0.024140194 = weight(_text_:22 in 3524) [ClassicSimilarity], result of:
            0.024140194 = score(doc=3524,freq=2.0), product of:
              0.12478739 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.035634913 = queryNorm
              0.19345059 = fieldWeight in 3524, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3524)
      0.33333334 = coord(4/12)
    
    Abstract
    Tags (keywords freely assigned by users to describe web content) have become highly popular in Web 2.0 applications because of the strong incentives and the ease with which users can create and describe their own content. This increase in tag popularity has led to a vast literature on tag recommendation methods. These methods aim to assist users in the tagging process, possibly increasing the quality of the generated tags and, consequently, improving the quality of the information retrieval (IR) services that rely on tags as data sources. Despite the numerous and diverse previous studies on tag recommendation, to our knowledge no previous work has summarized and organized them into a single survey article. In this article, we propose a taxonomy for tag recommendation methods, classifying them according to the target of the recommendations, their objectives, the data sources they exploit, and their underlying techniques. Moreover, we provide a critical overview of these methods, pointing out their advantages and disadvantages. Finally, we describe the main open challenges in the field, such as tag ambiguity, cold start, and evaluation issues.
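None of the surveyed methods is specified in the abstract; as a toy illustration of the general task only (everything below is an assumed co-occurrence baseline, not a method from the survey), a recommender can rank candidate tags by how often they co-occur with the tags a user has already assigned:

```python
from collections import Counter, defaultdict

def build_cooccurrence(tagged_docs):
    # Count, over a corpus of tag sets, how often each pair of distinct
    # tags appears on the same document.
    co = defaultdict(Counter)
    for tags in tagged_docs:
        for t in tags:
            for u in tags:
                if u != t:
                    co[t][u] += 1
    return co

def recommend(co, given_tags, k=3):
    # Assumed baseline: score each candidate by its total co-occurrence
    # with the already-assigned tags, then return the top k.
    scores = Counter()
    for t in given_tags:
        scores.update(co[t])
    for t in given_tags:       # never re-suggest an assigned tag
        scores.pop(t, None)
    return [t for t, _ in scores.most_common(k)]
```

For instance, on a corpus where "python" co-occurs twice with "programming" and once with "code", recommend(co, {"python"}, 2) returns ["programming", "code"].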
    Date
    16.11.2017 13:30:22
  3. Social tagging in a linked data environment. Edited by Diane Rasmussen Pennington and Louise F. Spiteri. London, UK: Facet Publishing, 2018. 240 pp. £74.95 (paperback). (ISBN 9781783303380) (2019) 0.05
    0.05250161 = product of:
      0.21000645 = sum of:
        0.16807395 = weight(_text_:tagging in 101) [ClassicSimilarity], result of:
          0.16807395 = score(doc=101,freq=12.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.79889125 = fieldWeight in 101, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.0390625 = fieldNorm(doc=101)
        0.02096625 = weight(_text_:web in 101) [ClassicSimilarity], result of:
          0.02096625 = score(doc=101,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 101, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=101)
        0.02096625 = weight(_text_:web in 101) [ClassicSimilarity], result of:
          0.02096625 = score(doc=101,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 101, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=101)
      0.25 = coord(3/12)
    
    Abstract
    Social tagging, hashtags, and geotags are used across a variety of platforms (Twitter, Facebook, Tumblr, WordPress, Instagram) in different countries and cultures. This book, representing researchers and practitioners across different information professions, explores how social tags can link content across a variety of environments. Most studies of social tagging have tended to focus on applications like library catalogs, blogs, and social bookmarking sites. This book, by setting out a theoretical background and a series of case studies, explores the role of hashtags as a form of linked data, without the complex implementation of RDF and other Semantic Web technologies.
    RSWK
    Linked Data / Social Tagging
    Subject
    Linked Data / Social Tagging
    Theme
    Social tagging
  4. Syn, S.Y.; Spring, M.B.: Finding subject terms for classificatory metadata from user-generated social tags (2013) 0.04
    0.044791076 = product of:
      0.1791643 = sum of:
        0.13723181 = weight(_text_:tagging in 745) [ClassicSimilarity], result of:
          0.13723181 = score(doc=745,freq=8.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.652292 = fieldWeight in 745, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.0390625 = fieldNorm(doc=745)
        0.02096625 = weight(_text_:web in 745) [ClassicSimilarity], result of:
          0.02096625 = score(doc=745,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 745, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=745)
        0.02096625 = weight(_text_:web in 745) [ClassicSimilarity], result of:
          0.02096625 = score(doc=745,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 745, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=745)
      0.25 = coord(3/12)
    
    Abstract
    With the increasing popularity of social tagging systems, the potential for using social tags as a source of metadata is being explored. Social tagging systems can simplify the involvement of a large number of users and improve the metadata-generation process. Current research is exploring social tagging systems as a mechanism to allow nonprofessional catalogers to participate in metadata generation. Because social tags are not from controlled vocabularies, there are issues that have to be addressed in finding quality terms to represent the content of a resource. This research explores ways to obtain a set of tags representing the resource from the tags provided by users. Two metrics are introduced. Annotation Dominance (AD) is a measure of the extent to which a tag term is agreed to by users. Cross Resources Annotation Discrimination (CRAD) is a measure of a tag's potential to classify a collection. It is designed to remove tags that are used too broadly or narrowly. Using the proposed measurements, the research selects important tags (meta-terms) and removes meaningless ones (tag noise) from the tags provided by users. To evaluate the proposed approach to find classificatory metadata candidates, we rely on expert users' relevance judgments comparing suggested tag terms and expert metadata terms. The results suggest that processing of user tags using the two measurements successfully identifies the terms that represent the topic categories of web resource content. The suggested tag terms can be further examined in various usages as semantic metadata for the resources.
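The two metrics are only characterized informally above. As a purely hypothetical sketch of the idea (the formulas, function names and thresholds below are illustrative assumptions, not the paper's actual definitions of AD and CRAD): agreement rises when many annotators choose the same tag for a resource, and a broad/narrow filter in the spirit of CRAD drops tags attached to too many or too few resources in the collection.

```python
from collections import Counter

def annotation_dominance(tag, resource_annotations):
    # Illustrative assumption: the share of a resource's annotations
    # that use this tag - high when many users agree on the tag.
    counts = Counter(resource_annotations)
    return counts[tag] / len(resource_annotations)

def discriminates(tag, collection, lo=0.05, hi=0.5):
    # Illustrative CRAD-style filter: keep a tag only if it occurs on a
    # middle band of resources - neither too broad (nearly everywhere)
    # nor too narrow (nearly nowhere). lo and hi are made-up thresholds.
    frac = sum(1 for tags in collection if tag in tags) / len(collection)
    return lo <= frac <= hi
```

Under these assumptions, a tag assigned by 8 of 10 annotators of a resource has dominance 0.8, and a tag attached to every resource in the collection fails the discrimination filter.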
    Theme
    Social tagging
  5. Bundza, M.: ¬The choice is yours! : researchers assign subject metadata to their own materials in institutional repositories (2014) 0.04
    0.03869194 = product of:
      0.15476777 = sum of:
        0.096062265 = weight(_text_:tagging in 1968) [ClassicSimilarity], result of:
          0.096062265 = score(doc=1968,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.4566044 = fieldWeight in 1968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1968)
        0.02935275 = weight(_text_:web in 1968) [ClassicSimilarity], result of:
          0.02935275 = score(doc=1968,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 1968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1968)
        0.02935275 = weight(_text_:web in 1968) [ClassicSimilarity], result of:
          0.02935275 = score(doc=1968,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 1968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1968)
      0.25 = coord(3/12)
    
    Footnote
    Contribution in a special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web" - Contains the contributions of the IFLA Satellite Post-Conference of the same name, 17-18 August 2012, Tallinn.
    Theme
    Social tagging
  6. Handbook of metadata, semantics and ontologies (2014) 0.04
    0.037428986 = product of:
      0.112286955 = sum of:
        0.02905169 = weight(_text_:web in 5134) [ClassicSimilarity], result of:
          0.02905169 = score(doc=5134,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24981049 = fieldWeight in 5134, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
        0.02326661 = weight(_text_:world in 5134) [ClassicSimilarity], result of:
          0.02326661 = score(doc=5134,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.16986786 = fieldWeight in 5134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
        0.030916965 = weight(_text_:wide in 5134) [ClassicSimilarity], result of:
          0.030916965 = score(doc=5134,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.1958137 = fieldWeight in 5134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
        0.02905169 = weight(_text_:web in 5134) [ClassicSimilarity], result of:
          0.02905169 = score(doc=5134,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24981049 = fieldWeight in 5134, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
      0.33333334 = coord(4/12)
    
    Abstract
    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and their interrelations, and large databases with biological information use complex and detailed metadata schemas for more precise and informed search strategies. There is a wide diversity in the languages and idioms used for providing meta-descriptions, from simple structured text in metadata schemas to formal annotations using ontologies, and the technologies for storing, sharing and exploiting meta-descriptions are also diverse and evolve rapidly. In addition, there is a proliferation of schemas and standards related to metadata, resulting in a complex and moving technological landscape - hence, the need for specialized knowledge and skills in this area. The Handbook of Metadata, Semantics and Ontologies is intended as an authoritative reference for students, practitioners and researchers, serving as a roadmap for the variety of metadata schemas and ontologies available in a number of key domain areas, including culture, biology, education, healthcare, engineering and library science.
    Imprint
    Singapore : World Scientific
  7. Gartner, R.: Metadata : shaping knowledge from antiquity to the semantic web (2016) 0.03
    0.031248735 = product of:
      0.12499494 = sum of:
        0.0419325 = weight(_text_:web in 731) [ClassicSimilarity], result of:
          0.0419325 = score(doc=731,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.36057037 = fieldWeight in 731, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
        0.041129943 = weight(_text_:world in 731) [ClassicSimilarity], result of:
          0.041129943 = score(doc=731,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.30028677 = fieldWeight in 731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
        0.0419325 = weight(_text_:web in 731) [ClassicSimilarity], result of:
          0.0419325 = score(doc=731,freq=8.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.36057037 = fieldWeight in 731, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
      0.25 = coord(3/12)
    
    Abstract
    This book offers a comprehensive guide to the world of metadata, from its origins in the ancient cities of the Middle East to the Semantic Web of today. The author takes us on a journey through the centuries-old history of metadata up to the modern world of crowdsourcing and Google, showing how metadata works and what it is made of. He explores how it has been used ideologically and how it can never be objective, and argues that it is central to human cultures and the way they develop. Metadata: Shaping Knowledge from Antiquity to the Semantic Web is for all readers with an interest in how we humans organize our knowledge and why this is important. It is suitable for those new to the subject as well as those who know its basics. It also makes an excellent introduction for students of information science and librarianship.
    Theme
    Semantic Web
  8. Managing metadata in web-scale discovery systems (2016) 0.03
    0.028271887 = product of:
      0.11308755 = sum of:
        0.04108529 = weight(_text_:web in 3336) [ClassicSimilarity], result of:
          0.04108529 = score(doc=3336,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35328537 = fieldWeight in 3336, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
        0.030916965 = weight(_text_:wide in 3336) [ClassicSimilarity], result of:
          0.030916965 = score(doc=3336,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.1958137 = fieldWeight in 3336, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
        0.04108529 = weight(_text_:web in 3336) [ClassicSimilarity], result of:
          0.04108529 = score(doc=3336,freq=12.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35328537 = fieldWeight in 3336, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
      0.25 = coord(3/12)
    
    Abstract
    This book shows you how to harness the power of linked data and web-scale discovery systems to manage and link widely varied content across your library collection. Libraries are increasingly using web-scale discovery systems to help clients find a wide assortment of library materials, including books, journal articles, special collections, archival collections, videos, music and open access collections. Depending on the library material catalogued, the discovery system might need to negotiate different metadata standards, such as AACR, RDA, RAD, FOAF, VRA Core, METS, MODS, RDF and more. In Managing Metadata in Web-Scale Discovery Systems, editor Louise Spiteri and a range of international experts show you how to: * maximize the effectiveness of web-scale discovery systems * provide a smooth and seamless discovery experience to your users * help users conduct searches that yield relevant results * manage the sheer volume of items to which you can provide access, so your users can actually find what they need * maintain shared records that reflect the needs, languages, and identities of culturally and ethnically varied communities * manage metadata both within, across, and outside, library discovery tools by converting your library metadata to linked open data that all systems can access * manage user generated metadata from external services such as Goodreads and LibraryThing * mine user generated metadata to better serve your users in areas such as collection development or readers' advisory. The book will be essential reading for cataloguers, technical services and systems librarians and library and information science students studying modules on metadata, cataloguing, systems design, data management, and digital libraries. The book will also be of interest to those managing metadata in archives, museums and other cultural heritage institutions.
    Content
    1. Introduction: the landscape of web-scale discovery - Louise Spiteri 2. Sharing metadata across discovery systems - Marshall Breeding, Angela Kroeger and Heather Moulaison Sandy 3. Managing linked open data across discovery systems - Ali Shiri and Danoosh Davoodi 4. Redefining library resources in discovery systems - Christine DeZelar-Tiedman 5. Managing volume in discovery systems - Aaron Tay 6. Managing outsourced metadata in discovery systems - Laurel Tarulli 7. Managing user-generated metadata in discovery systems - Louise Spiteri
  9. Dunsire, G.; Willer, M.: Initiatives to make standard library metadata models and structures available to the Semantic Web (2010) 0.02
    0.024569437 = product of:
      0.09827775 = sum of:
        0.03750557 = weight(_text_:web in 3965) [ClassicSimilarity], result of:
          0.03750557 = score(doc=3965,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.32250395 = fieldWeight in 3965, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3965)
        0.02326661 = weight(_text_:world in 3965) [ClassicSimilarity], result of:
          0.02326661 = score(doc=3965,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.16986786 = fieldWeight in 3965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=3965)
        0.03750557 = weight(_text_:web in 3965) [ClassicSimilarity], result of:
          0.03750557 = score(doc=3965,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.32250395 = fieldWeight in 3965, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3965)
      0.25 = coord(3/12)
    
    Abstract
    This paper describes recent initiatives to make standard library metadata models and structures available to the Semantic Web, including IFLA standards such as Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD), and International Standard Bibliographic Description (ISBD) along with the infrastructure that supports them. The FRBR Review Group is currently developing representations of FRAD and the entity-relationship model of FRBR in resource description framework (RDF) applications, using a combination of RDF, RDF Schema (RDFS), Simple Knowledge Organisation System (SKOS) and Web Ontology Language (OWL), cross-relating both models where appropriate. The ISBD/XML Task Group is investigating the representation of ISBD in RDF. The IFLA Namespaces project is developing an administrative and technical infrastructure to support such initiatives and encourage uptake of standards by other agencies. The paper describes similar initiatives with related external standards such as RDA - resource description and access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model (CRM). The DCMI RDA Task Group is working with the Joint Steering Committee for RDA to develop Semantic Web representations of RDA structural elements, which are aligned with FRBR and FRAD, and controlled metadata content vocabularies. REICAT is also based on FRBR, and an object-oriented version of FRBR has been integrated with CRM, which itself has an RDF representation. CRM was initially based on the metadata needs of the museum community, and is now seeking extension to the archives community with the eventual aim of developing a model common to the main cultural information domains of archives, libraries and museums. The Vocabulary Mapping Framework (VMF) project has developed a Semantic Web tool to automatically generate mappings between metadata models from the information communities, including publishers. The tool is based on several standards, including CRM, FRAD, FRBR, MARC21 and RDA.
    Content
    Lecture given in Session 93, Cataloguing, of the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
  10. Peters, I.; Stock, W.G.: Power tags in information retrieval (2010) 0.02
    0.022188673 = product of:
      0.08875469 = sum of:
        0.02096625 = weight(_text_:web in 865) [ClassicSimilarity], result of:
          0.02096625 = score(doc=865,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=865)
        0.02096625 = weight(_text_:web in 865) [ClassicSimilarity], result of:
          0.02096625 = score(doc=865,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.18028519 = fieldWeight in 865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=865)
        0.04682219 = product of:
          0.09364438 = sum of:
            0.09364438 = weight(_text_:2.0 in 865) [ClassicSimilarity], result of:
              0.09364438 = score(doc=865,freq=4.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.45310414 = fieldWeight in 865, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=865)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Purpose - Many Web 2.0 services (including Library 2.0 catalogs) make use of folksonomies. The purpose of this paper is to cut off all tags in the long tail of a document-specific tag distribution. The remaining tags at the beginning of a tag distribution are considered power tags and form a new, additional search option in information retrieval systems. Design/methodology/approach - In a theoretical approach the paper discusses document-specific tag distributions (power law and inverse-logistic shape), the development of such distributions (Yule-Simon process and shuffling theory) and introduces search tags (besides the well-known index tags) as a possibility for generating tag distributions. Findings - Search tags are compatible with broad and narrow folksonomies and with all knowledge organization systems (e.g. classification systems and thesauri), while index tags are only applicable in broad folksonomies. Based on these findings, the paper presents a sketch of an algorithm for mining and processing power tags in information retrieval systems. Research limitations/implications - This conceptual approach is in need of empirical evaluation in a concrete retrieval system. Practical implications - Power tags are a new search option for retrieval systems to limit the amount of hits. Originality/value - The paper introduces power tags as a means for enhancing the precision of search results in information retrieval systems that apply folksonomies, e.g. catalogs in Library 2.0 environments.
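    The long-tail cut-off described in this abstract can be illustrated with a minimal Python sketch. The fixed frequency ratio used here is an assumption for illustration only; Peters and Stock derive the cut-off point from the shape of the distribution itself (power-law vs. inverse-logistic), not from a constant.

```python
from collections import Counter

def power_tags(tag_counts, threshold=0.5):
    """Return the head of a document-specific tag distribution.

    tag_counts: mapping of tag -> frequency for one document.
    threshold: hypothetical cut-off -- keep tags whose frequency is at
    least `threshold` times that of the most frequent tag. The paper's
    algorithm instead infers the cut-off from the distribution's shape;
    this fixed ratio is a stand-in.
    """
    if not tag_counts:
        return []
    ranked = Counter(tag_counts).most_common()
    top_freq = ranked[0][1]
    return [tag for tag, n in ranked if n >= threshold * top_freq]

# A document tagged by many users: the long tail is dropped.
counts = {"metadata": 40, "folksonomy": 35, "web2.0": 12, "misc": 2, "stuff": 1}
print(power_tags(counts))  # ['metadata', 'folksonomy']
```

    The surviving head tags would then serve as the additional "power tag" search option the paper proposes.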
  11. Hardesty, J.L.; Young, J.B.: ¬The semantics of metadata : Avalon Media System and the move to RDF (2017) 0.02
    0.021304728 = product of:
      0.085218914 = sum of:
        0.025159499 = weight(_text_:web in 3896) [ClassicSimilarity], result of:
          0.025159499 = score(doc=3896,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 3896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
        0.034899916 = weight(_text_:world in 3896) [ClassicSimilarity], result of:
          0.034899916 = score(doc=3896,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.25480178 = fieldWeight in 3896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
        0.025159499 = weight(_text_:web in 3896) [ClassicSimilarity], result of:
          0.025159499 = score(doc=3896,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 3896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
      0.25 = coord(3/12)
    
    Abstract
    The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
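    The descriptive-metadata move sketched in this abstract - from MODS XML accompanying an object to RDF statements about it - can be shown in miniature. The field choice and the Dublin Core predicate below are illustrative assumptions, not Avalon's actual mapping.

```python
import xml.etree.ElementTree as ET

# A minimal MODS-like record; the namespace is real, the content invented.
mods = """<mods xmlns="http://www.loc.gov/mods/v3">
  <titleInfo><title>Field Recordings, 1968</title></titleInfo>
</mods>"""

NS = {"m": "http://www.loc.gov/mods/v3"}

def mods_title_to_ntriples(xml_text, subject_uri):
    """Map the MODS title element to one N-Triples statement
    (here dcterms:title, a common but assumed choice of predicate)."""
    root = ET.fromstring(xml_text)
    title = root.findtext("m:titleInfo/m:title", namespaces=NS)
    return f'<{subject_uri}> <http://purl.org/dc/terms/title> "{title}" .'

print(mods_title_to_ntriples(mods, "https://example.org/item/1"))
```

    A full migration would cover many fields and real repository URIs, but each follows this same element-to-statement pattern.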
  12. Pomerantz, J.: Metadata (2015) 0.02
    0.020342497 = product of:
      0.08136999 = sum of:
        0.02905169 = weight(_text_:web in 3800) [ClassicSimilarity], result of:
          0.02905169 = score(doc=3800,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24981049 = fieldWeight in 3800, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3800)
        0.02326661 = weight(_text_:world in 3800) [ClassicSimilarity], result of:
          0.02326661 = score(doc=3800,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.16986786 = fieldWeight in 3800, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=3800)
        0.02905169 = weight(_text_:web in 3800) [ClassicSimilarity], result of:
          0.02905169 = score(doc=3800,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.24981049 = fieldWeight in 3800, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3800)
      0.25 = coord(3/12)
    
    Abstract
    When "metadata" became breaking news, appearing in stories about surveillance by the National Security Agency, many members of the public encountered this once-obscure term from information science for the first time. Should people be reassured that the NSA was "only" collecting metadata about phone calls -- information about the caller, the recipient, the time, the duration, the location -- and not recordings of the conversations themselves? Or does phone call metadata reveal more than it seems? In this book, Jeffrey Pomerantz offers an accessible and concise introduction to metadata. In the era of ubiquitous computing, metadata has become infrastructural, like the electrical grid or the highway system. We interact with it or generate it every day. It is not, Pomerantz tell us, just "data about data." It is a means by which the complexity of an object is represented in a simpler form. For example, the title, the author, and the cover art are metadata about a book. When metadata does its job well, it fades into the background; everyone (except perhaps the NSA) takes it for granted. Pomerantz explains what metadata is, and why it exists. He distinguishes among different types of metadata -- descriptive, administrative, structural, preservation, and use -- and examines different users and uses of each type. He discusses the technologies that make modern metadata possible, and he speculates about metadata's future. By the end of the book, readers will see metadata everywhere. Because, Pomerantz warns us, it's metadata's world, and we are just living in it.
    Content
    Introduction -- Definitions -- Descriptive metadata -- Administrative metadata -- Use metadata -- Enabling technologies for metadata -- The Semantic Web -- The future of metadata.
    RSWK
    Metadaten / Semantic Web / Metadatenmodell
    Subject
    Metadaten / Semantic Web / Metadatenmodell
  13. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.02
    0.018900909 = product of:
      0.075603634 = sum of:
        0.02935275 = weight(_text_:web in 2606) [ClassicSimilarity], result of:
          0.02935275 = score(doc=2606,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 2606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2606)
        0.02935275 = weight(_text_:web in 2606) [ClassicSimilarity], result of:
          0.02935275 = score(doc=2606,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 2606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2606)
        0.016898135 = product of:
          0.03379627 = sum of:
            0.03379627 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
              0.03379627 = score(doc=2606,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.2708308 = fieldWeight in 2606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    Catalogers have become fluent in information technology such as web design skills, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper will discuss the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
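    The kind of schema-to-schema repurposing this abstract attributes to XSLT can be sketched as follows. Python's standard library has no XSLT processor (applying a real stylesheet would need a third-party package such as lxml), so this sketch performs the same one-template mapping imperatively; the source and target element names are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A Dublin Core-style source record (content invented).
dc = ET.fromstring(
    "<record>"
    "<dc_title>Metadata Makeover</dc_title>"
    "<dc_creator>Ilik, V.</dc_creator>"
    "</record>"
)

def transform(record):
    """Repurpose dc_* source elements into a hypothetical target schema,
    mirroring what an XSLT template rule would do declaratively."""
    out = ET.Element("mods_like")
    title = ET.SubElement(ET.SubElement(out, "titleInfo"), "title")
    title.text = record.findtext("dc_title")
    name = ET.SubElement(out, "name")
    name.text = record.findtext("dc_creator")
    return out

print(ET.tostring(transform(dc), encoding="unicode"))
```

    In XSLT the same mapping would be a pair of template rules matching `dc_title` and `dc_creator`; the point of the paper is that catalogers already have the skills to write either version.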
    Date
    10. 9.2000 17:38:22
  14. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.02
    0.018900909 = product of:
      0.075603634 = sum of:
        0.02935275 = weight(_text_:web in 3283) [ClassicSimilarity], result of:
          0.02935275 = score(doc=3283,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.02935275 = weight(_text_:web in 3283) [ClassicSimilarity], result of:
          0.02935275 = score(doc=3283,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.016898135 = product of:
          0.03379627 = sum of:
            0.03379627 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.03379627 = score(doc=3283,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Theme
    Semantic Web
  15. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    0.017842902 = product of:
      0.07137161 = sum of:
        0.029650755 = weight(_text_:web in 4550) [ClassicSimilarity], result of:
          0.029650755 = score(doc=4550,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 4550, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.029650755 = weight(_text_:web in 4550) [ClassicSimilarity], result of:
          0.029650755 = score(doc=4550,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 4550, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.012070097 = product of:
          0.024140194 = sum of:
            0.024140194 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.024140194 = score(doc=4550,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
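    The flattening step at the heart of this workflow - turning a CrossRef work record into a repository CSV row - can be sketched as below. In the article's method the records come from the CrossRef API via the Habanero library; here a sample record (CrossRef's well-known test DOI) stands in for that call, and the column choice is an assumption, not the authors' template.

```python
import csv
import io

def work_to_row(work):
    """Flatten one CrossRef 'work' message into a CSV row for batch import.
    Column choice (DOI, first title, authors) is illustrative."""
    authors = "; ".join(
        f"{a.get('family', '')}, {a.get('given', '')}" for a in work.get("author", [])
    )
    return [work.get("DOI", ""), (work.get("title") or [""])[0], authors]

# Stand-in for habanero.Crossref().works(ids=doi)["message"].
sample = {
    "DOI": "10.5555/12345678",
    "title": ["Toward a Unified Theory of High-Energy Metaphysics"],
    "author": [{"family": "Carberry", "given": "Josiah"}],
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["doi", "title", "authors"])
writer.writerow(work_to_row(sample))
print(buf.getvalue())
```

    Looping this over a list of DOIs yields the batch-import CSV; enrichment from a Web of Science TSV would add further columns keyed on the DOI.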
    Date
    10.11.2018 16:27:22
  16. Wartburg, K. von; Sibille, C.; Aliverti, C.: Metadata collaboration between the Swiss National Library and research institutions in the field of Swiss historiography (2019) 0.02
    0.016200779 = product of:
      0.064803116 = sum of:
        0.025159499 = weight(_text_:web in 5272) [ClassicSimilarity], result of:
          0.025159499 = score(doc=5272,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 5272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5272)
        0.025159499 = weight(_text_:web in 5272) [ClassicSimilarity], result of:
          0.025159499 = score(doc=5272,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.21634221 = fieldWeight in 5272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5272)
        0.014484116 = product of:
          0.028968232 = sum of:
            0.028968232 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
              0.028968232 = score(doc=5272,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.23214069 = fieldWeight in 5272, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5272)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    This article presents examples of metadata collaborations between the Swiss National Library (NL) and research institutions in the field of Swiss historiography. The NL publishes the Bibliography on Swiss History (BSH). In order to meet the demands of its research community, the NL has improved the accessibility and interoperability of the BSH database. Moreover, the BSH takes part in metadata projects such as Metagrid, a web service linking different historical databases. Other metadata collaborations with partners in the historical field such as the Law Sources Foundation (LSF) will position the BSH as an indispensable literature hub for publications on Swiss history.
    Date
    30. 5.2019 19:22:49
  17. DeZelar-Tiedman, C.: Exploring user-contributed metadata's potential to enhance access to literary works (2011) 0.02
    0.0161372 = product of:
      0.09682319 = sum of:
        0.08233908 = weight(_text_:tagging in 2595) [ClassicSimilarity], result of:
          0.08233908 = score(doc=2595,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.39137518 = fieldWeight in 2595, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.046875 = fieldNorm(doc=2595)
        0.014484116 = product of:
          0.028968232 = sum of:
            0.028968232 = weight(_text_:22 in 2595) [ClassicSimilarity], result of:
              0.028968232 = score(doc=2595,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.23214069 = fieldWeight in 2595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2595)
          0.5 = coord(1/2)
      0.16666667 = coord(2/12)
    
    Abstract
    Academic libraries have moved toward providing social networking features, such as tagging, in their library catalogs. To explore whether user tags can enhance access to individual literary works, the author obtained a sample of individual works of English and American literature from the twentieth and twenty-first centuries from a large academic library catalog and searched them in LibraryThing. The author compared match rates, the availability of subject headings and tags across various literary forms, and the terminology used in tags versus controlled-vocabulary headings on a subset of records. In addition, she evaluated the usefulness of available LibraryThing tags for the library catalog records that lacked subject headings. Options for utilizing the subject terms available in sources outside the local catalog also are discussed.
    Date
    10. 9.2000 17:38:22
  18. Miller, S.: Introduction to ontology concepts and terminology : DC-2013 Tutorial, September 2, 2013. (2013) 0.02
    0.015813736 = product of:
      0.094882414 = sum of:
        0.047441207 = weight(_text_:web in 1075) [ClassicSimilarity], result of:
          0.047441207 = score(doc=1075,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.4079388 = fieldWeight in 1075, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1075)
        0.047441207 = weight(_text_:web in 1075) [ClassicSimilarity], result of:
          0.047441207 = score(doc=1075,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.4079388 = fieldWeight in 1075, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1075)
      0.16666667 = coord(2/12)
    
    Content
    Tutorial topics and outline:
    1. Tutorial Background: overview of the Semantic Web, Linked Data, and the Resource Description Framework
    2. Ontology Basics and RDFS: semantic modeling, domain ontologies, and RDF Vocabulary Description Language (RDFS) concepts and terminology; examples of domain ontologies, models, and schemas; exercises
    3. OWL Overview: Web Ontology Language (OWL) selected concepts and terminology; exercises
  19. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.02
    0.01562732 = product of:
      0.093763925 = sum of:
        0.046881963 = weight(_text_:web in 3895) [ClassicSimilarity], result of:
          0.046881963 = score(doc=3895,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.40312994 = fieldWeight in 3895, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
        0.046881963 = weight(_text_:web in 3895) [ClassicSimilarity], result of:
          0.046881963 = score(doc=3895,freq=10.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.40312994 = fieldWeight in 3895, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
      0.16666667 = coord(2/12)
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted which is usually done with the help of software developers as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes or funding agencies. As data curation is a typical task that is done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not demand the digital library curators to program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define data to be extracted from websites in a declarative way. By taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
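    OXPath layers page interaction and extraction markers on top of XPath; its declarative core - stating which nodes hold the metadata rather than how to crawl for them - can be shown with plain XPath from the Python standard library. The page structure below is invented for illustration.

```python
import xml.etree.ElementTree as ET

# A toy publication listing standing in for a data provider's website.
page = ET.fromstring(
    "<html><body>"
    "<div class='pub'><h2>Paper One</h2><span class='year'>2016</span></div>"
    "<div class='pub'><h2>Paper Two</h2><span class='year'>2017</span></div>"
    "</body></html>"
)

# One declarative expression per field, no imperative crawling code:
records = [
    {"title": pub.findtext("h2"), "year": pub.findtext("span[@class='year']")}
    for pub in page.iterfind(".//div[@class='pub']")
]
print(records)
```

    An OXPath wrapper adds to this the pieces plain XPath lacks - following "next page" links, filling forms, and tagging which matched nodes become output records.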
  20. Alemu, G.: ¬A theory of metadata enriching and filtering (2016) 0.02
    0.015008157 = product of:
      0.06003263 = sum of:
        0.016773 = weight(_text_:web in 5068) [ClassicSimilarity], result of:
          0.016773 = score(doc=5068,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 5068, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=5068)
        0.016773 = weight(_text_:web in 5068) [ClassicSimilarity], result of:
          0.016773 = score(doc=5068,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 5068, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=5068)
        0.02648663 = product of:
          0.05297326 = sum of:
            0.05297326 = weight(_text_:2.0 in 5068) [ClassicSimilarity], result of:
              0.05297326 = score(doc=5068,freq=2.0), product of:
                0.20667298 = queryWeight, product of:
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.035634913 = queryNorm
                0.2563144 = fieldWeight in 5068, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.799733 = idf(docFreq=363, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5068)
          0.5 = coord(1/2)
      0.25 = coord(3/12)
    
    Abstract
    This paper presents a new theory of metadata enriching and filtering. The theory emerged from a rigorous grounded theory data analysis of 57 in-depth interviews with metadata experts, library and information science researchers, librarians as well as academic library users (G. Alemu, A Theory of Digital Library Metadata: The Emergence of Enriching and Filtering, University of Portsmouth PhD thesis, Portsmouth, 2014). Partly due to the novelty of Web 2.0 approaches and mainly due to the absence of foundational theories to underpin socially constructed metadata approaches, this research adopted a social constructivist philosophical approach and a constructivist grounded theory method (K. Charmaz, Constructing Grounded Theory: A Practical Guide through Qualitative Analysis, SAGE Publications, London, 2006). The theory espouses the importance of enriching information objects with descriptions pertaining to the about-ness of information objects. Such richness and diversity of descriptions, it is argued, could chiefly be achieved by involving users in the metadata creation process. The theory includes four overarching metadata principles - metadata enriching, linking, openness and filtering. The theory proposes a mixed metadata approach where metadata experts provide the requisite basic descriptive metadata, structure and interoperability (a priori metadata) while users continually enrich it with their own interpretations (post-hoc metadata). Enriched metadata is inter- and cross-linked (the principle of linking), made openly accessible (the principle of openness) and presented (the principle of filtering) according to user needs. It is argued that enriched, interlinked and open metadata effectively rises to and scales with the challenges presented by growing digital collections and changing user expectations. This metadata approach allows users to pro-actively engage in co-creating metadata, hence enhancing the findability, discoverability and subsequent usage of information resources. This paper concludes by indicating the current challenges and opportunities to implement the theory of metadata enriching and filtering.
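    The mixed a priori/post-hoc approach in this abstract has a simple data-structure reading, sketched below: experts supply the authoritative record, users append enrichments, and filtering selects a view per need. All field names are illustrative, not drawn from the thesis.

```python
# A priori metadata: the expert-supplied descriptive core (invented values).
a_priori = {"title": "Informed Systems", "identifier": "isbn:978-0000000000"}

# Post-hoc metadata: user-contributed enrichments (principle of enriching).
post_hoc = {"tags": ["workplace-learning", "systems-thinking"],
            "reviews": ["Useful for team design."]}

def enrich(expert, user):
    """Expert fields stay authoritative; user contributions are appended."""
    merged = dict(expert)
    merged.update({k: list(v) for k, v in user.items()})
    return merged

def filter_view(record, wanted):
    """Principle of filtering: present only the fields a user asked for."""
    return {k: record[k] for k in wanted if k in record}

record = enrich(a_priori, post_hoc)
print(filter_view(record, ["title", "tags"]))
```

    Linking and openness would then apply to the merged record as a whole: cross-referencing it with other records and exposing it for reuse.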
