Search (64 results, page 1 of 4)

  • theme_ss:"Metadaten"
  • year_i:[2010 TO 2020}
  1. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.05
    0.051763047 = product of:
      0.10352609 = sum of:
        0.036585998 = weight(_text_:web in 3283) [ClassicSimilarity], result of:
          0.036585998 = score(doc=3283,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.04587784 = weight(_text_:computer in 3283) [ClassicSimilarity], result of:
          0.04587784 = score(doc=3283,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.04212451 = score(doc=3283,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Series
    Communications in computer and information science; 672
    Theme
    Semantic Web
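The nested breakdowns under each hit are Lucene ClassicSimilarity "explain" output: each term contributes weight = queryWeight × fieldWeight, the term weights are summed, and the sum is scaled by a coord factor. As a cross-check, here is a minimal sketch of that arithmetic; the function name is ours, and the assumed formulas are Lucene's classic tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)):

```python
import math

def term_weight(freq, doc_freq, max_docs, field_norm, query_norm):
    """One term's contribution to a hit's score, mirroring the explain
    trees above: weight = queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                              # e.g. 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Reproduce hit 1 (doc 3283): terms "web", "computer", and a nested "22"
# clause that is itself scaled by coord(1/2); the outer factor is coord(3/6).
qn, fn = 0.044416238, 0.0546875
w_web = term_weight(2.0, 4597, 44218, fn, qn)       # ~0.0365860
w_computer = term_weight(2.0, 3109, 44218, fn, qn)  # ~0.0458778
w_22 = term_weight(2.0, 3622, 44218, fn, qn) * 0.5  # inner coord(1/2)
score = (w_web + w_computer + w_22) * (3.0 / 6.0)   # coord(3/6), ~0.0517630
```

Plugging in the docFreq, fieldNorm, and queryNorm values printed in the tree reproduces the reported 0.051763047 to within floating-point rounding.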
  2. Gartner, R.: Metadata : shaping knowledge from antiquity to the semantic web (2016) 0.04
    0.044178404 = product of:
      0.1325352 = sum of:
        0.052265707 = weight(_text_:web in 731) [ClassicSimilarity], result of:
          0.052265707 = score(doc=731,freq=8.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.36057037 = fieldWeight in 731, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
        0.0802695 = weight(_text_:computer in 731) [ClassicSimilarity], result of:
          0.0802695 = score(doc=731,freq=12.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.4945153 = fieldWeight in 731, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
      0.33333334 = coord(2/6)
    
    Abstract
    This book offers a comprehensive guide to the world of metadata, from its origins in the ancient cities of the Middle East to the Semantic Web of today. The author takes us on a journey through the centuries-old history of metadata up to the modern world of crowdsourcing and Google, showing how metadata works and what it is made of. He explores how it has been used ideologically and why it can never be objective, and argues that it is central to human cultures and the way they develop. Metadata: Shaping Knowledge from Antiquity to the Semantic Web is for all readers with an interest in how we humans organize our knowledge and why this is important. It is suitable both for those new to the subject and for those who already know its basics, and it makes an excellent introduction for students of information science and librarianship.
    LCSH
    Computer science
    Computer applications in arts and humanities
    Popular computer science
    Subject
    Computer science
    Computer applications in arts and humanities
    Popular computer science
    Theme
    Semantic Web
  3. Metadata and semantics research : 9th Research Conference, MTSR 2015, Manchester, UK, September 9-11, 2015, Proceedings (2015) 0.04
    0.04409325 = product of:
      0.13227975 = sum of:
        0.04434892 = weight(_text_:web in 3274) [ClassicSimilarity], result of:
          0.04434892 = score(doc=3274,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3059541 = fieldWeight in 3274, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3274)
        0.087930836 = weight(_text_:computer in 3274) [ClassicSimilarity], result of:
          0.087930836 = score(doc=3274,freq=10.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.5417144 = fieldWeight in 3274, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=3274)
      0.33333334 = coord(2/6)
    
    Content
    The papers are organized in several sessions and tracks: general track on ontology evolution, engineering, and frameworks, semantic Web and metadata extraction, modelling, interoperability and exploratory search, data analysis, reuse and visualization; track on digital libraries, information retrieval, linked and social data; track on metadata and semantics for open repositories, research information systems and data infrastructure; track on metadata and semantics for agriculture, food and environment; track on metadata and semantics for cultural collections and applications; track on European and national projects.
    LCSH
    Computer science
    Text processing (Computer science)
    Series
    Communications in computer and information science; 544
    Subject
    Computer science
    Text processing (Computer science)
    Theme
    Semantic Web
  4. Woitas, K.: Bibliografische Daten, Normdaten und Metadaten im Semantic Web : Konzepte der bibliografischen Kontrolle im Wandel (2010) 0.04
    0.042185646 = product of:
      0.12655693 = sum of:
        0.06812209 = weight(_text_:wide in 115) [ClassicSimilarity], result of:
          0.06812209 = score(doc=115,freq=4.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.34615302 = fieldWeight in 115, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=115)
        0.05843484 = weight(_text_:web in 115) [ClassicSimilarity], result of:
          0.05843484 = score(doc=115,freq=10.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.40312994 = fieldWeight in 115, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=115)
      0.33333334 = coord(2/6)
    
    Abstract
    Bibliographic data, authority data and metadata in the Semantic Web: concepts of bibliographic control in transition. The title of this thesis points to an essential field of library and information science: bibliographic control. The second central concept is the Semantic Web, a term significant in the further development of the World Wide Web (WWW). At first glance this looks like an unequal contest. On one side stands bibliographic control, which comprises the methods and means for the description of library objects and traditionally takes the form of formal and subject surrogates in catalogues. On the other side stands the buzzword Semantic Web, with its lofty connotations of a web made "meaning-bearing", if not "intelligent", through self-referentiality. How, then, did an academic librarian and a member of the World Wide Web Consortium come to publish a joint paper in 2007 claiming that the semantic web would be a "more library-like" web? To approach this question, the historical development of the two information spheres, the library and the WWW, is first considered together briefly. For as often as the informational revolution brought about by the Internet is invoked, and rightly so, the analogy of a worldwide virtual library keeps resurfacing. More precisely, the theoretical considerations that would later lead to the development of the Internet took as their starting point (alongside cybernetics and emerging computer technology) the concept of the library as an information store.
    Theme
    Semantic Web
  5. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.03
    0.03313618 = product of:
      0.099408545 = sum of:
        0.026132854 = weight(_text_:web in 2192) [ClassicSimilarity], result of:
          0.026132854 = score(doc=2192,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.18028519 = fieldWeight in 2192, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2192)
        0.07327569 = weight(_text_:computer in 2192) [ClassicSimilarity], result of:
          0.07327569 = score(doc=2192,freq=10.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.45142862 = fieldWeight in 2192, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2192)
      0.33333334 = coord(2/6)
    
    LCSH
    Computer science
    Text processing (Computer science)
    Series
    Communications in computer and information science; 478
    Subject
    Computer science
    Text processing (Computer science)
    Theme
    Semantic Web
  6. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.03
    0.031876236 = product of:
      0.09562871 = sum of:
        0.06553978 = weight(_text_:computer in 3280) [ClassicSimilarity], result of:
          0.06553978 = score(doc=3280,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.40377006 = fieldWeight in 3280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=3280)
        0.030088935 = product of:
          0.06017787 = sum of:
            0.06017787 = weight(_text_:22 in 3280) [ClassicSimilarity], result of:
              0.06017787 = score(doc=3280,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.38690117 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3280)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  7. Hajra, A. et al.: Enriching scientific publications from LOD repositories through word embeddings approach (2016) 0.03
    0.031876236 = product of:
      0.09562871 = sum of:
        0.06553978 = weight(_text_:computer in 3281) [ClassicSimilarity], result of:
          0.06553978 = score(doc=3281,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.40377006 = fieldWeight in 3281, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=3281)
        0.030088935 = product of:
          0.06017787 = sum of:
            0.06017787 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.06017787 = score(doc=3281,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.38690117 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  8. Mora-Mcginity, M. et al.: MusicWeb: music discovery with open linked semantic metadata (2016) 0.03
    0.031876236 = product of:
      0.09562871 = sum of:
        0.06553978 = weight(_text_:computer in 3282) [ClassicSimilarity], result of:
          0.06553978 = score(doc=3282,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.40377006 = fieldWeight in 3282, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=3282)
        0.030088935 = product of:
          0.06017787 = sum of:
            0.06017787 = weight(_text_:22 in 3282) [ClassicSimilarity], result of:
              0.06017787 = score(doc=3282,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.38690117 = fieldWeight in 3282, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3282)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Series
    Communications in computer and information science; 672
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  9. Assumpção, F.S.; Santarem Segundo, J.E.; Ventura Amorim da Costa Santos, P.L.: RDA element sets and RDA value vocabularies : vocabularies for resource description in the Semantic Web (2015) 0.03
    0.031213328 = product of:
      0.093639985 = sum of:
        0.054316122 = weight(_text_:web in 2389) [ClassicSimilarity], result of:
          0.054316122 = score(doc=2389,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.37471575 = fieldWeight in 2389, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2389)
        0.039323866 = weight(_text_:computer in 2389) [ClassicSimilarity], result of:
          0.039323866 = score(doc=2389,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.24226204 = fieldWeight in 2389, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2389)
      0.33333334 = coord(2/6)
    
    Abstract
    Considering the need for metadata standards suitable for the Semantic Web, this paper describes the RDA Element Sets and the RDA Value Vocabularies that were created from attributes and relationships defined in Resource Description and Access (RDA). First, we present the vocabularies included in RDA Element Sets: the vocabularies of classes, of properties and of properties unconstrained by FRBR entities; and then we present the RDA Value Vocabularies, which are under development. As a conclusion, we highlight that these vocabularies can be used to meet the needs of different contexts due to the unconstrained properties and to the independence of the vocabularies of properties from the vocabularies of values and vice versa.
    Series
    Communications in computer and information science; 544
    Theme
    Semantic Web
  10. Managing metadata in web-scale discovery systems (2016) 0.03
    0.029915132 = product of:
      0.089745395 = sum of:
        0.03853567 = weight(_text_:wide in 3336) [ClassicSimilarity], result of:
          0.03853567 = score(doc=3336,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 3336, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
        0.051209725 = weight(_text_:web in 3336) [ClassicSimilarity], result of:
          0.051209725 = score(doc=3336,freq=12.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.35328537 = fieldWeight in 3336, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
      0.33333334 = coord(2/6)
    
    Abstract
    This book shows you how to harness the power of linked data and web-scale discovery systems to manage and link widely varied content across your library collection. Libraries are increasingly using web-scale discovery systems to help clients find a wide assortment of library materials, including books, journal articles, special collections, archival collections, videos, music and open access collections. Depending on the library material catalogued, the discovery system might need to negotiate different metadata standards, such as AACR, RDA, RAD, FOAF, VRA Core, METS, MODS, RDF and more. In Managing Metadata in Web-Scale Discovery Systems, editor Louise Spiteri and a range of international experts show you how to:
    * maximize the effectiveness of web-scale discovery systems
    * provide a smooth and seamless discovery experience to your users
    * help users conduct searches that yield relevant results
    * manage the sheer volume of items to which you can provide access, so your users can actually find what they need
    * maintain shared records that reflect the needs, languages, and identities of culturally and ethnically varied communities
    * manage metadata both within, across, and outside library discovery tools by converting your library metadata to linked open data that all systems can access
    * manage user-generated metadata from external services such as Goodreads and LibraryThing
    * mine user-generated metadata to better serve your users in areas such as collection development or readers' advisory.
    The book will be essential reading for cataloguers, technical services and systems librarians, and library and information science students studying modules on metadata, cataloguing, systems design, data management, and digital libraries. The book will also be of interest to those managing metadata in archives, museums and other cultural heritage institutions.
    Content
    1. Introduction: the landscape of web-scale discovery - Louise Spiteri
    2. Sharing metadata across discovery systems - Marshall Breeding, Angela Kroeger and Heather Moulaison Sandy
    3. Managing linked open data across discovery systems - Ali Shiri and Danoosh Davoodi
    4. Redefining library resources in discovery systems - Christine DeZelar-Tiedman
    5. Managing volume in discovery systems - Aaron Tay
    6. Managing outsourced metadata in discovery systems - Laurel Tarulli
    7. Managing user-generated metadata in discovery systems - Louise Spiteri
  11. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.03
    0.028375676 = product of:
      0.085127026 = sum of:
        0.04816959 = weight(_text_:wide in 4048) [ClassicSimilarity], result of:
          0.04816959 = score(doc=4048,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.036957435 = weight(_text_:web in 4048) [ClassicSimilarity], result of:
          0.036957435 = score(doc=4048,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25496176 = fieldWeight in 4048, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose: Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
    Design/methodology/approach: The DSP-PROV model is developed through applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study to apply the DSP-PROV model. Finally, this paper evaluates the proposed model by comparison between formal provenance description in DSP-PROV and semi-formal change log description in English.
    Findings: Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English to keep metadata schemas consistent over time.
    Research limitations/implications: The DSP-PROV model is applicable to keep track of the structural changes of metadata schemas over time. Provenance description of other features of metadata schemas, such as vocabulary and encoding syntax, is not covered.
    Originality/value: This study proposes a simple model for provenance description of structural features of metadata schemas based on a few standards widely accepted on the Web and shows the advantage of the proposed model over conventional semi-formal provenance description.
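The DSP-PROV idea described above, tracking a schema revision with W3C PROV terms, can be pictured as a handful of provenance statements about two schema versions. A minimal sketch in plain tuples; the identifiers (dsp:dcap_v1 and so on) are hypothetical, and only the prov: terms come from the W3C PROV vocabulary:

```python
def prov_statements(old_schema, new_schema, activity):
    """Return PROV-style triples describing one schema revision:
    the new version derives from the old one via a revision activity."""
    return [
        (new_schema, "prov:wasDerivedFrom", old_schema),
        (new_schema, "prov:wasGeneratedBy", activity),
        (activity, "prov:used", old_schema),
    ]

# Hypothetical identifiers for two application-profile versions.
triples = prov_statements("dsp:dcap_v1", "dsp:dcap_v2", "dsp:revise_2018")
```

A formal record like this can be queried and diffed mechanically, which is the advantage the paper claims over a free-text change log.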
  12. Handbook of metadata, semantics and ontologies (2014) 0.02
    0.024915472 = product of:
      0.074746415 = sum of:
        0.03853567 = weight(_text_:wide in 5134) [ClassicSimilarity], result of:
          0.03853567 = score(doc=5134,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 5134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
        0.036210746 = weight(_text_:web in 5134) [ClassicSimilarity], result of:
          0.036210746 = score(doc=5134,freq=6.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.24981049 = fieldWeight in 5134, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
      0.33333334 = coord(2/6)
    
    Abstract
    Metadata research has emerged as a discipline cross-cutting many domains, focused on the provision of distributed descriptions (often called annotations) to Web resources or applications. Such associated descriptions are supposed to serve as a foundation for advanced services in many application areas, including search and location, personalization, federation of repositories and automated delivery of information. Indeed, the Semantic Web is in itself a concrete technological framework for ontology-based metadata. For example, Web-based social networking requires metadata describing people and their interrelations, and large databases with biological information use complex and detailed metadata schemas for more precise and informed search strategies. There is a wide diversity in the languages and idioms used for providing meta-descriptions, from simple structured text in metadata schemas to formal annotations using ontologies, and the technologies for storing, sharing and exploiting meta-descriptions are also diverse and evolve rapidly. In addition, there is a proliferation of schemas and standards related to metadata, resulting in a complex and moving technological landscape - hence, the need for specialized knowledge and skills in this area. The Handbook of Metadata, Semantics and Ontologies is intended as an authoritative reference for students, practitioners and researchers, serving as a roadmap for the variety of metadata schemas and ontologies available in a number of key domain areas, including culture, biology, education, healthcare, engineering and library science.
  13. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.02
    0.019216085 = product of:
      0.057648253 = sum of:
        0.036585998 = weight(_text_:web in 2606) [ClassicSimilarity], result of:
          0.036585998 = score(doc=2606,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 2606, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2606)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
              0.04212451 = score(doc=2606,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 2606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Stylesheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning these technologies can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper discusses the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
    Date
    10. 9.2000 17:38:22
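The abstract above uses XSLT for schema-to-schema repurposing; running XSLT needs an external processor, so as a rough stdlib stand-in, the same kind of mapping can be sketched with ElementTree. The record and the flat target schema here are illustrative, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# Dublin Core namespace in ElementTree's Clark notation.
DC = "{http://purl.org/dc/elements/1.1/}"

def dc_to_flat(record_xml):
    """Map a minimal Dublin Core record to a flat dict, the kind of
    repurposing the paper performs with XSLT stylesheets."""
    root = ET.fromstring(record_xml)
    return {child.tag.replace(DC, ""): child.text for child in root}

# Hypothetical input record.
record = """<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Metadata makeover</dc:title>
  <dc:creator>Ilik, V.</dc:creator>
</record>"""
```

An XSLT stylesheet would express the same element-by-element mapping declaratively; the point is only that the transformation is mechanical once both schemas are understood.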
  14. Belém, F.M.; Almeida, J.M.; Gonçalves, M.A.: ¬A survey on tag recommendation methods : a review (2017) 0.02
    0.017333968 = product of:
      0.0520019 = sum of:
        0.036957435 = weight(_text_:web in 3524) [ClassicSimilarity], result of:
          0.036957435 = score(doc=3524,freq=4.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25496176 = fieldWeight in 3524, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 3524) [ClassicSimilarity], result of:
              0.030088935 = score(doc=3524,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 3524, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3524)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Tags (keywords freely assigned by users to describe web content) have become highly popular in Web 2.0 applications because of the strong incentives for users, and the ease with which they are able, to create and describe their own content. This increase in tag popularity has led to a vast literature on tag recommendation methods. These methods aim at assisting users in the tagging process, possibly increasing the quality of the generated tags and, consequently, improving the quality of the information retrieval (IR) services that rely on tags as data sources. Despite the numerous and diverse previous studies on tag recommendation, to our knowledge no previous work has summarized and organized them into a single survey article. In this article, we propose a taxonomy for tag recommendation methods, classifying them according to the target of the recommendations, their objectives, exploited data sources, and underlying techniques. Moreover, we provide a critical overview of these methods, pointing out their advantages and disadvantages. Finally, we describe the main open challenges related to the field, such as tag ambiguity, cold start, and evaluation issues.
    Date
    16.11.2017 13:30:22
  15. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.02
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
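The CSV-population step described above can be sketched with the standard library alone. This is an illustrative analogue, not the Waterloo template script: the column names and field mapping below are assumptions, and the work records are plain dicts shaped like the CrossRef REST API's work metadata (which the Habanero library returns).

```python
import csv
import io

# Hypothetical column set for a repository batch-import CSV; the real
# template would use the repository's own field names.
FIELDS = ["doi", "title", "year", "journal"]

def crossref_to_row(work):
    """Flatten one CrossRef-style work record (a dict) into a CSV row."""
    issued = work.get("issued", {}).get("date-parts", [[None]])
    return {
        "doi": work.get("DOI", ""),
        "title": "; ".join(work.get("title", [])),
        "year": issued[0][0] or "",
        "journal": "; ".join(work.get("container-title", [])),
    }

def works_to_csv(works):
    """Serialize a list of work records to a CSV string for batch upload."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for work in works:
        writer.writerow(crossref_to_row(work))
    return buf.getvalue()
```

Enriching these rows from a second source, as the article does with a Web of Science TSV, would amount to joining on the DOI column before writing.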
    Date
    10.11.2018 16:27:22
  16. Wartburg, K. von; Sibille, C.; Aliverti, C.: Metadata collaboration between the Swiss National Library and research institutions in the field of Swiss historiography (2019) 0.02
    
    Abstract
    This article presents examples of metadata collaborations between the Swiss National Library (NL) and research institutions in the field of Swiss historiography. The NL publishes the Bibliography on Swiss History (BSH). In order to meet the demands of its research community, the NL has improved the accessibility and interoperability of the BSH database. Moreover, the BSH takes part in metadata projects such as Metagrid, a web service linking different historical databases. Other metadata collaborations with partners in the historical field such as the Law Sources Foundation (LSF) will position the BSH as an indispensable literature hub for publications on Swiss history.
    Date
    30. 5.2019 19:22:49
  17. Miller, S.: Introduction to ontology concepts and terminology : DC-2013 Tutorial, September 2, 2013. (2013) 0.01
    
    Content
    Tutorial topics and outline:
    1. Tutorial Background Overview: the Semantic Web, Linked Data, and the Resource Description Framework (RDF)
    2. Ontology Basics and RDFS Tutorial: semantic modeling, domain ontologies, and RDF Vocabulary Description Language (RDFS) concepts and terminology; examples (domain ontologies, models, and schemas); exercises
    3. OWL Overview Tutorial: Web Ontology Language (OWL), selected concepts and terminology; exercises
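The RDFS semantics covered in part 2 of the outline can be illustrated with a minimal sketch: triples as plain (subject, predicate, object) tuples, with rdf:type membership propagated up the rdfs:subClassOf hierarchy (the RDFS entailment rule usually labeled rdfs9). The class and instance names are invented for the example; this is not from the tutorial itself.

```python
SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"

def infer_types(triples):
    """Return the rdf:type triples entailed by rdfs:subClassOf closure."""
    supers = {}
    for s, p, o in triples:
        if p == SUBCLASS:
            supers.setdefault(s, set()).add(o)
    # Transitive closure of subClassOf: keep adding superclasses of
    # superclasses until nothing changes.
    changed = True
    while changed:
        changed = False
        for ups in supers.values():
            for up in list(ups):
                for more in supers.get(up, ()):
                    if more not in ups:
                        ups.add(more)
                        changed = True
    inferred = set()
    for s, p, o in triples:
        if p == TYPE:
            inferred.add((s, p, o))
            for up in supers.get(o, ()):
                inferred.add((s, p, up))   # instance of every superclass
    return inferred
```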
  18. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.01
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted which is usually done with the help of software developers as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website custom web scrapers are needed. This may be the case for small to medium-size publishers, research institutes or funding agencies. As data curation is a typical task that is done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not demand the digital library curators to program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define data to be extracted from websites in a declarative way. By taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
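Since OXPath extends XPath, the declarative flavor of such a wrapper can be approximated with plain XPath over a static page fragment; OXPath adds the parts this sketch cannot show (navigation actions, extraction markers against live sites). The markup and field names below are invented for illustration, using only the standard library's ElementTree XPath subset.

```python
import xml.etree.ElementTree as ET

# A toy XHTML fragment standing in for a publisher's result page.
PAGE = """
<html><body>
  <div class="record">
    <span class="title">Paper One</span>
    <span class="author">A. Author</span>
  </div>
  <div class="record">
    <span class="title">Paper Two</span>
    <span class="author">B. Writer</span>
  </div>
</body></html>
"""

def harvest(xhtml):
    """Extract (title, author) pairs from each record element via XPath."""
    root = ET.fromstring(xhtml)
    records = []
    for rec in root.findall(".//div[@class='record']"):
        title = rec.find("span[@class='title']").text
        author = rec.find("span[@class='author']").text
        records.append((title, author))
    return records
```

In OXPath the same extraction is written as a single declarative expression, so curators specify *what* to extract rather than scripting *how* to fetch and parse it.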
  19. Husevag, A.-S.R.: Named entities in indexing : a case study of TV subtitles and metadata records (2016) 0.01
    
    Abstract
    This paper explores the possible role of named entities in an automatic indexing process based on text in subtitles. This is done by analyzing entity types, name density, and name frequencies in subtitles and metadata records from different TV programs. The name density in metadata records is much higher than the name density in subtitles, and named entities with high frequencies in the subtitles are more likely to be mentioned in the metadata records. Personal names, geographical names, and names of organizations were the most prominent entity types in both the news subtitles and the news metadata, while persons, works, and locations were the most prominent in culture programs.
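The two measures compared in the study can be sketched in a few lines. This is only an illustration of the metrics, not the paper's method: the name set and example token streams are invented, and real named-entity recognition would replace the simple set lookup.

```python
from collections import Counter

def name_density(tokens, names):
    """Share of tokens that are named-entity mentions."""
    hits = sum(1 for t in tokens if t in names)
    return hits / len(tokens) if tokens else 0.0

def frequent_names(tokens, names, n=3):
    """The n most frequent named entities in a token stream."""
    counts = Counter(t for t in tokens if t in names)
    return [name for name, _ in counts.most_common(n)]
```

Comparing `name_density` for a subtitle stream against the corresponding metadata record, and checking whether `frequent_names` of the subtitles appear in the record, mirrors the paper's two observations.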
  20. Park, J.-R.; Tosaka, Y.: Metadata quality control in digital repositories and collections : criteria, semantics, and mechanisms (2010) 0.01
    
    Abstract
    This article evaluates practices on metadata quality control in digital repositories and collections using an online survey of cataloging and metadata professionals in the United States. The study examines (1) the perceived importance of metadata quality, (2) metadata quality evaluation criteria and issues, and (3) mechanisms for building quality assurance into the metadata creation process. The survey finds wide recognition of the essential role of metadata quality assurance. Accuracy and consistency are prioritized as the main criteria for metadata quality evaluation. Metadata semantics greatly affects consistent and accurate metadata application. Strong awareness of metadata quality correlates with the widespread adoption of various quality control mechanisms, such as staff training, manual review, metadata guidelines, and metadata generation tools. And yet, metadata guidelines are used less frequently as a quality assurance mechanism in digital collections involving multiple institutions.

Authors

Languages

  • e 57
  • d 6
  • pt 1

Types

  • a 48
  • el 12
  • m 11
  • s 6
  • x 2

Subjects