Search (116 results, page 1 of 6)

  • theme_ss:"Metadaten"
  • year_i:[2010 TO 2020}
  1. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas : Luciano Floridi's philosophy of information as the foundation for library and information science (2018) 0.04
    0.037692975 = product of:
      0.09423243 = sum of:
        0.070290476 = weight(_text_:philosophy in 4048) [ClassicSimilarity], result of:
          0.070290476 = score(doc=4048,freq=2.0), product of:
            0.23055021 = queryWeight, product of:
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.04177434 = queryNorm
            0.30488142 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5189433 = idf(docFreq=481, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.023941955 = weight(_text_:of in 4048) [ClassicSimilarity], result of:
          0.023941955 = score(doc=4048,freq=36.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.36650562 = fieldWeight in 4048, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
    Design/methodology/approach: The DSP-PROV model is developed by applying PROV, the World Wide Web Consortium's general provenance description standard, to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study to apply the DSP-PROV model. Finally, the paper evaluates the proposed model by comparing formal provenance description in DSP-PROV with semi-formal change-log description in English.
    Findings: Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time.
    Research limitations/implications: The DSP-PROV model is applicable to tracking structural changes of metadata schemas over time. Provenance description of other features of metadata schemas, such as vocabulary and encoding syntax, is not covered.
    Originality/value: This study proposes a simple model for provenance description of structural features of metadata schemas, based on a few standards widely accepted on the Web, and shows the advantage of the proposed model over conventional semi-formal provenance description.
    Source
    Journal of documentation. 74(2018) no.1, S.36-61
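
The relevance values above are standard Lucene "explain" output. As a rough check, the first entry's 0.0377 score can be recomputed from the quantities shown, assuming Lucene's ClassicSimilarity (TF-IDF) formulas; a minimal Python sketch:

```python
import math

# Minimal sketch, assuming ClassicSimilarity:
#   tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)),
#   term score = (idf * queryNorm) * (tf * idf * fieldNorm),
#   doc score  = coord * sum(term scores).
def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm        # e.g. 0.23055021 for "philosophy"
    field_weight = tf * idf * field_norm   # e.g. 0.30488142 for "philosophy"
    return query_weight * field_weight

QUERY_NORM, MAX_DOCS, FIELD_NORM = 0.04177434, 44218, 0.0390625

s = term_score(2.0, 481, MAX_DOCS, QUERY_NORM, FIELD_NORM)      # philosophy
s += term_score(36.0, 25162, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # of
print(0.4 * s)  # coord(2/5) * sum -> ~0.037692975, as reported above
```
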
  2. Willis, C.; Greenberg, J.; White, H.: Analysis and synthesis of metadata goals for scientific data (2012) 0.02
    0.020918151 = product of:
      0.034863584 = sum of:
        0.0066520358 = product of:
          0.033260178 = sum of:
            0.033260178 = weight(_text_:problem in 367) [ClassicSimilarity], result of:
              0.033260178 = score(doc=367,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.1875815 = fieldWeight in 367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=367)
          0.2 = coord(1/5)
        0.016891856 = weight(_text_:of in 367) [ClassicSimilarity], result of:
          0.016891856 = score(doc=367,freq=28.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.25858206 = fieldWeight in 367, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=367)
        0.011319693 = product of:
          0.022639386 = sum of:
            0.022639386 = weight(_text_:22 in 367) [ClassicSimilarity], result of:
              0.022639386 = score(doc=367,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.15476047 = fieldWeight in 367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=367)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg's metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (>0.6), a Fisher's exact test for nonparametric data was used to determine significance (p < .05). Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have "scheme harmonization" (compatibility and interoperability with related schemes) as an objective; schemes with the objective "abstraction" (a conceptual model exists separate from the technical implementation) also have the objective "sufficiency" (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective "data publication" do not have the objective "element refinement." The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.8, S.1505-1520
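
The significance testing described in the abstract is straightforward to reproduce. A minimal sketch with SciPy's fisher_exact; the 2x2 counts below are hypothetical, not the study's data:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (not the study's data):
# rows: schemes for observational vs. other data;
# cols: "scheme harmonization" objective present / absent.
table = [[4, 1],
         [1, 3]]

stat, p_value = fisher_exact(table)
print(f"p = {p_value:.3f}, significant at .05: {p_value < 0.05}")
```
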
  3. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.02
    0.015834233 = product of:
      0.03958558 = sum of:
        0.011286346 = weight(_text_:of in 3280) [ClassicSimilarity], result of:
          0.011286346 = score(doc=3280,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.17277241 = fieldWeight in 3280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=3280)
        0.028299233 = product of:
          0.056598466 = sum of:
            0.056598466 = weight(_text_:22 in 3280) [ClassicSimilarity], result of:
              0.056598466 = score(doc=3280,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.38690117 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3280)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  4. Alves dos Santos, E.; Mucheroni, M.L.: VIAF and OpenCitations : cooperative work as a strategy for information organization in the linked data era (2018) 0.02
    0.015311283 = product of:
      0.038278207 = sum of:
        0.01563882 = weight(_text_:of in 4826) [ClassicSimilarity], result of:
          0.01563882 = score(doc=4826,freq=6.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.23940048 = fieldWeight in 4826, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=4826)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 4826) [ClassicSimilarity], result of:
              0.045278773 = score(doc=4826,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 4826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4826)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    18. 1.2019 19:13:22
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro u. M.E. Cerveira
  5. Cho, H.; Donovan, A.; Lee, J.H.: Art in an algorithm : a taxonomy for describing video game visual styles (2018) 0.01
    0.014688924 = product of:
      0.03672231 = sum of:
        0.022572692 = weight(_text_:of in 4218) [ClassicSimilarity], result of:
          0.022572692 = score(doc=4218,freq=32.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.34554482 = fieldWeight in 4218, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4218)
        0.0141496165 = product of:
          0.028299233 = sum of:
            0.028299233 = weight(_text_:22 in 4218) [ClassicSimilarity], result of:
              0.028299233 = score(doc=4218,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.19345059 = fieldWeight in 4218, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4218)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The discovery and retrieval of video games in library and information systems is, by and large, dependent on a limited set of descriptive metadata. Noticeably missing from this metadata are classifications of visual style, despite the overwhelmingly visual nature of most video games and the interest in visual style among video game users. One explanation for this paucity is the difficulty in eliciting consistent judgements about visual style, likely due to subjective interpretations of terminology and a lack of demonstrable testing for coinciding judgements. This study presents a taxonomy of video game visual styles constructed from the findings of a 22-participant cataloging user study of visual styles. A detailed description of the study, including its value and shortcomings, is presented along with reflections on the challenges of cultivating consensus about visual style in video games. The high degree of overall agreement in the user study demonstrates the potential value of a descriptor like visual style and the use of a cataloging study in developing visual style taxonomies. The resulting visual style taxonomy and the methods and analysis described herein may help improve the organization and retrieval of video games and possibly other visual materials like graphic designs, illustrations, and animations.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.5, S.633-646
  6. Wartburg, K. von; Sibille, C.; Aliverti, C.: Metadata collaboration between the Swiss National Library and research institutions in the field of Swiss historiography (2019) 0.01
    0.013426805 = product of:
      0.03356701 = sum of:
        0.016587472 = weight(_text_:of in 5272) [ClassicSimilarity], result of:
          0.016587472 = score(doc=5272,freq=12.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.25392252 = fieldWeight in 5272, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=5272)
        0.016979538 = product of:
          0.033959076 = sum of:
            0.033959076 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
              0.033959076 = score(doc=5272,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23214069 = fieldWeight in 5272, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5272)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article presents examples of metadata collaborations between the Swiss National Library (NL) and research institutions in the field of Swiss historiography. The NL publishes the Bibliography on Swiss History (BSH). In order to meet the demands of its research community, the NL has improved the accessibility and interoperability of the BSH database. Moreover, the BSH takes part in metadata projects such as Metagrid, a web service linking different historical databases. Other metadata collaborations with partners in the historical field such as the Law Sources Foundation (LSF) will position the BSH as an indispensable literature hub for publications on Swiss history.
    Date
    30. 5.2019 19:22:49
    Footnote
    Contribution to a special issue: 'The Role and Function of National Bibliographies for Research'.
  7. Long, K.; Thompson, S.; Potvin, S.; Rivero, M.: ¬The "wicked problem" of neutral description : toward a documentation approach to metadata standards (2017) 0.01
    0.01339748 = product of:
      0.0334937 = sum of:
        0.0133040715 = product of:
          0.066520356 = sum of:
            0.066520356 = weight(_text_:problem in 5146) [ClassicSimilarity], result of:
              0.066520356 = score(doc=5146,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.375163 = fieldWeight in 5146, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5146)
          0.2 = coord(1/5)
        0.02018963 = weight(_text_:of in 5146) [ClassicSimilarity], result of:
          0.02018963 = score(doc=5146,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.3090647 = fieldWeight in 5146, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5146)
      0.4 = coord(2/5)
    
    Abstract
    Increasingly, metadata standards have been recognized as constructed rather than neutral. In this article, we argue for the importance of a documentation approach to metadata standards creation as a codification of this growing recognition. By making design decisions explicit, the documentation approach dispels presumptions of neutrality and, drawing on the "wicked problems" theoretical framework, acknowledges the constructed nature of standards as "clumsy solutions."
  8. Baker, T.: Dublin Core Application Profiles : current approaches (2010) 0.01
    0.012848704 = product of:
      0.03212176 = sum of:
        0.015142222 = weight(_text_:of in 3737) [ClassicSimilarity], result of:
          0.015142222 = score(doc=3737,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.23179851 = fieldWeight in 3737, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3737)
        0.016979538 = product of:
          0.033959076 = sum of:
            0.033959076 = weight(_text_:22 in 3737) [ClassicSimilarity], result of:
              0.033959076 = score(doc=3737,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23214069 = fieldWeight in 3737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3737)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Dublin Core Metadata Initiative currently defines a Dublin Core Application Profile as a set of specifications about the metadata design of a particular application or for a particular domain or community of users. The current approach to application profiles is summarized in the Singapore Framework for Application Profiles [SINGAPORE-FRAMEWORK]. While the approach originally developed as a means of specifying customized applications based on the fifteen elements of the Dublin Core Element Set (e.g., Title, Date, Subject), it has evolved into a generic approach to creating metadata that meets specific local requirements while integrating coherently with other RDF-based metadata.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
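
To make the application-profile idea concrete, here is a minimal, hypothetical sketch using rdflib: a record built from Dublin Core terms, checked against a toy two-field "profile". This is an illustration only, not the Singapore Framework's formal machinery:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
doc = URIRef("http://example.org/resource/3737")  # hypothetical resource URI
g.add((doc, DCTERMS.title, Literal("Dublin Core Application Profiles")))
g.add((doc, DCTERMS.date, Literal("2010")))
g.add((doc, DCTERMS.subject, Literal("Metadaten")))

# Toy "profile": the application requires a title and a date.
required = [DCTERMS.title, DCTERMS.date]
missing = [p for p in required if (doc, p, None) not in g]
print("conforms" if not missing else f"missing properties: {missing}")
```
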
  9. DeZelar-Tiedman, C.: Exploring user-contributed metadata's potential to enhance access to literary works (2011) 0.01
    0.012848704 = product of:
      0.03212176 = sum of:
        0.015142222 = weight(_text_:of in 2595) [ClassicSimilarity], result of:
          0.015142222 = score(doc=2595,freq=10.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.23179851 = fieldWeight in 2595, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2595)
        0.016979538 = product of:
          0.033959076 = sum of:
            0.033959076 = weight(_text_:22 in 2595) [ClassicSimilarity], result of:
              0.033959076 = score(doc=2595,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.23214069 = fieldWeight in 2595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2595)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Academic libraries have moved toward providing social networking features, such as tagging, in their library catalogs. To explore whether user tags can enhance access to individual literary works, the author obtained a sample of individual works of English and American literature from the twentieth and twenty-first centuries from a large academic library catalog and searched them in LibraryThing. The author compared match rates, the availability of subject headings and tags across various literary forms, and the terminology used in tags versus controlled-vocabulary headings on a subset of records. In addition, she evaluated the usefulness of available LibraryThing tags for the library catalog records that lacked subject headings. Options for utilizing the subject terms available in sources outside the local catalog also are discussed.
    Date
    10. 9.2000 17:38:22
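
One simple way to quantify the tag-versus-heading comparison the study describes is set overlap; a minimal sketch with hypothetical terms (the study itself used match rates and qualitative comparison of terminology):

```python
def jaccard(a, b):
    """Proportion of shared terms between two term sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical record: controlled subject headings vs. LibraryThing tags,
# both lowercased for comparison.
headings = {"english fiction", "satire"}
tags = {"fiction", "satire", "20th century", "humor"}
print(f"overlap: {jaccard(headings, tags):.2f}")  # 0.20 for these sets
```
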
  10. White, H.: Examining scientific vocabulary : mapping controlled vocabularies with free text keywords (2013) 0.01
    0.012667385 = product of:
      0.03166846 = sum of:
        0.009029076 = weight(_text_:of in 1953) [ClassicSimilarity], result of:
          0.009029076 = score(doc=1953,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.13821793 = fieldWeight in 1953, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=1953)
        0.022639386 = product of:
          0.045278773 = sum of:
            0.045278773 = weight(_text_:22 in 1953) [ClassicSimilarity], result of:
              0.045278773 = score(doc=1953,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.30952093 = fieldWeight in 1953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1953)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Scientific repositories create a new environment for studying traditional information science issues. The interaction between indexing terms provided by users and controlled vocabularies continues to be an area of debate and study. This article reports and analyzes findings from a study that mapped the relationships between free text keywords and controlled vocabulary terms used in the sciences. Based on this study's findings, recommendations are made about which vocabularies may be better suited for use in scientific data repositories.
    Date
    29. 5.2015 19:09:22
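
A minimal sketch of the kind of keyword-to-controlled-vocabulary mapping at issue, assuming simple string normalization; the vocabulary here is hypothetical, and real mappings would also need synonym rings and hierarchy handling:

```python
# Hypothetical controlled vocabulary (illustration only).
controlled = {"oceanography", "climatology", "sea ice"}

def map_keyword(keyword):
    """Map a free-text keyword to a controlled term by normalization."""
    norm = keyword.strip().lower()
    return norm if norm in controlled else None

for kw in ["Sea Ice", "ice cores", "CLIMATOLOGY"]:
    print(kw, "->", map_keyword(kw))  # 'ice cores' stays unmapped
```
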
  11. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.01
    0.01239295 = product of:
      0.030982375 = sum of:
        0.011172912 = weight(_text_:of in 2606) [ClassicSimilarity], result of:
          0.011172912 = score(doc=2606,freq=4.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.17103596 = fieldWeight in 2606, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2606)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
              0.039618924 = score(doc=2606,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 2606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2606)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Stylesheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper will discuss the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
    Date
    10. 9.2000 17:38:22
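
A minimal sketch of the XSLT-based repurposing the article describes, run through Python's lxml; the element names are hypothetical:

```python
from lxml import etree

# An XSLT that renames a hypothetical <creator> element to <author>
# while copying everything else through (identity transform).
xslt = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <xsl:template match="creator">
    <author><xsl:apply-templates/></author>
  </xsl:template>
</xsl:stylesheet>""")

record = etree.XML(b"<record><creator>Ilik, V.</creator></record>")
transform = etree.XSLT(xslt)
print(etree.tostring(transform(record)))
# b'<record><author>Ilik, V.</author></record>'
```
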
  12. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.01
    0.012044367 = product of:
      0.030110918 = sum of:
        0.015961302 = weight(_text_:of in 4550) [ClassicSimilarity], result of:
          0.015961302 = score(doc=4550,freq=16.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.24433708 = fieldWeight in 4550, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4550)
        0.0141496165 = product of:
          0.028299233 = sum of:
            0.028299233 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.028299233 = score(doc=4550,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires access to Web of Science and the ability to run the Python script, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
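
The article names the Habanero library and the CrossRef API; a minimal sketch of that CSV-population step, with a hypothetical DOI and an illustrative column set (the article's actual template script may differ):

```python
import csv
from habanero import Crossref  # CrossRef API client named in the article

cr = Crossref()
rows = []
for doi in ["10.1000/example-doi"]:  # hypothetical DOI; substitute real ones
    msg = cr.works(ids=doi)["message"]
    rows.append({
        "title": (msg.get("title") or [""])[0],
        "issued": "-".join(str(p) for p in msg["issued"]["date-parts"][0]),
        "doi": msg.get("DOI", ""),
    })

with open("batch_import.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "issued", "doi"])
    writer.writeheader()
    writer.writerows(rows)
```
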
  13. Belém, F.M.; Almeida, J.M.; Gonçalves, M.A.: ¬A survey on tag recommendation methods : a review (2017) 0.01
    0.01163202 = product of:
      0.029080048 = sum of:
        0.014930432 = weight(_text_:of in 3524) [ClassicSimilarity], result of:
          0.014930432 = score(doc=3524,freq=14.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.22855641 = fieldWeight in 3524, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.0141496165 = product of:
          0.028299233 = sum of:
            0.028299233 = weight(_text_:22 in 3524) [ClassicSimilarity], result of:
              0.028299233 = score(doc=3524,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.19345059 = fieldWeight in 3524, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3524)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Tags (keywords freely assigned by users to describe web content) have become highly popular on Web 2.0 applications, because of the strong incentives and the ease with which users can create and describe their own content. This increase in tag popularity has led to a vast literature on tag recommendation methods. These methods aim at assisting users in the tagging process, possibly increasing the quality of the generated tags and, consequently, improving the quality of the information retrieval (IR) services that rely on tags as data sources. Despite the numerous and diverse previous studies on tag recommendation, to our knowledge no previous work has summarized and organized them into a single survey article. In this article, we propose a taxonomy for tag recommendation methods, classifying them according to the target of the recommendations, their objectives, exploited data sources, and underlying techniques. Moreover, we provide a critical overview of these methods, pointing out their advantages and disadvantages. Finally, we describe the main open challenges related to the field, such as tag ambiguity, cold start, and evaluation issues.
    Date
    16.11.2017 13:30:22
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.4, S.830-844
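
As a concrete instance of one family the survey covers (co-occurrence-based recommendation), a minimal sketch over a hypothetical tagged corpus:

```python
from collections import Counter

# Hypothetical tagged corpus: recommend tags that co-occur with the
# tags a user has already assigned.
corpus = [{"music", "jazz", "history"},
          {"music", "jazz", "improvisation"},
          {"music", "classical"}]

def recommend(assigned, k=2):
    scores = Counter()
    for doc_tags in corpus:
        if assigned & doc_tags:                 # document shares a tag
            scores.update(doc_tags - assigned)  # vote for its other tags
    return [tag for tag, _ in scores.most_common(k)]

print(recommend({"jazz"}))  # e.g. ['music', 'history']
```
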
  14. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.01
    0.011083962 = product of:
      0.027709905 = sum of:
        0.007900442 = weight(_text_:of in 3283) [ClassicSimilarity], result of:
          0.007900442 = score(doc=3283,freq=2.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.120940685 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.019809462 = product of:
          0.039618924 = sum of:
            0.039618924 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.039618924 = score(doc=3283,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
  15. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.01
    0.008371304 = product of:
      0.02092826 = sum of:
        0.0066520358 = product of:
          0.033260178 = sum of:
            0.033260178 = weight(_text_:problem in 2320) [ClassicSimilarity], result of:
              0.033260178 = score(doc=2320,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.1875815 = fieldWeight in 2320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2320)
          0.2 = coord(1/5)
        0.014276223 = weight(_text_:of in 2320) [ClassicSimilarity], result of:
          0.014276223 = score(doc=2320,freq=20.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.21854173 = fieldWeight in 2320, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=2320)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: The purpose of this paper is to describe a new approach to a well-known problem for digital libraries, how to search across multiple unrelated libraries with a single query.
    Design/methodology/approach: The approach involves creating new Dewey Decimal Classification terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records.
    Findings: The best results combined weighted key terms from the title, description and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies.
    Research limitations/implications: The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity.
    Practical implications: The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example, recall or precision enhancing.
    Social implications: The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries.
    Originality/value: The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower level matches to broader parents and thus approximates the practices of a human cataloger.
    Source
    Journal of documentation. 71(2015) no.5, S.976-998
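
The mean reciprocal rank evaluation mentioned above is simple to state; a minimal sketch with hypothetical ranks (position of the librarian-assigned DDC class in each generated ranking; 0 = not found):

```python
# Hypothetical ranks for five sample records (not the study's data).
ranks = [1, 3, 2, 0, 1]

mrr = sum(1.0 / r for r in ranks if r) / len(ranks)
print(f"MRR = {mrr:.3f}")  # 0.567 for these sample ranks
```
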
  16. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia (2019) 0.01
    0.007768431 = product of:
      0.019421078 = sum of:
        0.0066520358 = product of:
          0.033260178 = sum of:
            0.033260178 = weight(_text_:problem in 117) [ClassicSimilarity], result of:
              0.033260178 = score(doc=117,freq=2.0), product of:
                0.17731056 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.04177434 = queryNorm
                0.1875815 = fieldWeight in 117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=117)
          0.2 = coord(1/5)
        0.0127690425 = weight(_text_:of in 117) [ClassicSimilarity], result of:
          0.0127690425 = score(doc=117,freq=16.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.19546966 = fieldWeight in 117, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=117)
      0.4 = coord(2/5)
    
    Abstract
    Information management in corporate environments is a growing challenge as companies' information assets expand, along with the need to use them in day-to-day operations. Several management models have been applied on diverse fronts, practices that together constitute so-called Enterprise Content Management. This study proposes a conceptual model of a semantic corporate information ecosystem, based on the Universal Document Model proposed by Dagobert Soergel. It focuses on unstructured information objects, especially multimedia, which are increasingly used in corporate environments, adding semantics and expanding their retrieval potential in the composition and reuse of dynamic documents on demand. The proposed model considers stable elements in the organizational environment, such as actors, processes, business metadata and information objects, as well as some basic infrastructures of the corporate information environment. The main objective is to establish a conceptual model that adds semantic intelligence to information assets, leveraging pre-existing infrastructure in organizations and integrating and relating objects to other objects, actors and business processes. The methodology considered the state of the art in Information Organization, Representation and Retrieval, Organizational Content Management and Semantic Web technologies, in the scientific literature, as the basis for an integrative conceptual model; the research is therefore qualitative and exploratory. The predicted steps of the model are: Environment, Data Type and Source Definition, Data Distillation, Metadata Enrichment, and Storage. In theoretical terms, the extended model allows heterogeneous and unstructured data to be processed according to the established cut-outs and through the processes listed above, enabling value creation in the composition of dynamic information objects with semantic aggregations to metadata.
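
A skeleton of the model's predicted steps, written as stubs; the article defines these steps conceptually, so everything below is an illustrative assumption:

```python
# Illustrative stubs only; the article defines these stages conceptually.
def distill(raw):
    """Data Distillation: drop empty values from a raw object record."""
    return {k: v for k, v in raw.items() if v}

def enrich(record):
    """Metadata Enrichment: relate the object to actors and processes."""
    record.setdefault("actors", [])
    record.setdefault("business_process", None)
    return record

repository = []  # Storage: stand-in for the corporate repository

raw = {"type": "video", "source": "intranet",  # Data Type and Source
       "title": "Onboarding", "notes": ""}
repository.append(enrich(distill(raw)))
print(repository)
```
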
  17. Margaritopoulos, M.; Margaritopoulos, T.; Mavridis, I.; Manitsaris, A.: Quantifying and measuring metadata completeness (2012) 0.01
    0.005746069 = product of:
      0.028730344 = sum of:
        0.028730344 = weight(_text_:of in 43) [ClassicSimilarity], result of:
          0.028730344 = score(doc=43,freq=36.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.43980673 = fieldWeight in 43, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=43)
      0.2 = coord(1/5)
    
    Abstract
    Completeness is one of the most essential characteristics of metadata quality. An incomplete metadata record is a record of degraded quality. Existing approaches to measuring metadata completeness limit their scope to counting the existence of values in fields, regardless of the metadata hierarchy as defined in international standards. Such a traditional approach overlooks several issues that need to be taken into account. This paper presents a fine-grained metrics system for measuring metadata completeness, based on field completeness. A metadata field is considered to be a container of multiple pieces of information. In this regard, the proposed system is capable of following the hierarchy of metadata as it is set by the metadata schema and admeasuring the effect of multiple values of multivalued fields. An application of the proposed metrics system, after being configured according to specific user requirements, to measure completeness of a real-world set of metadata is demonstrated. The results prove its ability to assess the sufficiency of metadata to describe a resource and provide targeted measures of completeness throughout the metadata hierarchy.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.4, S.724-737
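
A minimal sketch of field-based completeness with a flat weighting; the article's metrics system additionally follows the schema hierarchy and admeasures multivalued fields, which this sketch omits. Weights and fields are hypothetical:

```python
# Hypothetical flat field weights (illustration only).
weights = {"title": 0.4, "creator": 0.3, "subject": 0.2, "date": 0.1}

def completeness(record):
    """Weighted share of fields that carry at least one value."""
    return sum(w for field, w in weights.items() if record.get(field))

record = {"title": "Quantifying metadata completeness",
          "subject": ["Metadaten"]}
print(round(completeness(record), 2))  # 0.6: creator and date are missing
```
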
  18. Hajra, A. et al.: Enriching scientific publications from LOD repositories through word embeddings approach (2016) 0.01
    0.0056598466 = product of:
      0.028299233 = sum of:
        0.028299233 = product of:
          0.056598466 = sum of:
            0.056598466 = weight(_text_:22 in 3281) [ClassicSimilarity], result of:
              0.056598466 = score(doc=3281,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.38690117 = fieldWeight in 3281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3281)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  19. Mora-Mcginity, M. et al.: MusicWeb: music discovery with open linked semantic metadata (2016) 0.01
    0.0056598466 = product of:
      0.028299233 = sum of:
        0.028299233 = product of:
          0.056598466 = sum of:
            0.056598466 = weight(_text_:22 in 3282) [ClassicSimilarity], result of:
              0.056598466 = score(doc=3282,freq=2.0), product of:
                0.14628662 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04177434 = queryNorm
                0.38690117 = fieldWeight in 3282, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3282)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  20. Leong, J.H.-t.: ¬The convergence of metadata and bibliographic control? : trends and patterns in addressing the current issues and challenges of providing subject access (2010) 0.01
    0.0050675566 = product of:
      0.025337784 = sum of:
        0.025337784 = weight(_text_:of in 3355) [ClassicSimilarity], result of:
          0.025337784 = score(doc=3355,freq=28.0), product of:
            0.06532493 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.04177434 = queryNorm
            0.38787308 = fieldWeight in 3355, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
      0.2 = coord(1/5)
    
    Abstract
    Resource description and discovery have generally been facilitated through two approaches, namely bibliographic control and metadata, which may now converge in response to current issues and challenges of providing subject access. Four categories of major issues and challenges in the provision of subject access to digital and non-digital resources are: 1) the advancement of new knowledge; 2) the fall of controlled vocabulary and the rise of natural language; 3) digitizing and networking the traditional catalogue systems; and 4) electronic publishing and the Internet. The creation of new knowledge and the debate about the use of natural language and controlled vocabulary as subject headings become even more intense in the digital and online environment. The third and fourth categories arose after the emergence of networked environments and the rapid expansion of electronic resources. Recognizing the convergence of metadata schemas and bibliographic control calls for adapting to the new environment by developing tools that exploit the strengths of both.

Languages

  • e 112
  • d 3
  • pt 1

Types

  • a 98
  • el 17
  • m 13
  • s 7
  • x 1
