Search (430 results, page 22 of 22)

  • Filter: theme_ss:"Metadaten"
  1. Liu, X.; Qin, J.: An interactive metadata model for structural, descriptive, and referential representation of scholarly output (2014) 0.00
    
    Date
    1. 5.2014 17:32:09
  2. Stevens, G.: New metadata recipes for old cookbooks : creating and analyzing a digital collection using the HathiTrust Research Center Portal (2017) 0.00
    
    Abstract
    The Early American Cookbooks digital project is a case study in analyzing collections as data using HathiTrust and the HathiTrust Research Center (HTRC) Portal. The purposes of the project are to create a freely available, searchable collection of full-text early American cookbooks within the HathiTrust Digital Library, to offer an overview of the scope and contents of the collection, and to analyze trends and patterns in the metadata and the full text of the collection. The digital project has two basic components: a collection of 1450 full-text cookbooks published in the United States between 1800 and 1920 and a website to present a guide to the collection and the results of the analysis. This article will focus on the workflow for analyzing the metadata and the full text of the collection. The workflow will cover: 1) creating a searchable public collection of full-text titles within the HathiTrust Digital Library and uploading it to the HTRC Portal, 2) analyzing and visualizing legacy MARC data for the collection using MarcEdit, OpenRefine and Tableau, and 3) using the text analysis tools in the HTRC Portal to look for trends and patterns in the full text of the collection.
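    The metadata-analysis step described above (exporting legacy MARC fields and charting trends over time) can be illustrated with a minimal Python sketch. The sample years and the decade grouping are invented for illustration; they are not taken from the project's actual data, which the article analyzes with MarcEdit, OpenRefine and Tableau.

```python
from collections import Counter

def decade_counts(years):
    """Group publication years into decades, e.g. 1854 -> 1850."""
    return Counter((y // 10) * 10 for y in years)

# Hypothetical sample of publication years, such as might be exported
# from MARC date fields for a collection published 1800-1920.
sample = [1803, 1824, 1824, 1851, 1876, 1889, 1902, 1911, 1911, 1919]
counts = decade_counts(sample)
print(sorted(counts.items()))
```

    A tally like this is the kind of intermediate table one would then visualize in a charting tool.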
  3. Johansson, S.; Golub, K.: LibraryThing for libraries : how tag moderation and size limitations affect tag clouds (2019) 0.00
    
    Date
    1. 7.2019 18:49:14
  4. Yang, T.-H.; Hsieh, Y.-L.; Liu, S.-H.; Chang, Y.-C.; Hsu, W.-L.: A flexible template generation and matching method with applications for publication reference metadata extraction (2021) 0.00
    
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.1, S.32-45
  5. Social tagging in a linked data environment. Edited by Diane Rasmussen Pennington and Louise F. Spiteri. London, UK: Facet Publishing, 2018. 240 pp. £74.95 (paperback). (ISBN 9781783303380) (2019) 0.00
    
    Isbn
    978-1-78330-339-7
  6. Bazillion, R.J.; Caplan, P.: Metadata fundamentals for all librarians (2003) 0.00
    
    Footnote
    Rev. in: JASIST 56(2005) no.13, S.1264 (W. Koehler: "Priscilla Caplan provides us with a sweeping but very welcome survey of the various approaches to metadata in practice or proposed in libraries and archives today. One of the key strengths of the book, and paradoxically one of its key weaknesses, is that the work is descriptive in nature. While relationships between one system and another may be noted, no general conclusions of a practical or theoretical nature are drawn about the relative merits of one metadata or metametadata scheme as against another. That said, let us remember that this is an American Library Association publication, published as a descriptive resource. Caplan does very well what she sets out to do. The work is divided into two parts: "Principles and Practice" and "Metadata Schemes," and is further subdivided into eighteen chapters. The book begins with short yet more than adequate chapters defining terms, vocabularies, and concepts. It discusses interoperability and the various levels of quality among systems. Perhaps Chapter 5, "Metadata and the Web," is the weakest chapter of the work. There is a brief discussion of how search engines work and some of the more recent initiatives (e.g., the Semantic Web) to develop better retrieval agents. The chapter is weak not in its description but in what it fails to discuss. The second section, "Metadata Schemes," which encompasses chapters six through eighteen, is particularly rich. Thirteen different metadata or metametadata schemes are described to provide the interested librarian with a better than adequate introduction to the purpose, application, and operability of each metadata scheme. These are: library cataloging (chiefly MARC), TEI, Dublin Core, Archival Description and EAD, Art and Architecture, GILS, Education, ONIX, Geospatial, Data Documentation Initiative, Administrative Metadata, Structural Metadata, and Rights Metadata.
The last three chapters introduce concepts heretofore "foreign" to the realm of the catalog or metadata. "Descriptive metadata was . . . intended to help in finding, discovering, and identifying an information resource" (p. 151). Administrative metadata is an aid to ". . . the owners or caretakers of the resource." Structural metadata describe the relationships of data elements. Rights metadata describe (or, as Caplan points out, may describe, as the definition is still ambiguous) end-user rights to use and reproduce material in digital format. Keeping in mind that the work is intended for the general practitioner librarian, the book has a particularly useful glossary and index. Caplan also provides useful suggestions for additional reading at the end of each chapter. I intend to adopt Metadata Fundamentals for All Librarians when next I teach a digital cataloging course. Caplan's book provides an excellent introduction to the basic concepts. It is, however, not a "cookbook" nor a guidebook into the complexities of the application of any metadata scheme.")
  7. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part II : achieving interoperability at the record and repository levels (2006) 0.00
    
    Abstract
    This is the second part of an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and their applications in order to facilitate the conversion and exchange of metadata and to enable cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level (discussed in Part I of the article), record level (discussed in Part II of the article), and repository level (also discussed in Part II). The results of efforts to improve interoperability may be observed from different perspectives as well, including element-based and value-based approaches. As discussed in Part I of this study, the results of efforts to improve interoperability can be observed at different levels:
    1. Schema level - Efforts are focused on the elements of the schemas, being independent of any applications. The results usually appear as derived element sets or encoded schemas, crosswalks, application profiles, and element registries.
    2. Record level - Efforts are intended to integrate the metadata records through the mapping of the elements according to the semantic meanings of these elements. Common results include converted records and new records resulting from combining values of existing records.
    3. Repository level - With harvested or integrated records from varying sources, efforts at this level focus on mapping value strings associated with particular elements (e.g., terms associated with subject or format elements). The results enable cross-collection searching.
    In the following sections, we will continue to analyze interoperability efforts and methodologies, focusing on the record level and the repository level. It should be noted that the models to be discussed in this article are not always mutually exclusive. Sometimes, within a particular project, more than one method may be used.
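    The record-level mapping described in the abstract — converting a record's elements into another schema's elements according to a crosswalk — can be sketched minimally in Python. The source element names and the crosswalk entries below are hypothetical, invented for illustration; they are not taken from the article.

```python
# Hypothetical crosswalk: source schema element -> Dublin Core element.
CROSSWALK = {
    "title_main": "dc:title",
    "creator_name": "dc:creator",
    "date_issued": "dc:date",
}

def convert_record(record):
    """Map element names via the crosswalk; unmapped elements are dropped."""
    return {CROSSWALK[k]: v for k, v in record.items() if k in CROSSWALK}

src = {"title_main": "Metadata", "creator_name": "Pomerantz, J.",
       "local_note": "shelf 4"}  # "local_note" has no crosswalk target
print(convert_record(src))
```

    Dropping unmapped elements is one design choice; real conversions may instead route them to a catch-all element or flag them for review.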
  8. Pomerantz, J.: Metadata (2015) 0.00
    
    Isbn
    978-0-262-52851-1
  9. Martins, S. de Castro: Modelo conceitual de ecossistema semântico de informações corporativas para aplicação em objetos multimídia [Conceptual model of a semantic ecosystem of corporate information for application in multimedia objects] (2019) 0.00
    
    Footnote
    https://app.uff.br/riuff/handle/1/13904.
  10. Baroncini, S.; Sartini, B.; Erp, M. Van; Tomasi, F.; Gangemi, A.: Is dc:subject enough? : A landscape on iconography and iconology statements of knowledge graphs in the semantic web (2023) 0.00
    
    Abstract
    In the last few years, the size of Linked Open Data (LOD) describing artworks, in general or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art-)historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs with a focus on the icon aspects.
    Design/methodology/approach: This study's analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians' theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures' suitability to describe icon information through quantitative and qualitative assessment and (2) their content, qualitatively assessed in terms of correctness and completeness.
    Findings: This study's results reveal several issues in the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.
    Originality/value: The main contribution of this work is an overview of the actual landscape of the icon information expressed in LOD. Therefore, it is valuable to cultural institutions by providing them a first domain-specific data quality evaluation. Since this study's results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.
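    The completeness assessment the abstract describes — checking which artwork resources carry an iconographic subject statement at all — can be sketched as a toy example. The triples, prefixes, and identifiers below are invented for illustration and do not come from the KGs the study evaluates.

```python
# Toy triple set: (subject, predicate, object). The "art:" and
# "iconclass:" identifiers are hypothetical placeholders.
triples = [
    ("art:1", "dc:title",   "Primavera"),
    ("art:1", "dc:subject", "iconclass:92C4"),
    ("art:2", "dc:title",   "The Kiss"),
]

def missing_subject(triples):
    """Return resources that appear in the data but have no dc:subject."""
    resources = {s for s, p, o in triples}
    covered = {s for s, p, o in triples if p == "dc:subject"}
    return resources - covered

print(missing_subject(triples))  # art:2 lacks a dc:subject statement
```

    Over a real KG the same check would run as a SPARQL query rather than an in-memory scan, but the completeness logic is the same.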