Search (89 results, page 1 of 5)

  • × theme_ss:"Metadaten"
  • × year_i:[2010 TO 2020}
  1. Sturmane, A.; Eglite, E.; Jankevica-Balode, M.: Subject metadata development for digital resources in Latvia (2014) 0.14
    0.14479652 = product of:
      0.19306204 = sum of:
        0.120629124 = weight(_text_:digital in 1963) [ClassicSimilarity], result of:
          0.120629124 = score(doc=1963,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.61014175 = fieldWeight in 1963, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1963)
        0.03790077 = weight(_text_:library in 1963) [ClassicSimilarity], result of:
          0.03790077 = score(doc=1963,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.28758827 = fieldWeight in 1963, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1963)
        0.034532152 = product of:
          0.069064304 = sum of:
            0.069064304 = weight(_text_:project in 1963) [ClassicSimilarity], result of:
              0.069064304 = score(doc=1963,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32644984 = fieldWeight in 1963, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1963)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The National Library of Latvia (NLL) decided to use the Library of Congress Subject Headings (LCSH) in 2000. At present the NLL Subject Headings Database in Latvian holds approximately 34,000 subject headings and is used for subject cataloging of textual resources, including articles from serials. For digital objects the NLL uses a system similar to the Faceted Application of Subject Terminology (FAST). We successfully use it in the project "In Search of Lost Latvia," one of the milestones in the development of subject cataloging of digital resources in Latvia.
    Footnote
    Contribution in a special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web" - Contains contributions from the IFLA Satellite Post-Conference of the same name, 17-18 August 2012, Tallinn.
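The score breakdown above is a Lucene ClassicSimilarity explanation: each matching term contributes tf × idf × fieldNorm, scaled by a query weight, and a coord factor penalizes documents matching only some query clauses. A minimal sketch reproducing the "digital" weight and the final score of result 1, using the constants from the explanation (the function mirrors the published TF-IDF formula, not the actual search engine code):

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity (TF-IDF)."""
    tf = math.sqrt(freq)                              # 2.828427 for freq=8
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 3.944552 for "digital"
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

digital = term_score(freq=8, doc_freq=2326, max_docs=44218,
                     query_norm=0.050121464, field_norm=0.0546875)

# Three of the four query clauses matched, so coord(3/4) scales the sum of
# the per-clause scores (the "library" and "project" totals, taken from above).
total = 0.75 * (digital + 0.03790077 + 0.034532152)
```

Plugging in the explanation's own constants reproduces both the 0.1206 term weight and the 0.1448 document score.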
  2. Hardesty, J.L.; Young, J.B.: The semantics of metadata : Avalon Media System and the move to RDF (2017) 0.13
    0.133226 = product of:
      0.17763469 = sum of:
        0.10339639 = weight(_text_:digital in 3896) [ClassicSimilarity], result of:
          0.10339639 = score(doc=3896,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.52297866 = fieldWeight in 3896, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
        0.022971334 = weight(_text_:library in 3896) [ClassicSimilarity], result of:
          0.022971334 = score(doc=3896,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 3896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
        0.051266953 = product of:
          0.10253391 = sum of:
            0.10253391 = weight(_text_:project in 3896) [ClassicSimilarity], result of:
              0.10253391 = score(doc=3896,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.48465237 = fieldWeight in 3896, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3896)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and the Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
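The move described above replaces per-object XML description files with RDF statements; a minimal sketch of that shape, using hypothetical URIs (Avalon's actual model uses PCDM and MODS-derived properties, not necessarily these Dublin Core terms):

```python
# Hypothetical repository URI and Dublin Core properties, for illustration only.
DCTERMS = "http://purl.org/dc/terms/"
obj = "https://repo.example.edu/objects/123"

triples = [
    (obj, DCTERMS + "title", "Oral history interview"),
    (obj, DCTERMS + "created", "1968"),
]

# Serialize as N-Triples: one subject-predicate-object statement per line.
ntriples = "\n".join(f'<{s}> <{p}> "{o}" .' for s, p, o in triples)
```

Each statement stands alone, which is what makes RDF descriptions easier to merge and query across repositories than per-object XML files.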
  3. Stevens, G.: New metadata recipes for old cookbooks : creating and analyzing a digital collection using the HathiTrust Research Center Portal (2017) 0.12
    0.12459625 = product of:
      0.16612834 = sum of:
        0.0963339 = weight(_text_:digital in 3897) [ClassicSimilarity], result of:
          0.0963339 = score(doc=3897,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4872566 = fieldWeight in 3897, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3897)
        0.027071979 = weight(_text_:library in 3897) [ClassicSimilarity], result of:
          0.027071979 = score(doc=3897,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 3897, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3897)
        0.04272246 = product of:
          0.08544492 = sum of:
            0.08544492 = weight(_text_:project in 3897) [ClassicSimilarity], result of:
              0.08544492 = score(doc=3897,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.40387696 = fieldWeight in 3897, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3897)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The Early American Cookbooks digital project is a case study in analyzing collections as data using HathiTrust and the HathiTrust Research Center (HTRC) Portal. The purposes of the project are to create a freely available, searchable collection of full-text early American cookbooks within the HathiTrust Digital Library, to offer an overview of the scope and contents of the collection, and to analyze trends and patterns in the metadata and the full text of the collection. The digital project has two basic components: a collection of 1450 full-text cookbooks published in the United States between 1800 and 1920 and a website to present a guide to the collection and the results of the analysis. This article will focus on the workflow for analyzing the metadata and the full-text of the collection. The workflow will cover: 1) creating a searchable public collection of full-text titles within the HathiTrust Digital Library and uploading it to the HTRC Portal, 2) analyzing and visualizing legacy MARC data for the collection using MarcEdit, OpenRefine and Tableau, and 3) using the text analysis tools in the HTRC Portal to look for trends and patterns in the full text of the collection.
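Step 2 of the workflow above amounts to frequency analysis over legacy MARC fields; a toy sketch of the idea with hypothetical, simplified records (real MARC parsing would go through MarcEdit or a parsing library, not hand-written dicts):

```python
from collections import Counter

# Hypothetical, simplified records: 245 = title, 650 = subject headings.
records = [
    {"245": "The American Frugal Housewife",
     "650": ["Cooking, American", "Home economics"]},
    {"245": "Domestic Receipt Book",
     "650": ["Cooking, American"]},
]

# Count how often each subject heading occurs across the collection.
subject_counts = Counter(h for rec in records for h in rec["650"])
top = subject_counts.most_common(1)
```

The resulting counts are the kind of table one would then visualize in OpenRefine or Tableau.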
  4. Valentino, M.L.: Integrating metadata creation into catalog workflow (2010) 0.11
    0.11019442 = product of:
      0.1469259 = sum of:
        0.060314562 = weight(_text_:digital in 4160) [ClassicSimilarity], result of:
          0.060314562 = score(doc=4160,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 4160, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4160)
        0.026799891 = weight(_text_:library in 4160) [ClassicSimilarity], result of:
          0.026799891 = score(doc=4160,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 4160, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4160)
        0.059811447 = product of:
          0.11962289 = sum of:
            0.11962289 = weight(_text_:project in 4160) [ClassicSimilarity], result of:
              0.11962289 = score(doc=4160,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.5654278 = fieldWeight in 4160, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4160)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The University of Oklahoma Libraries recently undertook a project designed to integrate digital library metadata creation into the workflow of the Cataloging Department. This article examines the conditions and factors that led to the project's genesis, the proposed and revised workflows that were developed, the staff training efforts that accompanied implementation of the project, and the results and benefits obtained through the project's implementation. The project presented several challenges but resulted in an improved workflow, greater use of Cataloging Department resources, and more accurate and useful metadata while increasing the Library's capacity to support digitization efforts in a timely fashion.
  5. Bartczak, J.; Glendon, I.: Python, Google Sheets, and the Thesaurus for Graphic Materials for efficient metadata project workflows (2017) 0.09
    0.09426196 = product of:
      0.1256826 = sum of:
        0.073112294 = weight(_text_:digital in 3893) [ClassicSimilarity], result of:
          0.073112294 = score(doc=3893,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.36980176 = fieldWeight in 3893, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=3893)
        0.022971334 = weight(_text_:library in 3893) [ClassicSimilarity], result of:
          0.022971334 = score(doc=3893,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 3893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=3893)
        0.029598987 = product of:
          0.059197973 = sum of:
            0.059197973 = weight(_text_:project in 3893) [ClassicSimilarity], result of:
              0.059197973 = score(doc=3893,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.27981415 = fieldWeight in 3893, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3893)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In 2017, the University of Virginia (U.Va.) will launch a two year initiative to celebrate the bicentennial anniversary of the University's founding in 1819. The U.Va. Library is participating in this event by digitizing some 20,000 photographs and negatives that document student life on the U.Va. grounds in the 1960s and 1970s. Metadata librarians and archivists are well-versed in the challenges associated with generating digital content and accompanying description within the context of limited resources. This paper describes how technology and new approaches to metadata design have enabled the University of Virginia's Metadata Analysis and Design Department to rapidly and successfully generate accurate description for these digital objects. Python's pandas module improves efficiency by cleaning and repurposing data recorded at digitization, while the lxml module builds MODS XML programmatically from CSV tables. A simplified technique for subject heading selection and assignment in Google Sheets provides a collaborative environment for streamlined metadata creation and data quality control.
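The pandas/lxml pipeline described above boils down to "clean tabular digitization data, then emit one MODS XML record per row"; a stdlib-only sketch of that shape (column names and values are hypothetical, not U.Va.'s actual schema):

```python
import csv
import io
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mods", MODS_NS)

# Hypothetical digitization log; note the whitespace to be cleaned.
log = io.StringIO("filename,title,date\nuva_0001.tif,  Homecoming 1969 ,1969\n")

def row_to_mods(row):
    """Build a minimal MODS record for one digitized object."""
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = row["title"].strip()
    origin = ET.SubElement(mods, f"{{{MODS_NS}}}originInfo")
    ET.SubElement(origin, f"{{{MODS_NS}}}dateCreated").text = row["date"]
    return ET.tostring(mods, encoding="unicode")

records = [row_to_mods(row) for row in csv.DictReader(log)]
```

pandas replaces the manual cleaning step at scale, and lxml offers the same element-building API with schema validation on top.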
  6. Tani, A.; Candela, L.; Castelli, D.: Dealing with metadata quality : the legacy of digital library efforts (2013) 0.08
    0.07544333 = product of:
      0.15088665 = sum of:
        0.10446788 = weight(_text_:digital in 2662) [ClassicSimilarity], result of:
          0.10446788 = score(doc=2662,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.5283983 = fieldWeight in 2662, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2662)
        0.04641878 = weight(_text_:library in 2662) [ClassicSimilarity], result of:
          0.04641878 = score(doc=2662,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.3522223 = fieldWeight in 2662, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2662)
      0.5 = coord(2/4)
    
    Abstract
    In this work, we elaborate on the meaning of metadata quality by surveying efforts and experience gained in the digital library domain. In particular, an overview of the frameworks developed to characterize such a multi-faceted concept is presented. Moreover, the most common quality-related problems affecting metadata during both the creation and aggregation phases are discussed, together with the approaches, technologies and tools developed to mitigate them. This survey of digital library developments is expected to contribute to the ongoing discussion on data and metadata quality occurring in the emerging yet more general framework of data infrastructures.
  7. Alves dos Santos, E.; Mucheroni, M.L.: VIAF and OpenCitations : cooperative work as a strategy for information organization in the linked data era (2018) 0.06
    0.06232306 = product of:
      0.12464612 = sum of:
        0.097483054 = weight(_text_:digital in 4826) [ClassicSimilarity], result of:
          0.097483054 = score(doc=4826,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.493069 = fieldWeight in 4826, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0625 = fieldNorm(doc=4826)
        0.027163066 = product of:
          0.054326132 = sum of:
            0.054326132 = weight(_text_:22 in 4826) [ClassicSimilarity], result of:
              0.054326132 = score(doc=4826,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.30952093 = fieldWeight in 4826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4826)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    18.1.2019 19:13:22
    Source
    Challenges and opportunities for knowledge organization in the digital age: proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto. Eds.: F. Ribeiro and M.E. Cerveira
  8. Palavitsinis, N.; Manouselis, N.; Sanchez-Alonso, S.: Metadata quality in digital repositories : empirical results from the cross-domain transfer of a quality assurance process (2014) 0.06
    0.056257617 = product of:
      0.11251523 = sum of:
        0.0895439 = weight(_text_:digital in 1288) [ClassicSimilarity], result of:
          0.0895439 = score(doc=1288,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4529128 = fieldWeight in 1288, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=1288)
        0.022971334 = weight(_text_:library in 1288) [ClassicSimilarity], result of:
          0.022971334 = score(doc=1288,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 1288, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=1288)
      0.5 = coord(2/4)
    
    Abstract
    Metadata quality is a challenge faced by many digital repositories. A variety of proposed quality assurance frameworks are applied in repositories deployed in various contexts. Although studies report that metadata quality improves in many of these applications, the transfer of a successful approach from one application context to another has not been studied to a satisfactory extent. This article presents the empirical results of applying a metadata quality assurance process, developed and successfully applied in an educational context (learning repositories), to two different application contexts in order to compare results with the previous application and assess its generalizability. More specifically, it reports results from the adaptation and application of this process in a library context (institutional repositories) and in a cultural context (digital cultural repositories). Initial empirical findings indicate that content providers seem to gain a better understanding of metadata when the proposed process is put in place, and that the quality of the produced metadata records increases.
  9. Derrot, S.; Koskas, M.: My fair metadata : cataloging legal deposit Ebooks at the National Library of France (2016) 0.06
    0.05604878 = product of:
      0.11209756 = sum of:
        0.085297674 = weight(_text_:digital in 5140) [ClassicSimilarity], result of:
          0.085297674 = score(doc=5140,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.43143538 = fieldWeight in 5140, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5140)
        0.026799891 = weight(_text_:library in 5140) [ClassicSimilarity], result of:
          0.026799891 = score(doc=5140,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 5140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5140)
      0.5 = coord(2/4)
    
    Abstract
    French law on digital legal deposit covers websites and online content as well as ebooks. It imposes no obligation to produce a bibliography; indexing is sufficient. But despite their innovative characteristics, ebooks are still books, and their metadata is closer to that of printed materials than to web indexing. To set up an ebook deposit workflow, the BnF benefits from its experience with digital documents and its tradition of legal deposit. This article presents the questions the BnF faces in cataloging ebooks and managing their metadata, and the solutions that are emerging.
  10. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.05
    0.053888097 = product of:
      0.107776195 = sum of:
        0.07461992 = weight(_text_:digital in 3895) [ClassicSimilarity], result of:
          0.07461992 = score(doc=3895,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.37742734 = fieldWeight in 3895, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
        0.033156272 = weight(_text_:library in 3895) [ClassicSimilarity], result of:
          0.033156272 = score(doc=3895,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.25158736 = fieldWeight in 3895, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
      0.5 = coord(2/4)
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-sized publishers, research institutes or funding agencies. As data curation is typically done by people with a library and information science background, who are usually proficient with XML technologies but are not full-stack programmers, we present a web scraping tool that does not require digital library curators to program custom web scrapers from scratch. The open-source tool OXPath, an extension of XPath, allows the user to define the data to be extracted from websites declaratively. Taking one of our own use cases as an example, we guide you in detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we present a syntax highlighting plugin for the popular text editor Atom, which we developed to further support OXPath users and to simplify the authoring process.
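OXPath is a declarative language in its own right, so it cannot be shown faithfully in Python; as a rough analogy only, here is the same idea (fields declared as XPath expressions, applied to each record node) using Python's stdlib on a made-up page:

```python
import xml.etree.ElementTree as ET

# A made-up results page, simplified to well-formed XML for the sketch.
page = ET.fromstring("""
<html><body>
  <div class="record"><span class="title">Paper A</span><span class="year">2016</span></div>
  <div class="record"><span class="title">Paper B</span><span class="year">2017</span></div>
</body></html>""")

# Fields declared as XPath expressions, in the spirit of OXPath's
# extraction markers (names and paths here are hypothetical).
FIELDS = {"title": ".//span[@class='title']", "year": ".//span[@class='year']"}

records = [
    {name: rec.find(path).text for name, path in FIELDS.items()}
    for rec in page.iter("div")
    if rec.get("class") == "record"
]
```

What OXPath adds over this sketch is browser interaction (clicks, form fills, pagination) expressed in the same declarative expression.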
  11. Ashton, J.; Kent, C.: New approaches to subject indexing at the British Library (2017) 0.05
    0.05336667 = product of:
      0.10673334 = sum of:
        0.060314562 = weight(_text_:digital in 5158) [ClassicSimilarity], result of:
          0.060314562 = score(doc=5158,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 5158, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5158)
        0.04641878 = weight(_text_:library in 5158) [ClassicSimilarity], result of:
          0.04641878 = score(doc=5158,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.3522223 = fieldWeight in 5158, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5158)
      0.5 = coord(2/4)
    
    Abstract
    The constantly changing metadata landscape means that libraries need to re-think their approach to standards and subject analysis, to enable the discovery of vast areas of both print and digital content. This article presents a case study from the British Library that assesses the feasibility of adopting FAST (Faceted Application of Subject Terminology) to selectively extend the scope of subject indexing of current and legacy content, or implement FAST as a replacement for all LCSH in current cataloging workflows.
    Object
    British Library
  12. DC-2013: International Conference on Dublin Core and Metadata Applications : Online Proceedings (2013) 0.05
    0.052134253 = product of:
      0.06951234 = sum of:
        0.034465462 = weight(_text_:digital in 1076) [ClassicSimilarity], result of:
          0.034465462 = score(doc=1076,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.17432621 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=1076)
        0.015314223 = weight(_text_:library in 1076) [ClassicSimilarity], result of:
          0.015314223 = score(doc=1076,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.11620321 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=1076)
        0.019732658 = product of:
          0.039465316 = sum of:
            0.039465316 = weight(_text_:project in 1076) [ClassicSimilarity], result of:
              0.039465316 = score(doc=1076,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.18654276 = fieldWeight in 1076, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1076)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The collocated conferences DC-2013 and iPRES-2013 in Lisbon attracted 392 participants from over 37 countries. In addition to the Tuesday-through-Thursday conference days, comprising peer-reviewed papers and special sessions, 223 participants attended pre-conference tutorials and 246 participated in post-conference workshops for the collocated events. The peer-reviewed papers and presentations are available on the conference website's Presentation page (URLs above). In sum, it was a great conference. In addition to links to PDFs of papers, project reports and posters (and their associated presentations), the published proceedings include presentation PDFs for the following: KEYNOTES -- Darling, we need to talk - Gildas Illien TUTORIALS -- Ivan Herman: "Introduction to Linked Open Data (LOD)" -- Steven Miller: "Introduction to Ontology Concepts and Terminology" -- Kai Eckert: "Metadata Provenance" -- Daniel Garijo: "The W3C Provenance Ontology" SPECIAL SESSIONS -- "Application Profiles as an Alternative to OWL Ontologies" -- "Long-term Preservation and Governance of RDF Vocabularies (W3C Sponsored)" -- "Data Enrichment and Transformation in the LOD Context: Poor & Popular vs Rich & Lonely--Can't we achieve both?" -- "Why Schema.org?"
    Content
    FULL PAPERS
    -- Provenance and Annotations for Linked Data - Kai Eckert
    -- How Portable Are the Metadata Standards for Scientific Data? A Proposal for a Metadata Infrastructure - Jian Qin, Kai Li
    -- Lessons Learned in Implementing the Extended Date/Time Format in a Large Digital Library - Hannah Tarver, Mark Phillips
    -- Towards the Representation of Chinese Traditional Music: A State of the Art Review of Music Metadata Standards - Mi Tian, György Fazekas, Dawn Black, Mark Sandler
    -- Maps and Gaps: Strategies for Vocabulary Design and Development - Diane Ileana Hillmann, Gordon Dunsire, Jon Phipps
    -- A Method for the Development of Dublin Core Application Profiles (Me4DCAP V0.1): a description - Mariana Curado Malta, Ana Alice Baptista
    -- Find and Combine Vocabularies to Design Metadata Application Profiles using Schema Registries and LOD Resources - Tsunagu Honma, Mitsuharu Nagamori, Shigeo Sugimoto
    -- Achieving Interoperability between the CARARE Schema for Monuments and Sites and the Europeana Data Model - Antoine Isaac, Valentine Charles, Kate Fernie, Costis Dallas, Dimitris Gavrilis, Stavros Angelis
    -- With a Focused Intent: Evolution of DCMI as a Research Community - Jihee Beak, Richard P. Smiraglia
    -- Metadata Capital in a Data Repository - Jane Greenberg, Shea Swauger, Elena Feinstein
    -- DC Metadata is Alive and Well - A New Standard for Education - Liddy Nevile
    -- Representation of the UNIMARC Bibliographic Data Format in Resource Description Framework - Gordon Dunsire, Mirna Willer, Predrag Perozic
  13. Mi, X.M.; Pollock, B.M.: Metadata schema to facilitate linked data for 3D digital models of cultural heritage collections : a University of South Florida Libraries case study (2018) 0.05
    0.05135564 = product of:
      0.10271128 = sum of:
        0.073112294 = weight(_text_:digital in 5171) [ClassicSimilarity], result of:
          0.073112294 = score(doc=5171,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.36980176 = fieldWeight in 5171, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=5171)
        0.029598987 = product of:
          0.059197973 = sum of:
            0.059197973 = weight(_text_:project in 5171) [ClassicSimilarity], result of:
              0.059197973 = score(doc=5171,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.27981415 = fieldWeight in 5171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5171)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The University of South Florida Libraries house and provide access to a collection of cultural heritage and 3D digital models. In an effort to provide greater access to these collections, a linked data project has been implemented. A metadata schema for the 3D cultural heritage objects which uses linked data is an excellent way to share these collections with other repositories, thus gaining global exposure and access to these valuable resources. This article will share the process of building the 3D cultural heritage metadata model as well as an assessment of the model and recommendations for future linked data projects.
  14. Hider, P.: Information resource description : creating and managing metadata (2012) 0.05
    0.046881348 = product of:
      0.093762696 = sum of:
        0.07461992 = weight(_text_:digital in 2086) [ClassicSimilarity], result of:
          0.07461992 = score(doc=2086,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.37742734 = fieldWeight in 2086, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2086)
        0.01914278 = weight(_text_:library in 2086) [ClassicSimilarity], result of:
          0.01914278 = score(doc=2086,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 2086, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2086)
      0.5 = coord(2/4)
    
    Abstract
    An overview of the field of information organization that examines resource description as both a product and process of the contemporary digital environment. This timely book employs the unifying mechanism of the semantic web and the resource description framework to integrate the various traditions and practices of information and knowledge organization. Uniquely, it covers both the domain-specific traditions and practices and the practices of the 'metadata movement' through a single lens - that of resource description in the broadest, semantic web sense. This approach more readily accommodates coverage of the new Resource Description and Access (RDA) standard, which aims to move library cataloguing into the centre of the semantic web. The work surrounding RDA looks set to revolutionise the field of information organization, and this book will bring both the standard and its model and concepts into focus.
    LCSH
    Digital preservation ; Metadata
    Subject
    Digital preservation ; Metadata
  15. Edmunds, J.: Roadmap to nowhere : BIBFLOW, BIBFRAME, and linked data for libraries (2017) 0.05
    0.04552724 = product of:
      0.09105448 = sum of:
        0.039787523 = weight(_text_:library in 3523) [ClassicSimilarity], result of:
          0.039787523 = score(doc=3523,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.30190483 = fieldWeight in 3523, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=3523)
        0.051266953 = product of:
          0.10253391 = sum of:
            0.10253391 = weight(_text_:project in 3523) [ClassicSimilarity], result of:
              0.10253391 = score(doc=3523,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.48465237 = fieldWeight in 3523, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3523)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    On December 12, 2016, Carl Stahmer and MacKenzie Smith presented at the CNI Members Fall Meeting about the BIBFLOW project, self-described on Twitter as "a two-year project of the UC Davis University Library and Zepheira investigating the future of library technical services." In her opening remarks, Ms. Smith, University Librarian at UC Davis, stated that one of the goals of the project was to devise a roadmap "to get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon." The notion that where libraries are today is somehow behind the times is one of the commonly heard rationales behind a move to linked data. Stated more precisely:
    - Libraries devote considerable time and resources to producing high-quality bibliographic metadata
    - This metadata is stored in unconnected silos
    - This metadata is in a format (MARC) that is incompatible with technologies of the emerging Semantic Web
    - The visibility of library metadata is diminished as a result of the two points above
    Are these assertions true? If yes, is linked data the solution?
  16. Stiller, J.; Olensky, M.; Petras, V.: ¬A framework for the evaluation of automatic metadata enrichments (2014) 0.04
    0.043557227 = product of:
      0.08711445 = sum of:
        0.060314562 = weight(_text_:digital in 1587) [ClassicSimilarity], result of:
          0.060314562 = score(doc=1587,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 1587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1587)
        0.026799891 = weight(_text_:library in 1587) [ClassicSimilarity], result of:
          0.026799891 = score(doc=1587,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 1587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1587)
      0.5 = coord(2/4)
    
    Abstract
    Automatic enrichment of collections connects data to vocabularies, which supports the contextualization of content and adds searchable text to metadata. The paper introduces a framework of four dimensions (frequency, coverage, relevance and error rate) that measure both the suitability of the enrichment for the object and the enrichments' contribution to search success. To verify the framework, it is applied to the evaluation of automatic enrichments in the digital library Europeana. The analysis of 100 result sets and their corresponding queries (1,121 documents total) shows the framework is a valuable tool for guiding enrichments and determining the value of enrichment efforts.
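    The four dimensions named above might be operationalized as in the following sketch. The data layout (per-document enrichment lists with boolean relevance and error flags) and the exact formulas are assumptions for illustration; the paper defines the dimensions against Europeana result sets, not this toy structure.

```python
def evaluate_enrichments(documents):
    """Compute the four framework dimensions over a result set.

    Each document is a dict with an 'enrichments' list; each enrichment
    is a dict with boolean 'relevant' and 'erroneous' flags (assumed
    layout, for illustration only).
    """
    total_docs = len(documents)
    enriched_docs = [d for d in documents if d["enrichments"]]
    all_enrichments = [e for d in documents for e in d["enrichments"]]

    coverage = len(enriched_docs) / total_docs        # share of docs enriched
    frequency = len(all_enrichments) / total_docs     # enrichments per doc
    if all_enrichments:
        relevance = sum(e["relevant"] for e in all_enrichments) / len(all_enrichments)
        error_rate = sum(e["erroneous"] for e in all_enrichments) / len(all_enrichments)
    else:
        relevance = error_rate = 0.0
    return {"coverage": coverage, "frequency": frequency,
            "relevance": relevance, "error_rate": error_rate}
```

    Run against a sample of queries and their result sets, such aggregate numbers would indicate both whether enrichments are being produced at all (coverage, frequency) and whether they are worth producing (relevance, error rate).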
  17. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.04
    0.042943195 = product of:
      0.08588639 = sum of:
        0.043081827 = weight(_text_:digital in 3870) [ClassicSimilarity], result of:
          0.043081827 = score(doc=3870,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.21790776 = fieldWeight in 3870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
        0.042804558 = weight(_text_:library in 3870) [ClassicSimilarity], result of:
          0.042804558 = score(doc=3870,freq=10.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.32479787 = fieldWeight in 3870, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
      0.5 = coord(2/4)
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine, digital, English language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with meaningful thematic overlay, library holding count data was also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
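    The combination step described above might look like the following sketch: subject headings from each record become book-subject edges, and harvested holding counts are aggregated per heading as the thematic overlay. The record layout and field names are invented for illustration, not taken from the article.

```python
from collections import defaultdict

def book_subject_map(records):
    """Build (book, heading) edges plus per-heading holding totals.

    records: list of dicts with 'title', 'subjects' (list of headings)
    and 'holdings' (aggregated library holding count) -- an assumed
    layout that a visualization tool could size or color nodes from.
    """
    edges = []
    heading_weight = defaultdict(int)
    for rec in records:
        for heading in rec["subjects"]:
            edges.append((rec["title"], heading))
            heading_weight[heading] += rec.get("holdings", 0)
    return edges, dict(heading_weight)
```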
  18. Khoo, M.J.; Ahn, J.-w.; Binding, C.; Jones, H.J.; Lin, X.; Massam, D.; Tudhope, D.: Augmenting Dublin Core digital library metadata with Dewey Decimal Classification (2015) 0.04
    0.042122573 = product of:
      0.084245145 = sum of:
        0.068930924 = weight(_text_:digital in 2320) [ClassicSimilarity], result of:
          0.068930924 = score(doc=2320,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.34865242 = fieldWeight in 2320, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=2320)
        0.015314223 = weight(_text_:library in 2320) [ClassicSimilarity], result of:
          0.015314223 = score(doc=2320,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.11620321 = fieldWeight in 2320, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=2320)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - The purpose of this paper is to describe a new approach to a well-known problem for digital libraries, how to search across multiple unrelated libraries with a single query.
    Design/methodology/approach - The approach involves creating new Dewey Decimal Classification terms and numbers from existing Dublin Core records. In total, 263,550 records were harvested from three digital libraries. Weighted key terms were extracted from the title, description and subject fields of each record. Ranked DDC classes were automatically generated from these key terms by considering DDC hierarchies via a series of filtering and aggregation stages. A mean reciprocal ranking evaluation compared a sample of 49 generated classes against DDC classes created by a trained librarian for the same records.
    Findings - The best results combined weighted key terms from the title, description and subject fields. Performance declines with increased specificity of DDC level. The results compare favorably with similar studies.
    Research limitations/implications - The metadata harvest required manual intervention and the evaluation was resource intensive. Future research will look at evaluation methodologies that take account of issues of consistency and ecological validity.
    Practical implications - The method does not require training data and is easily scalable. The pipeline can be customized for individual use cases, for example, recall or precision enhancing.
    Social implications - The approach can provide centralized access to information from multiple domains currently provided by individual digital libraries.
    Originality/value - The approach addresses metadata normalization in the context of web resources. The automatic classification approach accounts for matches within hierarchies, aggregating lower level matches to broader parents and thus approximates the practices of a human cataloger.
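    The core aggregation idea in the pipeline above - rolling specific matches up to broader DDC parents and ranking the result, then scoring with mean reciprocal rank - can be sketched as follows. The field weights, the toy term-to-class table, and the use of leading-digit truncation as the hierarchy step are all assumptions for illustration, not the paper's actual parameters.

```python
from collections import defaultdict

FIELD_WEIGHTS = {"title": 3.0, "subject": 2.0, "description": 1.0}  # assumed weights

def rank_ddc_classes(record, term_to_ddc, level=2):
    """Score candidate DDC classes for a Dublin Core record.

    Matches at specific DDC numbers are aggregated up to a broader
    parent by truncating to the first `level` digits.
    """
    scores = defaultdict(float)
    for field, weight in FIELD_WEIGHTS.items():
        for term in record.get(field, "").lower().split():
            for ddc in term_to_ddc.get(term, []):
                scores[ddc[:level]] += weight  # roll up to broader parent
    return sorted(scores.items(), key=lambda kv: -kv[1])

def mean_reciprocal_rank(ranked_lists, gold):
    """MRR of the librarian-assigned class within each ranked candidate list."""
    total = 0.0
    for ranked, true_cls in zip(ranked_lists, gold):
        classes = [c for c, _ in ranked]
        if true_cls in classes:
            total += 1.0 / (classes.index(true_cls) + 1)
    return total / len(ranked_lists)
```

    A real pipeline would derive the term weights from the harvested records and filter candidate classes at each hierarchy level, but the roll-up-and-rank shape would be the same.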
  19. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.04
    0.042041123 = product of:
      0.084082246 = sum of:
        0.060314562 = weight(_text_:digital in 3283) [ClassicSimilarity], result of:
          0.060314562 = score(doc=3283,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.023767682 = product of:
          0.047535364 = sum of:
            0.047535364 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.047535364 = score(doc=3283,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
  20. Managing metadata in web-scale discovery systems (2016) 0.04
    0.040204063 = product of:
      0.080408126 = sum of:
        0.034465462 = weight(_text_:digital in 3336) [ClassicSimilarity], result of:
          0.034465462 = score(doc=3336,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.17432621 = fieldWeight in 3336, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
        0.045942668 = weight(_text_:library in 3336) [ClassicSimilarity], result of:
          0.045942668 = score(doc=3336,freq=18.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.34860963 = fieldWeight in 3336, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=3336)
      0.5 = coord(2/4)
    
    Abstract
    This book shows you how to harness the power of linked data and web-scale discovery systems to manage and link widely varied content across your library collection. Libraries are increasingly using web-scale discovery systems to help clients find a wide assortment of library materials, including books, journal articles, special collections, archival collections, videos, music and open access collections. Depending on the library material catalogued, the discovery system might need to negotiate different metadata standards, such as AACR, RDA, RAD, FOAF, VRA Core, METS, MODS, RDF and more. In Managing Metadata in Web-Scale Discovery Systems, editor Louise Spiteri and a range of international experts show you how to:
    * maximize the effectiveness of web-scale discovery systems
    * provide a smooth and seamless discovery experience to your users
    * help users conduct searches that yield relevant results
    * manage the sheer volume of items to which you can provide access, so your users can actually find what they need
    * maintain shared records that reflect the needs, languages, and identities of culturally and ethnically varied communities
    * manage metadata both within, across, and outside, library discovery tools by converting your library metadata to linked open data that all systems can access
    * manage user generated metadata from external services such as Goodreads and LibraryThing
    * mine user generated metadata to better serve your users in areas such as collection development or readers' advisory.
    The book will be essential reading for cataloguers, technical services and systems librarians and library and information science students studying modules on metadata, cataloguing, systems design, data management, and digital libraries. The book will also be of interest to those managing metadata in archives, museums and other cultural heritage institutions.
    Content
    1. Introduction: the landscape of web-scale discovery - Louise Spiteri
    2. Sharing metadata across discovery systems - Marshall Breeding, Angela Kroeger and Heather Moulaison Sandy
    3. Managing linked open data across discovery systems - Ali Shiri and Danoosh Davoodi
    4. Redefining library resources in discovery systems - Christine DeZelar-Tiedman
    5. Managing volume in discovery systems - Aaron Tay
    6. Managing outsourced metadata in discovery systems - Laurel Tarulli
    7. Managing user-generated metadata in discovery systems - Louise Spiteri
    LCSH
    Online library catalogs
    Subject
    Online library catalogs

Languages

  • e 85
  • d 4
