Search (136 results, page 1 of 7)

  • type_ss:"a"
  • type_ss:"el"
  • year_i:[2010 TO 2020}
  1. Hardesty, J.L.; Young, J.B.: ¬The semantics of metadata : Avalon Media System and the move to RDF (2017) 0.13
    0.133226 = product of:
      0.17763469 = sum of:
        0.10339639 = weight(_text_:digital in 3896) [ClassicSimilarity], result of:
          0.10339639 = score(doc=3896,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.52297866 = fieldWeight in 3896, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
        0.022971334 = weight(_text_:library in 3896) [ClassicSimilarity], result of:
          0.022971334 = score(doc=3896,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 3896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=3896)
        0.051266953 = product of:
          0.10253391 = sum of:
            0.10253391 = weight(_text_:project in 3896) [ClassicSimilarity], result of:
              0.10253391 = score(doc=3896,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.48465237 = fieldWeight in 3896, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3896)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
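    The indented breakdowns shown with each result are Lucene "explain" output for the ClassicSimilarity (classic TF-IDF) ranking formula: each term score is queryWeight × fieldWeight, and the document score is the sum of the matching term scores scaled by the coordination factor. As a minimal sketch, the following Python reproduces the "digital" term score and the total for the first result; all inputs are copied from the tree above, and only the arithmetic is added here.

```python
import math

# Reproduces the weight(_text_:digital in 3896) entry from the explain tree
# above (Lucene ClassicSimilarity). All inputs are copied from the tree.
freq, doc_freq, max_docs = 8.0, 2326, 44218
query_norm, field_norm = 0.050121464, 0.046875

idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.944552
tf = math.sqrt(freq)                               # 2.828427 = tf(freq=8.0)
query_weight = idf * query_norm                    # 0.19770671
field_weight = tf * idf * field_norm               # 0.52297866
term_score = query_weight * field_weight           # 0.10339639

# The document score sums the three matching term scores and applies the
# coordination factor coord(3/4) = 0.75 for matching 3 of 4 query clauses.
doc_score = (0.10339639 + 0.022971334 + 0.051266953) * 0.75
print(round(term_score, 8), round(doc_score, 6))   # 0.10339639 0.133226
```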
    
    Abstract
    The Avalon Media System (Avalon) provides access and management for digital audio and video collections in libraries and archives. The open-source project is led by the libraries of Indiana University Bloomington and Northwestern University and is funded in part by grants from The Andrew W. Mellon Foundation and the Institute of Museum and Library Services. Avalon is based on the Samvera Community (formerly Hydra Project) software stack and uses Fedora as the digital repository back end. The Avalon project team is in the process of migrating digital repositories from Fedora 3 to Fedora 4 and incorporating metadata statements using the Resource Description Framework (RDF) instead of XML files accompanying the digital objects in the repository. The Avalon team has worked on the migration path for technical metadata and is now working on the migration paths for structural metadata (PCDM) and descriptive metadata (from MODS XML to RDF). This paper covers the decisions made to begin using RDF for software development and offers a window into how Semantic Web technology functions in the real world.
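    Since the paper's point is the shift from sidecar XML files to RDF statements about the objects themselves, a toy sketch may help. The URI and the Dublin Core vocabulary below are illustrative assumptions, not Avalon's actual mapping.

```python
from rdflib import Graph, Literal, Namespace, URIRef

# Toy illustration of the XML-to-RDF shift: a title that used to live in a
# sidecar MODS file becomes a statement about the object itself. The URI
# and the Dublin Core vocabulary are assumptions for illustration only.
DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
item = URIRef("https://example.org/avalon/media-object/123")
g.add((item, DCTERMS.title, Literal("Oral history interview, 1968")))

print(g.serialize(format="turtle"))
```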
  2. Stevens, G.: New metadata recipes for old cookbooks : creating and analyzing a digital collection using the HathiTrust Research Center Portal (2017) 0.12
    0.12459625 = product of:
      0.16612834 = sum of:
        0.0963339 = weight(_text_:digital in 3897) [ClassicSimilarity], result of:
          0.0963339 = score(doc=3897,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4872566 = fieldWeight in 3897, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3897)
        0.027071979 = weight(_text_:library in 3897) [ClassicSimilarity], result of:
          0.027071979 = score(doc=3897,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 3897, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3897)
        0.04272246 = product of:
          0.08544492 = sum of:
            0.08544492 = weight(_text_:project in 3897) [ClassicSimilarity], result of:
              0.08544492 = score(doc=3897,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.40387696 = fieldWeight in 3897, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3897)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The Early American Cookbooks digital project is a case study in analyzing collections as data using HathiTrust and the HathiTrust Research Center (HTRC) Portal. The purposes of the project are to create a freely available, searchable collection of full-text early American cookbooks within the HathiTrust Digital Library, to offer an overview of the scope and contents of the collection, and to analyze trends and patterns in the metadata and the full text of the collection. The digital project has two basic components: a collection of 1450 full-text cookbooks published in the United States between 1800 and 1920 and a website to present a guide to the collection and the results of the analysis. This article will focus on the workflow for analyzing the metadata and the full text of the collection. The workflow will cover: 1) creating a searchable public collection of full-text titles within the HathiTrust Digital Library and uploading it to the HTRC Portal, 2) analyzing and visualizing legacy MARC data for the collection using MarcEdit, OpenRefine and Tableau, and 3) using the text analysis tools in the HTRC Portal to look for trends and patterns in the full text of the collection.
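    As a hedged illustration of step 2 (profiling legacy MARC data), the sketch below tabulates publication dates from the 008 fixed field with pymarc. The file name is hypothetical, and the article's actual workflow uses MarcEdit, OpenRefine and Tableau interactively rather than code.

```python
from collections import Counter
from pymarc import MARCReader

# Profile legacy MARC data: count publication years from the 008 field.
# "cookbooks.mrc" is a hypothetical MARC export of the collection.
years = Counter()
with open("cookbooks.mrc", "rb") as fh:
    for record in MARCReader(fh):
        for f008 in record.get_fields("008"):
            year = f008.data[7:11]        # Date1 portion of the 008 field
            if year.isdigit():
                years[year] += 1

for year, count in sorted(years.items()):
    print(year, count)
```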
  3. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.10
    0.103954405 = product of:
      0.13860588 = sum of:
        0.034465462 = weight(_text_:digital in 3608) [ClassicSimilarity], result of:
          0.034465462 = score(doc=3608,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.17432621 = fieldWeight in 3608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.037512034 = weight(_text_:library in 3608) [ClassicSimilarity], result of:
          0.037512034 = score(doc=3608,freq=12.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.28463858 = fieldWeight in 3608, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.06662838 = sum of:
          0.039465316 = weight(_text_:project in 3608) [ClassicSimilarity], result of:
            0.039465316 = score(doc=3608,freq=2.0), product of:
              0.21156175 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.050121464 = queryNorm
              0.18654276 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
          0.027163066 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
            0.027163066 = score(doc=3608,freq=2.0), product of:
              0.17551683 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050121464 = queryNorm
              0.15476047 = fieldWeight in 3608, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3608)
      0.75 = coord(3/4)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable (as alive in the digital world) as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  4. Bartczak, J.; Glendon, I.: Python, Google Sheets, and the Thesaurus for Graphic Materials for efficient metadata project workflows (2017) 0.09
    0.09426196 = product of:
      0.1256826 = sum of:
        0.073112294 = weight(_text_:digital in 3893) [ClassicSimilarity], result of:
          0.073112294 = score(doc=3893,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.36980176 = fieldWeight in 3893, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=3893)
        0.022971334 = weight(_text_:library in 3893) [ClassicSimilarity], result of:
          0.022971334 = score(doc=3893,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 3893, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=3893)
        0.029598987 = product of:
          0.059197973 = sum of:
            0.059197973 = weight(_text_:project in 3893) [ClassicSimilarity], result of:
              0.059197973 = score(doc=3893,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.27981415 = fieldWeight in 3893, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3893)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In 2017, the University of Virginia (U.Va.) will launch a two-year initiative to celebrate the bicentennial anniversary of the University's founding in 1819. The U.Va. Library is participating in this event by digitizing some 20,000 photographs and negatives that document student life on the U.Va. grounds in the 1960s and 1970s. Metadata librarians and archivists are well-versed in the challenges associated with generating digital content and accompanying description within the context of limited resources. This paper describes how technology and new approaches to metadata design have enabled the University of Virginia's Metadata Analysis and Design Department to rapidly and successfully generate accurate description for these digital objects. Python's pandas module improves efficiency by cleaning and repurposing data recorded at digitization, while the lxml module builds MODS XML programmatically from CSV tables. A simplified technique for subject heading selection and assignment in Google Sheets provides a collaborative environment for streamlined metadata creation and data quality control.
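    A condensed sketch of the described pipeline follows, with hypothetical file and column names standing in for U.Va.'s actual (richer) mappings: pandas cleans the digitization log and lxml emits one MODS record per row.

```python
import pandas as pd
from lxml import etree

# Condensed sketch of the paper's pipeline; digitization_log.csv and its
# columns (identifier, title) are hypothetical stand-ins for U.Va.'s data.
MODS = "http://www.loc.gov/mods/v3"

df = pd.read_csv("digitization_log.csv")
df["title"] = df["title"].str.strip()              # example pandas cleanup

for _, row in df.iterrows():
    mods = etree.Element("{%s}mods" % MODS, nsmap={None: MODS})
    title_info = etree.SubElement(mods, "{%s}titleInfo" % MODS)
    etree.SubElement(title_info, "{%s}title" % MODS).text = row["title"]
    etree.ElementTree(mods).write("%s.xml" % row["identifier"],
                                  encoding="UTF-8", xml_declaration=True,
                                  pretty_print=True)
```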
  5. Dowding, H.; Gengenbach, M.; Graham, B.; Meister, S.; Moran, J.; Peltzman, S.; Seifert, J.; Waugh, D.: OSS4EVA: using open-source tools to fulfill digital preservation requirements (2016) 0.09
    0.09171251 = product of:
      0.12228335 = sum of:
        0.086163655 = weight(_text_:digital in 3200) [ClassicSimilarity], result of:
          0.086163655 = score(doc=3200,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4358155 = fieldWeight in 3200, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3200)
        0.01914278 = weight(_text_:library in 3200) [ClassicSimilarity], result of:
          0.01914278 = score(doc=3200,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 3200, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3200)
        0.016976917 = product of:
          0.033953834 = sum of:
            0.033953834 = weight(_text_:22 in 3200) [ClassicSimilarity], result of:
              0.033953834 = score(doc=3200,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.19345059 = fieldWeight in 3200, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3200)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This paper builds on the findings of a workshop held at the 2015 International Conference on Digital Preservation (iPRES), entitled "Using Open-Source Tools to Fulfill Digital Preservation Requirements" (OSS4PRES hereafter). This day-long workshop brought together participants from across the library and archives community, including practitioners, proprietary vendors, and representatives from open-source projects. The resulting conversations were surprisingly revealing: while OSS's significance within the preservation landscape was made clear, participants noted that there are a number of roadblocks that discourage or altogether prevent its use in many organizations. Overcoming these challenges will be necessary to further widespread, sustainable OSS adoption within the digital preservation community. This article will mine the rich discussions that took place at OSS4PRES to (1) summarize the workshop's key themes and major points of debate, (2) provide a comprehensive analysis of the opportunities, gaps, and challenges that using OSS entails at a philosophical, institutional, and individual level, and (3) offer a tangible set of recommendations for future work designed to broaden community engagement and enhance the sustainability of open-source initiatives, drawing on both participants' experience as well as additional research.
    Date
    28.10.2016 18:22:33
  6. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2012) 0.09
    0.09123495 = product of:
      0.121646605 = sum of:
        0.060314562 = weight(_text_:digital in 1717) [ClassicSimilarity], result of:
          0.060314562 = score(doc=1717,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 1717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1717)
        0.026799891 = weight(_text_:library in 1717) [ClassicSimilarity], result of:
          0.026799891 = score(doc=1717,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 1717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1717)
        0.034532152 = product of:
          0.069064304 = sum of:
            0.069064304 = weight(_text_:project in 1717) [ClassicSimilarity], result of:
              0.069064304 = score(doc=1717,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32644984 = fieldWeight in 1717, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1717)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The German subject headings authority file (Schlagwortnormdatei/SWD) provides a broad controlled vocabulary for indexing documents of all subjects. Traditionally used for intellectual subject cataloguing, primarily of books, the Deutsche Nationalbibliothek (DNB, German National Library) has been working on developing and implementing procedures for the automated assignment of subject headings for online publications. This project, its results, and its problems are sketched in the paper.
    Content
    Paper for the conference "Beyond libraries - subject metadata in the digital environment and semantic web", IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn. See: http://www.nlib.ee/index.php?id=17763.
  7. Mayo, D.; Bowers, K.: ¬The devil's shoehorn : a case study of EAD to ArchivesSpace migration at a large university (2017) 0.08
    0.0846572 = product of:
      0.112876266 = sum of:
        0.043081827 = weight(_text_:digital in 3373) [ClassicSimilarity], result of:
          0.043081827 = score(doc=3373,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.21790776 = fieldWeight in 3373, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.027071979 = weight(_text_:library in 3373) [ClassicSimilarity], result of:
          0.027071979 = score(doc=3373,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 3373, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3373)
        0.04272246 = product of:
          0.08544492 = sum of:
            0.08544492 = weight(_text_:project in 3373) [ClassicSimilarity], result of:
              0.08544492 = score(doc=3373,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.40387696 = fieldWeight in 3373, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3373)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    A band of archivists and IT professionals at Harvard took on a project to convert nearly two million descriptions of archival collection components from marked-up text into the ArchivesSpace archival metadata management system. Starting in the mid-1990s, Harvard was an alpha implementer of EAD, an SGML (later XML) text markup language for electronic inventories, indexes, and finding aids that archivists use to wend their way through the sometimes quirky filing systems that bureaucracies establish for their records or the utter chaos in which some individuals keep their personal archives. These pathfinder documents, designed to cope with messy reality, can themselves be difficult to classify. Portions of them are rigorously structured, while other parts are narrative. Early documents predate the establishment of the standard; many feature idiosyncratic encoding that had been through several machine conversions, while others were freshly encoded and fairly consistent. In this paper, we will cover the practical and technical challenges involved in preparing a large (900MiB) corpus of XML for ingest into an open-source archival information system (ArchivesSpace). This case study will give an overview of the project, discuss problem discovery and problem solving, address the technical challenges, analysis, solutions, and decisions, and provide information on the tools produced and lessons learned. The authors of this piece are Kate Bowers, Collections Services Archivist for Metadata, Systems, and Standards at the Harvard University Archives, and Dave Mayo, a Digital Library Software Engineer for Harvard's Library and Technology Services. Kate was heavily involved in both metadata analysis and later problem solving, while Dave was the sole full-time developer assigned to the migration project.
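    As a minimal sketch of the problem-discovery phase, assuming a directory of EAD files (the path is hypothetical), one can at least separate well-formed XML from files needing repair before attempting ingest. The real migration also had to handle idiosyncratic-but-parseable encodings, which this does not attempt.

```python
from pathlib import Path
from lxml import etree

# Problem-discovery sketch: find EAD files that are not even well-formed
# XML before attempting ingest. "ead_corpus" is a hypothetical directory.
problems = []
for path in sorted(Path("ead_corpus").glob("*.xml")):
    try:
        etree.parse(str(path))
    except etree.XMLSyntaxError as err:
        problems.append((path.name, str(err)))

print("%d files failed to parse" % len(problems))
for name, err in problems[:10]:
    print(name, "->", err)
```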
  8. Gore, E.; Bitta, M.D.; Cohen, D.: ¬The Digital Public Library of America and the National Digital Platform (2017) 0.08
    0.083210856 = product of:
      0.16642171 = sum of:
        0.1266342 = weight(_text_:digital in 3655) [ClassicSimilarity], result of:
          0.1266342 = score(doc=3655,freq=12.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.6405154 = fieldWeight in 3655, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=3655)
        0.039787523 = weight(_text_:library in 3655) [ClassicSimilarity], result of:
          0.039787523 = score(doc=3655,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.30190483 = fieldWeight in 3655, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=3655)
      0.5 = coord(2/4)
    
    Abstract
    The Digital Public Library of America brings together the riches of America's libraries, archives, and museums, and makes them freely available to the world. In order to do this, DPLA has had to build elements of the national digital platform to connect to those institutions and to serve their digitized materials to audiences. In this article, we detail the construction of two critical elements of our work: the decentralized national network of "hubs," which operate in states across the country; and a version of the Hydra repository software that is tailored to the needs of our community. This technology and the organizations that make use of it serve as the foundation of the future of DPLA and other projects that seek to take advantage of the national digital platform.
    Object
    Digital Public Library of America
  9. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.08
    0.07761024 = product of:
      0.10348032 = sum of:
        0.051698197 = weight(_text_:digital in 1967) [ClassicSimilarity], result of:
          0.051698197 = score(doc=1967,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.26148933 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.022971334 = weight(_text_:library in 1967) [ClassicSimilarity], result of:
          0.022971334 = score(doc=1967,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.028810784 = product of:
          0.05762157 = sum of:
            0.05762157 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.05762157 = score(doc=1967,freq=4.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and /or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  10. Frank, I.: Fortschritt durch Rückschritt : vom Bibliothekskatalog zum Denkwerkzeug. Eine Idee (2016) 0.06
    0.06405575 = product of:
      0.1281115 = sum of:
        0.097483054 = weight(_text_:digital in 3982) [ClassicSimilarity], result of:
          0.097483054 = score(doc=3982,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.493069 = fieldWeight in 3982, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0625 = fieldNorm(doc=3982)
        0.030628446 = weight(_text_:library in 3982) [ClassicSimilarity], result of:
          0.030628446 = score(doc=3982,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.23240642 = fieldWeight in 3982, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=3982)
      0.5 = coord(2/4)
    
    Abstract
    Through an essayistic and selective look back at the era before the Digital Humanities, the text presents approaches from library and information science to developing hypertextual tools for bibliography management and for structuring scholarly discourse: a forward-looking idea for a digital humanities that supports scholarly thinking beyond mere 'distant thinking'.
    Content
    Contribution to a special issue on "Post-Digital Humanities". See: http://libreas.eu/ausgabe30/frank/.
    Source
    LIBREAS: Library ideas. no.30, 2016
  11. Freyberg, L.: ¬Die Lesbarkeit der Welt : Rezension zu 'The Concept of Information in Library and Information Science. A Field in Search of Its Boundaries: 8 Short Comments Concerning Information'. In: Cybernetics and Human Knowing. Vol. 22 (2015), 1, 57-80. Kurzartikel von Luciano Floridi, Søren Brier, Torkild Thellefsen, Martin Thellefsen, Bent Sørensen, Birger Hjørland, Brenda Dervin, Ken Herold, Per Hasle und Michael Buckland (2016) 0.06
    0.05592901 = product of:
      0.07457201 = sum of:
        0.034465462 = weight(_text_:digital in 3335) [ClassicSimilarity], result of:
          0.034465462 = score(doc=3335,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.17432621 = fieldWeight in 3335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=3335)
        0.026525015 = weight(_text_:library in 3335) [ClassicSimilarity], result of:
          0.026525015 = score(doc=3335,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20126988 = fieldWeight in 3335, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.03125 = fieldNorm(doc=3335)
        0.013581533 = product of:
          0.027163066 = sum of:
            0.027163066 = weight(_text_:22 in 3335) [ClassicSimilarity], result of:
              0.027163066 = score(doc=3335,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.15476047 = fieldWeight in 3335, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3335)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    According to its subtitle, the journal is devoted to "second order cybernetics, autopoiesis and cyber-semiotics"; it has existed as a print publication since 1992/93. Since 1998 (volume 5, issue 1) it has also been offered electronically, for a fee, as part of a package from the publisher Imprint Academic in Exeter. Because of an orientation that could be regarded as a theoretical contribution to the Digital Humanities (avant la lettre), the concept of information is treated there regularly. In this context the phenomenologically and mathematically grounded semiotics of Charles Sanders Peirce comes up again and again. The connection to practice, above all in the field of library and information science (LIS), always plays a major role here, as can also be observed in Brier's own work: in his principal book "Cybersemiotics" he applies the Peircean sign categories to, among other things, the librarian's task of indexing. Issue 1/2015 of the journal asks "What underlines Information?" and contains, among others, articles on the Chinese scholar Wu Kun's outline of a philosophy of information as well as on Peirce and Spencer Brown. The eight short articles on the concept of information in library and information science were compiled by the Thellefsen brothers (Torkild and Martin) together with Bent Sørensen, who themselves also co-authored one of the comments.
    Source
    LIBREAS: Library ideas. no.30, 2016
  12. Neumann, M.; Steinberg, J.; Schaer, P.: Web scraping for non-programmers : introducing OXPath for digital library metadata harvesting (2017) 0.05
    0.053888097 = product of:
      0.107776195 = sum of:
        0.07461992 = weight(_text_:digital in 3895) [ClassicSimilarity], result of:
          0.07461992 = score(doc=3895,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.37742734 = fieldWeight in 3895, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
        0.033156272 = weight(_text_:library in 3895) [ClassicSimilarity], result of:
          0.033156272 = score(doc=3895,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.25158736 = fieldWeight in 3895, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3895)
      0.5 = coord(2/4)
    
    Abstract
    Building up new collections for digital libraries is a demanding task. Available data sets have to be extracted, which is usually done with the help of software developers, as it involves custom data handlers or conversion scripts. In cases where the desired data is only available on the data provider's website, custom web scrapers are needed. This may be the case for small to medium-sized publishers, research institutes, or funding agencies. As data curation is a typical task done by people with a library and information science background, these people are usually proficient with XML technologies but are not full-stack programmers. Therefore we would like to present a web scraping tool that does not demand that digital library curators program custom web scrapers from scratch. We present the open-source tool OXPath, an extension of XPath, that allows the user to define the data to be extracted from websites in a declarative way. Taking one of our own use cases as an example, we guide you in more detail through the process of creating an OXPath wrapper for metadata harvesting. We also point out some practical things to consider when creating a web scraper (with OXPath). On top of that, we also present a syntax-highlighting plugin for the popular text editor Atom that we developed to further support OXPath users and to simplify the authoring process.
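    OXPath adds browser actions and extraction markers on top of XPath; as a stand-in, the sketch below shows only the plain-XPath core of that idea on a static page, using requests and lxml. The URL and the class names are hypothetical placeholders, not a real data provider.

```python
import requests
from lxml import html

# Declarative extraction with plain XPath (the core idea OXPath extends).
# "https://example.org/publications" and the class names are hypothetical.
page = html.fromstring(requests.get("https://example.org/publications").text)

for item in page.xpath("//div[@class='publication']"):
    title = item.xpath("string(.//h2)").strip()
    year = item.xpath("string(.//span[@class='year'])").strip()
    print(title, "|", year)
```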
  13. Oßwald, A.; Weisbrod, D.: Öffentliche Bibliotheken als Partner bei der Archivierung persönlicher digitaler Materialien (2017) 0.05
    0.049107663 = product of:
      0.09821533 = sum of:
        0.060314562 = weight(_text_:digital in 3999) [ClassicSimilarity], result of:
          0.060314562 = score(doc=3999,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.30507088 = fieldWeight in 3999, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3999)
        0.03790077 = weight(_text_:library in 3999) [ClassicSimilarity], result of:
          0.03790077 = score(doc=3999,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.28758827 = fieldWeight in 3999, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3999)
      0.5 = coord(2/4)
    
    Abstract
    To date there is little German-language information or practical guidance on "Personal Digital Archiving" (PDA). In the USA, by contrast, the Library of Congress and the American Library Association have taken up the topic. The paper explains the origins, concept, and aims of PDA and sketches how PDA could be developed into a service of public libraries. In doing so it also draws on experience from a project that TH Köln carried out in cooperation with the Stadtbibliothek Köln.
  14. Lee, Y.Y.; Yang, S.Q.: Folksonomies as subject access : a survey of tagging in library online catalogs and discovery layers (2012) 0.05
    0.045742862 = product of:
      0.091485724 = sum of:
        0.051698197 = weight(_text_:digital in 309) [ClassicSimilarity], result of:
          0.051698197 = score(doc=309,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.26148933 = fieldWeight in 309, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=309)
        0.039787523 = weight(_text_:library in 309) [ClassicSimilarity], result of:
          0.039787523 = score(doc=309,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.30190483 = fieldWeight in 309, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=309)
      0.5 = coord(2/4)
    
    Abstract
    This paper describes a survey of how system vendors and libraries handle tagging in OPACs and discovery layers. Tags are user-added subject metadata, also called folksonomies. The survey also investigated user behavior when users are given the option to tag. The findings indicate that legacy/classic systems have no tagging capability. About 47% of the discovery tools provide a tagging function. About 49% of the libraries that have a system with tagging capability have turned the tagging function on in their OPACs and discovery tools. Only 40% of the libraries that turned tagging on actually utilized user-added subject metadata as an access point to collections. Academic library users are less active in tagging than public library users.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  15. Edmunds, J.: Roadmap to nowhere : BIBFLOW, BIBFRAME, and linked data for libraries (2017) 0.05
    0.04552724 = product of:
      0.09105448 = sum of:
        0.039787523 = weight(_text_:library in 3523) [ClassicSimilarity], result of:
          0.039787523 = score(doc=3523,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.30190483 = fieldWeight in 3523, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=3523)
        0.051266953 = product of:
          0.10253391 = sum of:
            0.10253391 = weight(_text_:project in 3523) [ClassicSimilarity], result of:
              0.10253391 = score(doc=3523,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.48465237 = fieldWeight in 3523, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3523)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    On December 12, 2016, Carl Stahmer and MacKenzie Smith presented at the CNI Members Fall Meeting about the BIBFLOW project, self-described on Twitter as "a two-year project of the UC Davis University Library and Zepheira investigating the future of library technical services." In her opening remarks, Ms. Smith, University Librarian at UC Davis, stated that one of the goals of the project was to devise a roadmap "to get from where we are today, which is kind of the 1970s with a little lipstick on it, to 2020, which is where we're going to be very soon." The notion that where libraries are today is somehow behind the times is one of the commonly heard rationales behind a move to linked data. Stated more precisely:
    - Libraries devote considerable time and resources to producing high-quality bibliographic metadata
    - This metadata is stored in unconnected silos
    - This metadata is in a format (MARC) that is incompatible with technologies of the emerging Semantic Web
    - The visibility of library metadata is diminished as a result of the two points above
    Are these assertions true? If yes, is linked data the solution?
  16. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.04
    0.042943195 = product of:
      0.08588639 = sum of:
        0.043081827 = weight(_text_:digital in 3870) [ClassicSimilarity], result of:
          0.043081827 = score(doc=3870,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.21790776 = fieldWeight in 3870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
        0.042804558 = weight(_text_:library in 3870) [ClassicSimilarity], result of:
          0.042804558 = score(doc=3870,freq=10.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.32479787 = fieldWeight in 3870, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
      0.5 = coord(2/4)
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books, some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with meaningful thematic overlay, library holding count data was also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
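    As a small sketch of the book-subject mapping step, assume the harvested metadata has been flattened to one (title, heading) pair per row in a hypothetical CSV; the aggregation below yields the raw counts behind a book-subject map.

```python
import pandas as pd

# "obl_books_subjects.csv" and its columns (title, heading) are hypothetical:
# one row per (book, subject heading) pair harvested from the catalog data.
pairs = pd.read_csv("obl_books_subjects.csv")

per_book = pairs.groupby("title")["heading"].nunique()   # headings per book
shared = pairs.groupby("heading")["title"].nunique()     # books per heading

print(per_book.sort_values(ascending=False).head())      # most-described books
print(shared.sort_values(ascending=False).head())        # headings shared across books
```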
  17. Suchowolec, K.; Lang, C.; Schneider, R.: Re-designing online terminology resources for German grammar (2016) 0.04
    0.042796366 = product of:
      0.08559273 = sum of:
        0.060926907 = weight(_text_:digital in 3108) [ClassicSimilarity], result of:
          0.060926907 = score(doc=3108,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 3108, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3108)
        0.024665821 = product of:
          0.049331643 = sum of:
            0.049331643 = weight(_text_:project in 3108) [ClassicSimilarity], result of:
              0.049331643 = score(doc=3108,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23317845 = fieldWeight in 3108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3108)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The compilation of terminological vocabularies plays a central role in the organization and retrieval of scientific texts. Both simple keyword lists as well as sophisticated modellings of relationships between terminological concepts can make a most valuable contribution to the analysis, classification, and finding of appropriate digital documents, either on the Web or within local repositories. This seems especially true for long-established scientific fields with various theoretical and historical branches, such as linguistics, where the use of terminology within documents from different origins is sometimes far from being consistent. In this short paper, we report on the early stages of a project that aims at the re-design of an existing domain-specific KOS for grammatical content, grammis. In particular, we deal with the terminological part of grammis and present the current state of this online resource as well as the key re-design principles. Further, we raise questions regarding the ramifications of the Linked Open Data and Semantic Web approaches for our re-design decisions.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  18. Danskin, A.: RDA implementation and application : British Library (2014) 0.04
    0.04139024 = product of:
      0.08278048 = sum of:
        0.043315165 = weight(_text_:library in 1562) [ClassicSimilarity], result of:
          0.043315165 = score(doc=1562,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.32867232 = fieldWeight in 1562, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=1562)
        0.039465316 = product of:
          0.07893063 = sum of:
            0.07893063 = weight(_text_:project in 1562) [ClassicSimilarity], result of:
              0.07893063 = score(doc=1562,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.37308553 = fieldWeight in 1562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1562)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The British Library implemented the new international cataloguing standard RDA in April 2013. The paper describes the reasons for the change, the project organization, the necessary adaptations to the systems and the training programs. Altogether, 227 staff were trained. Productivity levels by now are comparable with the levels for AACR2. However, there was a tendency to spend too much time on authority control.
  19. Manguinhas, H.; Charles, V.; Isaac, A.; Miles, T.; Lima, A.; Neroulidis, A.; Ginouves, V.; Atsidis, D.; Hildebrand, M.; Brinkerink, M.; Gordea, S.: Linking subject labels in cultural heritage metadata to MIMO vocabulary using CultuurLink (2016) 0.04
    0.04064859 = product of:
      0.08129718 = sum of:
        0.051698197 = weight(_text_:digital in 3107) [ClassicSimilarity], result of:
          0.051698197 = score(doc=3107,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.26148933 = fieldWeight in 3107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=3107)
        0.029598987 = product of:
          0.059197973 = sum of:
            0.059197973 = weight(_text_:project in 3107) [ClassicSimilarity], result of:
              0.059197973 = score(doc=3107,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.27981415 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Europeana Sounds project aims to increase the amount of cultural audio content in Europeana. It also strongly focuses on enriching the metadata records that are aggregated by Europeana. To provide metadata to Europeana, Data Providers are asked to convert their records from the format and model they use internally to a specific profile of the Europeana Data Model (EDM) for sound resources. These metadata include subjects, which typically use a vocabulary internal to each partner.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  20. Woolcott, L.; Payant, A.; Skindelien, S.: Partnering for discoverability : Knitting archival finding aids to digitized material using a low tech digital content linking process (2016) 0.04
    0.040034845 = product of:
      0.08006969 = sum of:
        0.060926907 = weight(_text_:digital in 3198) [ClassicSimilarity], result of:
          0.060926907 = score(doc=3198,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 3198, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3198)
        0.01914278 = weight(_text_:library in 3198) [ClassicSimilarity], result of:
          0.01914278 = score(doc=3198,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 3198, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3198)
      0.5 = coord(2/4)
    
    Abstract
    As libraries continue to ramp up digitization efforts for unique archival and special collections material, the segregation of archival finding aids from their digitized counterparts presents an accumulating discoverability problem for both patrons and library staff. For Utah State University (USU) Libraries, it became evident that a system was necessary to connect both new and legacy finding aids with their digitized content to improve use and discoverability. Following a cross-departmental workflow analysis involving the Special Collections, Cataloging and Metadata, and Digital Initiatives departments, a process was created for semi-automating the batch linking of item and folder level entries in EAD finding aids to the corresponding digitized material in CONTENTdm. In addition to the obvious benefit of linking content, this cross-departmental process also allowed for the implementation of persistent identifiers and the enhancement of finding aids using the more robust metadata that accompanies digitized material. This article will provide a detailed overview of the process, as well as describe how the three departments at USU have worked together to identify key stakeholders, develop the procedures, and address future developments.
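    A minimal sketch of the batch-linking idea follows, with hypothetical file names, column names, and a simple unitid-to-URL match; the actual USU workflow is more involved. The sketch inserts a <dao> element into each EAD component for which a CONTENTdm URL is known.

```python
import csv
from lxml import etree

# Hypothetical inputs: cdm_links.csv maps EAD <unitid> values to CONTENTdm
# URLs; finding_aid.xml is an EAD 2002 document. The match-on-unitid rule
# and all file/column names are assumptions, not USU's actual code.
EAD = "urn:isbn:1-931666-22-9"                    # EAD 2002 namespace
XLINK = "http://www.w3.org/1999/xlink"

with open("cdm_links.csv", newline="") as fh:
    links = {row["unitid"]: row["url"] for row in csv.DictReader(fh)}

tree = etree.parse("finding_aid.xml")
for unitid in tree.iterfind(".//{%s}unitid" % EAD):
    url = links.get((unitid.text or "").strip())
    if url is not None:
        # <dao> sits alongside <unitid> inside the component's <did>
        dao = etree.SubElement(unitid.getparent(), "{%s}dao" % EAD)
        dao.set("{%s}href" % XLINK, url)

tree.write("finding_aid_linked.xml", encoding="UTF-8", xml_declaration=True)
```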

Languages

  • e 82
  • d 50
  • i 2
  • a 1
  • f 1