Search (402 results, page 1 of 21)

  • year_i:[2020 TO 2030}
  1. Lynch, J.D.; Gibson, J.; Han, M.-J.: Analyzing and normalizing type metadata for a large aggregated digital library (2020) 0.14
    0.14340337 = product of:
      0.19120449 = sum of:
        0.10446788 = weight(_text_:digital in 5720) [ClassicSimilarity], result of:
          0.10446788 = score(doc=5720,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.5283983 = fieldWeight in 5720, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5720)
        0.03790077 = weight(_text_:library in 5720) [ClassicSimilarity], result of:
          0.03790077 = score(doc=5720,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.28758827 = fieldWeight in 5720, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5720)
        0.048835836 = product of:
          0.09767167 = sum of:
            0.09767167 = weight(_text_:project in 5720) [ClassicSimilarity], result of:
              0.09767167 = score(doc=5720,freq=4.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.4616698 = fieldWeight in 5720, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5720)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
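    The indented breakdown above is Lucene's "explain" output for the ClassicSimilarity (TF-IDF) ranking model used by this search. As a reading aid, the short Python sketch below reproduces the top-ranked score from the factors listed: the constants (docFreq, maxDocs, queryNorm, fieldNorm) are copied from that output, and the tf/idf formulas are the standard ClassicSimilarity definitions, not anything specific to this database.

    ```python
    # Sketch: reproducing the top-ranked score from the ClassicSimilarity
    # "explain" output above. Values (docFreq, maxDocs, queryNorm, fieldNorm)
    # are copied from that output; the formulas are standard ClassicSimilarity.
    from math import log, sqrt

    query_norm = 0.050121464
    field_norm = 0.0546875          # length normalization stored for doc 5720
    max_docs = 44218

    def idf(doc_freq: int) -> float:
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + log(max_docs / (doc_freq + 1))

    def term_score(freq: float, doc_freq: int, coord: float = 1.0) -> float:
        # weight = queryWeight * fieldWeight
        #        = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
        query_weight = idf(doc_freq) * query_norm
        field_weight = sqrt(freq) * idf(doc_freq) * field_norm
        return coord * query_weight * field_weight

    score = 0.75 * (                      # coord(3/4): 3 of 4 query clauses matched
        term_score(6, 2326)               # "digital", freq=6
        + term_score(4, 8668)             # "library", freq=4
        + term_score(4, 1764, coord=0.5)  # "project", freq=4, inner coord(1/2)
    )
    print(round(score, 8))                # ~0.14340337, matching the listing
    ```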
    
    Abstract
    The Illinois Digital Heritage Hub (IDHH) gathers and enhances metadata from contributing institutions around the state of Illinois and provides this metadata to the Digital Public Library of America (DPLA) for greater access. The IDHH helps contributors shape their metadata to the standards recommended and required by the DPLA, in part by analyzing and enhancing aggregated metadata. In late 2018, the IDHH undertook a project to address a particularly problematic field, Type metadata. This paper walks through the project, detailing the process of gathering and analyzing metadata using the DPLA API and OpenRefine, data remediation through XSL transformations in conjunction with local improvements by contributing institutions, and the DPLA ingestion system's quality controls.
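    The workflow sketched in this abstract (harvest aggregated records, profile a single field, then remediate) can be approximated in a few lines. The snippet below is illustrative only: the endpoint and parameters follow the public DPLA API v2 as generally documented, the API key and provider filter are placeholders rather than values from the article, and the tally stands in for the kind of field profiling one would otherwise do in OpenRefine.

    ```python
    # Sketch: profile the Type field of records harvested from the DPLA API.
    # Endpoint/parameters follow the public DPLA API v2; the API key and the
    # provider filter are placeholders, not values taken from the article.
    from collections import Counter
    import requests

    API = "https://api.dp.la/v2/items"
    params = {
        "provider.name": "Illinois Digital Heritage Hub",  # hypothetical filter value
        "fields": "sourceResource.type",
        "page_size": 500,
        "api_key": "YOUR_DPLA_API_KEY",
    }

    type_counts = Counter()
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    for doc in resp.json().get("docs", []):
        value = doc.get("sourceResource.type", "MISSING")
        # Type may be a string or a list; normalize before counting.
        values = value if isinstance(value, list) else [value]
        type_counts.update(v.strip().lower() for v in values)

    # The resulting tally highlights non-conforming Type values (e.g. "photo",
    # "text/pdf") that XSL transformations or contributor fixes would remediate.
    for term, n in type_counts.most_common(20):
        print(f"{n:6d}  {term}")
    ```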
  2. Chou, C.; Chu, T.: ¬An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.14
    0.13541423 = product of:
      0.1805523 = sum of:
        0.085297674 = weight(_text_:digital in 1139) [ClassicSimilarity], result of:
          0.085297674 = score(doc=1139,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.43143538 = fieldWeight in 1139, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
        0.04641878 = weight(_text_:library in 1139) [ClassicSimilarity], result of:
          0.04641878 = score(doc=1139,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.3522223 = fieldWeight in 1139, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
        0.048835836 = product of:
          0.09767167 = sum of:
            0.09767167 = weight(_text_:project in 1139) [ClassicSimilarity], result of:
              0.09767167 = score(doc=1139,freq=4.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.4616698 = fieldWeight in 1139, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1139)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In light of AI (artificial intelligence) and NLP (natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used for machine-assisted indexing of the Project Gutenberg collection, by suggesting Library of Congress Subject Headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
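    The abstract does not spell out the authors' pipeline, so the following sketch only illustrates the general idea with a BERT-family sentence encoder: embed a book description and a pool of candidate subject headings (here a tiny hypothetical list, assumed to be pre-filtered by LCC subclass), then rank the candidates by cosine similarity.

    ```python
    # Illustrative sketch only: rank candidate LCSH strings against a book
    # description with a BERT-family sentence encoder. The candidate list and
    # the filtering by LCC subclass are hypothetical stand-ins for the
    # Project Gutenberg workflow discussed in the article.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    description = (
        "A naturalist's account of the voyage of the Beagle, with observations "
        "on geology, zoology, and the peoples of South America."
    )
    # Hypothetical LCSH candidates, already filtered to one LCC subclass.
    candidates = [
        "Natural history -- South America",
        "Voyages around the world",
        "Geology -- South America",
        "Cookery, French",
    ]

    doc_emb = model.encode(description, convert_to_tensor=True)
    cand_emb = model.encode(candidates, convert_to_tensor=True)
    scores = util.cos_sim(doc_emb, cand_emb)[0]

    for heading, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
        print(f"{score:.3f}  {heading}")
    ```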
  3. Post, C.; Henry, T.; Nunnally, K.; Lanham, C.: ¬A colossal catalog adventure : representing Indie video games and game creators in library catalogs (2023) 0.12
    0.11829795 = product of:
      0.1577306 = sum of:
        0.085297674 = weight(_text_:digital in 1182) [ClassicSimilarity], result of:
          0.085297674 = score(doc=1182,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.43143538 = fieldWeight in 1182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1182)
        0.03790077 = weight(_text_:library in 1182) [ClassicSimilarity], result of:
          0.03790077 = score(doc=1182,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.28758827 = fieldWeight in 1182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1182)
        0.034532152 = product of:
          0.069064304 = sum of:
            0.069064304 = weight(_text_:project in 1182) [ClassicSimilarity], result of:
              0.069064304 = score(doc=1182,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.32644984 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1182)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Significant changes in how video games are made and distributed require catalogers to critically reflect on existing approaches for representing games in library catalogs. Digital distribution channels are quickly supplanting releases of games on physical media while also facilitating a dramatic increase in independently made games that incorporate novel subject matter and styles of gameplay. This paper presents an action research project cataloging 18 independently made digital games from a small publisher, Choice of Games, considering how descriptive cataloging, subject cataloging, and name authority control for these works compare to those for mainstream video games.
  4. Oberbichler, S.; Boros, E.; Doucet, A.; Marjanen, J.; Pfanzelter, E.; Rautiainen, J.; Toivonen, H.; Tolonen, M.: Integrated interdisciplinary workflows for research on historical newspapers : perspectives from humanities scholars, computer scientists, and librarians (2022) 0.11
    0.105106875 = product of:
      0.1401425 = sum of:
        0.0963339 = weight(_text_:digital in 465) [ClassicSimilarity], result of:
          0.0963339 = score(doc=465,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4872566 = fieldWeight in 465, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=465)
        0.01914278 = weight(_text_:library in 465) [ClassicSimilarity], result of:
          0.01914278 = score(doc=465,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 465, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=465)
        0.024665821 = product of:
          0.049331643 = sum of:
            0.049331643 = weight(_text_:project in 465) [ClassicSimilarity], result of:
              0.049331643 = score(doc=465,freq=2.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.23317845 = fieldWeight in 465, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=465)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This article considers the interdisciplinary opportunities and challenges of working with digital cultural heritage, such as digitized historical newspapers, and proposes an integrated digital hermeneutics workflow to combine purely disciplinary research approaches from computer science, the humanities, and library work. Common interests and motivations of the above-mentioned disciplines have resulted in interdisciplinary projects and collaborations such as the NewsEye project, which is working on novel solutions for how digital heritage data is (re)searched, accessed, used, and analyzed. We argue that collaborations of different disciplines can benefit from a good understanding of the workflows and traditions of each of the disciplines involved but must find integrated approaches to successfully exploit the full potential of digitized sources. The paper furthermore provides insight into digital tools, methods, and hermeneutics in action, showing that integrated interdisciplinary research needs to build something in between the disciplines while respecting and understanding each other's expertise and expectations.
    Series
    JASIST special issue on digital humanities (DH): B. Infrastructures of DH
  5. Kord, A.: Evaluating metadata quality in LGBTQ+ digital community archives (2022) 0.10
    0.097339444 = product of:
      0.19467889 = sum of:
        0.13486744 = weight(_text_:digital in 1140) [ClassicSimilarity], result of:
          0.13486744 = score(doc=1140,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.6821592 = fieldWeight in 1140, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1140)
        0.059811447 = product of:
          0.11962289 = sum of:
            0.11962289 = weight(_text_:project in 1140) [ClassicSimilarity], result of:
              0.11962289 = score(doc=1140,freq=6.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.5654278 = fieldWeight in 1140, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1140)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This project evaluated metadata in digital LGBTQ+ community archives in order to determine its quality and how metadata quality affects the sustainability of digital community archives. The project takes a case study approach, using content analysis to evaluate the metadata quality of three LGBTQ+ digital archives: Transas City, The History Project, and ONE Archives. The analysis found that the metadata in LGBTQ+ digital community archives is inconsistent and often only meets the minimum requirements for quality metadata. Further, the study concluded that professional guidelines and practices for metadata strip away the personality and uniqueness that are key to the success and purpose of community archives.
  6. Morris, V.: Automated language identification of bibliographic resources (2020) 0.08
    0.0819426 = product of:
      0.1638852 = sum of:
        0.030628446 = weight(_text_:library in 5749) [ClassicSimilarity], result of:
          0.030628446 = score(doc=5749,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.23240642 = fieldWeight in 5749, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
        0.13325676 = sum of:
          0.07893063 = weight(_text_:project in 5749) [ClassicSimilarity], result of:
            0.07893063 = score(doc=5749,freq=2.0), product of:
              0.21156175 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.050121464 = queryNorm
              0.37308553 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
          0.054326132 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
            0.054326132 = score(doc=5749,freq=2.0), product of:
              0.17551683 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050121464 = queryNorm
              0.30952093 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
      0.5 = coord(2/4)
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
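    The article does not name the British Library's tooling in this abstract, so the sketch below uses the off-the-shelf langdetect package purely to show the shape of such a pipeline: predict a language for each title string, keep only high-confidence predictions (echoing the 99.7% confidence figure above), and map them to MARC language codes via a partial, hypothetical lookup table.

    ```python
    # Illustrative sketch (not the British Library's actual pipeline): assign a
    # language code to catalog records from their title strings, keeping only
    # high-confidence predictions.
    from langdetect import detect_langs, DetectorFactory

    DetectorFactory.seed = 0  # make langdetect deterministic

    # Partial, hypothetical mapping from ISO 639-1 codes to MARC language codes.
    ISO_TO_MARC = {"en": "eng", "fr": "fre", "de": "ger", "es": "spa"}

    records = [
        {"id": "001", "title": "A history of the English-speaking peoples"},
        {"id": "002", "title": "Les misérables"},
        {"id": "003", "title": "Kritik der reinen Vernunft"},
    ]

    for rec in records:
        best = detect_langs(rec["title"])[0]      # most probable language
        if best.prob >= 0.99 and best.lang in ISO_TO_MARC:
            rec["lang_marc"] = ISO_TO_MARC[best.lang]
        else:
            rec["lang_marc"] = None               # leave for human review
        print(rec["id"], rec["lang_marc"], round(best.prob, 3))
    ```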
    Date
    2. 3.2020 19:04:22
  7. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: ¬A user sensitive subject protection approach for book search service (2020) 0.08
    0.07873185 = product of:
      0.1049758 = sum of:
        0.060926907 = weight(_text_:digital in 5617) [ClassicSimilarity], result of:
          0.060926907 = score(doc=5617,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.3081681 = fieldWeight in 5617, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5617)
        0.027071979 = weight(_text_:library in 5617) [ClassicSimilarity], result of:
          0.027071979 = score(doc=5617,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 5617, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5617)
        0.016976917 = product of:
          0.033953834 = sum of:
            0.033953834 = weight(_text_:22 in 5617) [ClassicSimilarity], result of:
              0.033953834 = score(doc=5617,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.19345059 = fieldWeight in 5617, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5617)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server side of a digital library is becoming increasingly untrusted; thus, preventing the disclosure of users' book-query privacy has become a growing concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for privacy protection in book search, which requires no change to the book search algorithm running on the server side and no compromise to the accuracy of book search. Second, we present a privacy protection model for book search to formulate the constraints that ideal fake queries should satisfy, that is, (i) the feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) the privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
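    The two constraints named in the abstract (fake queries should resemble the real query in feature space while pointing at different sensitive subjects) can be illustrated with a toy scoring function. This is not the authors' algorithm: the feature vectors, subject labels, and weighting below are invented solely to show how the two constraints trade off when selecting cover queries.

    ```python
    # Toy illustration (not the authors' algorithm): pick fake book queries that
    # (i) resemble the real query in a feature space and (ii) belong to different
    # sensitive subjects, so they plausibly cover up the user's real interest.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    real_query = {"text": "coping with depression", "subject": "mental health",
                  "vec": np.array([0.9, 0.1, 0.2])}

    # Hypothetical candidate pool with precomputed feature vectors.
    candidates = [
        {"text": "history of rail travel",  "subject": "transport",     "vec": np.array([0.8, 0.2, 0.3])},
        {"text": "anxiety self-help",       "subject": "mental health", "vec": np.array([0.9, 0.1, 0.1])},
        {"text": "gardening for beginners", "subject": "hobbies",       "vec": np.array([0.7, 0.3, 0.2])},
    ]

    def score(cand, alpha=0.5):
        similarity = cosine(cand["vec"], real_query["vec"])                  # confusion effect
        exposure = 1.0 if cand["subject"] == real_query["subject"] else 0.0  # cover-up failure
        return alpha * similarity - (1 - alpha) * exposure

    fakes = sorted(candidates, key=score, reverse=True)[:2]
    print([c["text"] for c in fakes])   # plausible covers drawn from other subjects
    ```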
    Date
    6. 1.2020 17:22:25
  8. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.07
    0.07294262 = product of:
      0.14588524 = sum of:
        0.045942668 = weight(_text_:library in 5996) [ClassicSimilarity], result of:
          0.045942668 = score(doc=5996,freq=8.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.34860963 = fieldWeight in 5996, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=5996)
        0.09994257 = sum of:
          0.059197973 = weight(_text_:project in 5996) [ClassicSimilarity], result of:
            0.059197973 = score(doc=5996,freq=2.0), product of:
              0.21156175 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.050121464 = queryNorm
              0.27981415 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
          0.0407446 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
            0.0407446 = score(doc=5996,freq=2.0), product of:
              0.17551683 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050121464 = queryNorm
              0.23214069 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
      0.5 = coord(2/4)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  9. Hoeber, O.: ¬A study of visually linked keywords to support exploratory browsing in academic search (2022) 0.07
    0.07159196 = product of:
      0.14318392 = sum of:
        0.10339639 = weight(_text_:digital in 644) [ClassicSimilarity], result of:
          0.10339639 = score(doc=644,freq=8.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.52297866 = fieldWeight in 644, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=644)
        0.039787523 = weight(_text_:library in 644) [ClassicSimilarity], result of:
          0.039787523 = score(doc=644,freq=6.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.30190483 = fieldWeight in 644, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=644)
      0.5 = coord(2/4)
    
    Abstract
    While the search interfaces used by common academic digital libraries provide easy access to a wealth of peer-reviewed literature, their interfaces provide little support for exploratory browsing. When faced with a complex search task (such as one that requires knowledge discovery), exploratory browsing is an important first step in an exploratory search process. To more effectively support exploratory browsing, we have designed and implemented a novel academic digital library search interface (KLink Search) with two new features: visually linked keywords and an interactive workspace. To study the potential value of these features, we have conducted a controlled laboratory study with 32 participants, comparing KLink Search to a baseline digital library search interface modeled after that used by IEEE Xplore. Based on subjective opinions, objective performance, and behavioral data, we show the value of adding lightweight visual and interactive features to academic digital library search interfaces to support exploratory browsing.
  10. Organisciak, P.; Schmidt, B.M.; Downie, J.S.: Giving shape to large digital libraries through exploratory data analysis (2022) 0.07
    0.069286 = product of:
      0.138572 = sum of:
        0.11560067 = weight(_text_:digital in 473) [ClassicSimilarity], result of:
          0.11560067 = score(doc=473,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.58470786 = fieldWeight in 473, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
        0.022971334 = weight(_text_:library in 473) [ClassicSimilarity], result of:
          0.022971334 = score(doc=473,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
      0.5 = coord(2/4)
    
    Abstract
    The emergence of large multi-institutional digital libraries has opened the door to aggregate-level examinations of the published word. Such large-scale analysis offers a new way to pursue traditional problems in the humanities and social sciences, using digital methods to ask routine questions of large corpora. However, inquiry into multiple centuries of books is constrained by the burdens of scale, where statistical inference is technically complex and limited by hurdles to access and flexibility. This work examines the role that exploratory data analysis and visualization tools may play in understanding large bibliographic datasets. We present one such tool, HathiTrust+Bookworm, which allows multifaceted exploration of the multimillion work HathiTrust Digital Library, and center it in the broader space of scholarly tools for exploratory data analysis.
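    To give a flavor of the kind of exploratory analysis discussed here, the sketch below assumes a small extract of per-year term counts in a hypothetical CSV (not HathiTrust+Bookworm's actual data format) and plots one term's relative frequency over time with pandas.

    ```python
    # Minimal EDA sketch over a hypothetical extract of per-year term counts
    # (columns: year, term, count, total_words). This is not the Bookworm data
    # format; it only illustrates the kind of aggregate view the paper discusses.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("term_counts_by_year.csv")          # hypothetical extract
    df["rel_freq"] = df["count"] / df["total_words"]

    term = df[df["term"] == "telegraph"].sort_values("year")
    term.plot(x="year", y="rel_freq", legend=False,
              title="Relative frequency of 'telegraph' by publication year")
    plt.ylabel("relative frequency")
    plt.tight_layout()
    plt.show()
    ```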
    Series
    JASIST special issue on digital humanities (DH): C. Methodological innovations, challenges, and new interest in DH
  11. McElfresh, L.K.: Creator name standardization using faceted vocabularies in the BTAA geoportal : Michigan State University libraries digital repository case study (2023) 0.07
    0.06706676 = product of:
      0.13413352 = sum of:
        0.085297674 = weight(_text_:digital in 1178) [ClassicSimilarity], result of:
          0.085297674 = score(doc=1178,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.43143538 = fieldWeight in 1178, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1178)
        0.048835836 = product of:
          0.09767167 = sum of:
            0.09767167 = weight(_text_:project in 1178) [ClassicSimilarity], result of:
              0.09767167 = score(doc=1178,freq=4.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.4616698 = fieldWeight in 1178, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Digital libraries incorporate metadata from varied sources, ranging from traditional catalog data to author-supplied descriptions. The Big Ten Academic Alliance (BTAA) Geoportal unites geospatial resources from the libraries of the BTAA, compounding the variability of metadata. The BTAA Geospatial Information Network's (BTAA GIN) Metadata Committee works to ensure completeness and consistency of metadata in the Geoportal, including a project to standardize the contents of the Creator field. The project comprises an OpenRefine data cleaning phase; evaluation of controlled vocabularies for semiautomated matching via OpenRefine reconciliation; and development and testing of a best practices guide for application of a controlled vocabulary.
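    OpenRefine's clustering and reconciliation are interactive operations, so the sketch below only approximates the same idea in code: normalize raw Creator strings and fuzzy-match them against a small, hypothetical controlled vocabulary, flagging low-confidence cases for manual review.

    ```python
    # Sketch approximating the Creator-field cleanup described above: normalize
    # raw creator strings and fuzzy-match them against a controlled vocabulary.
    # The vocabulary and threshold are hypothetical; real reconciliation
    # (e.g. in OpenRefine) would be far more careful.
    import difflib

    controlled_vocabulary = [
        "United States. Geological Survey",
        "Michigan State University. Libraries",
        "Big Ten Academic Alliance",
    ]

    raw_creators = [
        "U.S. Geological Survey",
        "Michigan State Univ. Libraries",
        "BTAA",
    ]

    def normalize(name: str) -> str:
        return " ".join(name.replace(".", " ").replace(",", " ").lower().split())

    norm_vocab = {normalize(v): v for v in controlled_vocabulary}

    for raw in raw_creators:
        matches = difflib.get_close_matches(normalize(raw), norm_vocab, n=1, cutoff=0.6)
        if matches:
            print(f"{raw!r:35} -> {norm_vocab[matches[0]]!r}")
        else:
            print(f"{raw!r:35} -> NO MATCH (manual review)")
    ```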
  12. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.07
    0.06534804 = product of:
      0.08713072 = sum of:
        0.043081827 = weight(_text_:digital in 950) [ClassicSimilarity], result of:
          0.043081827 = score(doc=950,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.21790776 = fieldWeight in 950, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.027071979 = weight(_text_:library in 950) [ClassicSimilarity], result of:
          0.027071979 = score(doc=950,freq=4.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.2054202 = fieldWeight in 950, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.016976917 = product of:
          0.033953834 = sum of:
            0.033953834 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
              0.033953834 = score(doc=950,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.19345059 = fieldWeight in 950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=950)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Purpose: With the shift to an information-based society and the decentralization of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload" by performing a concept analysis using Rodgers' approach.
    Design/methodology/approach: A concept analysis using Rodgers' approach was conducted on a corpus of documents published between 2010 and September 2020. One surrogate for "information overload", namely "cognitive overload", was identified. The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science and were retrieved from three databases: the Association for Computing Machinery (ACM) Digital Library, SCOPUS, and Library and Information Science Abstracts (LISA).
    Findings: The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations, and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals, and the information environment. In terms of manifestations, information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed the authors to provide a definition of information overload.
    Originality/value: Through this concept analysis, the authors clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
  13. Boczkowski, P.; Mitchelstein, E.: ¬The digital environment : How we live, learn, work, and play now (2021) 0.06
    0.06394527 = product of:
      0.12789054 = sum of:
        0.11430901 = weight(_text_:digital in 1003) [ClassicSimilarity], result of:
          0.11430901 = score(doc=1003,freq=22.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.57817465 = fieldWeight in 1003, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=1003)
        0.013581533 = product of:
          0.027163066 = sum of:
            0.027163066 = weight(_text_:22 in 1003) [ClassicSimilarity], result of:
              0.027163066 = score(doc=1003,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.15476047 = fieldWeight in 1003, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1003)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Increasingly we live through our personal screens; we work, play, socialize, and learn digitally. The shift to remote everything during the pandemic was another step in a decades-long march toward the digitization of everyday life made possible by innovations in media, information, and communication technology. In The Digital Environment, Pablo Boczkowski and Eugenia Mitchelstein offer a new way to understand the role of the digital in our daily lives, calling on us to turn our attention from our discrete devices and apps to the array of artifacts and practices that make up the digital environment that envelops every aspect of our social experience. Boczkowski and Mitchelstein explore a series of issues raised by the digital takeover of everyday life, drawing on interviews with a variety of experts. They show how existing inequities of gender, race, ethnicity, education, and class are baked into the design and deployment of technology, and describe emancipatory practices that counter this--including the use of Twitter as a platform for activism through such hashtags as #BlackLivesMatter and #MeToo. They discuss the digitization of parenting, schooling, and dating--noting, among other things, that today we can both begin and end relationships online. They describe how digital media shape our consumption of sports, entertainment, and news, and consider the dynamics of political campaigns, disinformation, and social activism. Finally, they report on developments in three areas that will be key to our digital future: data science, virtual reality, and space exploration.
    Argues for a holistic view of the digital environment in which many of us now live, as neither determined by the features of technology nor uniformly negative for society.
    Content
    1. Three Environments, One Life -- Part I: Foundations -- 2. Mediatization -- 3. Algorithms -- 4. Race and Ethnicity -- 5. Gender -- Part II: Institutions -- 6. Parenting -- 7. Schooling -- 8. Working -- 9. Dating -- Part III: Leisure -- 10. Sports -- 11. Televised Entertainment -- 12. News -- Part IV: Politics -- 13. Misinformation and Disinformation -- 14. Electoral Campaigns -- 15. Activism -- Part V: Innovations -- 16. Data Science -- 17. Virtual Reality -- 18. Space Exploration -- 19. Bricks and Cracks in the Digital Environment
    Date
    22. 6.2023 18:25:18
    LCSH
    Digital media / Social aspects
    Subject
    Digital media / Social aspects
  14. Wagner, E.: Über Impfstoffe zur digitalen Identität? (2020) 0.06
    0.060058743 = product of:
      0.120117486 = sum of:
        0.086163655 = weight(_text_:digital in 5846) [ClassicSimilarity], result of:
          0.086163655 = score(doc=5846,freq=2.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4358155 = fieldWeight in 5846, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.078125 = fieldNorm(doc=5846)
        0.033953834 = product of:
          0.06790767 = sum of:
            0.06790767 = weight(_text_:22 in 5846) [ClassicSimilarity], result of:
              0.06790767 = score(doc=5846,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.38690117 = fieldWeight in 5846, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5846)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The "Digital Identity Alliance", funded by Bill Gates, Microsoft, Accenture, and the Rockefeller Foundation, among others, aims to link digital vaccination certificates to a global biometric digital identity that persists for life.
    Date
    4. 5.2020 17:22:40
  15. Yip, J.C.; Lee, K.J.; Lee, J.H.: Design partnerships for participatory librarianship : a conceptual model for understanding librarians co-designing with digital youth (2020) 0.06
    0.05773834 = product of:
      0.11547668 = sum of:
        0.0963339 = weight(_text_:digital in 5967) [ClassicSimilarity], result of:
          0.0963339 = score(doc=5967,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4872566 = fieldWeight in 5967, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5967)
        0.01914278 = weight(_text_:library in 5967) [ClassicSimilarity], result of:
          0.01914278 = score(doc=5967,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.14525402 = fieldWeight in 5967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5967)
      0.5 = coord(2/4)
    
    Abstract
    Libraries play a central role for youth and digital learning. As libraries transition to learning spaces, youth librarians can engage in aspects of democratic design that empowers youth. Participatory design (PD) is a user-centered design method that can support librarians in the democratic development of digital learning spaces. However, while PD has been used in libraries, we have little knowledge of how youth librarians can act as codesign partners. We need a conceptual model to understand the role of youth librarians in codesign, and how their experiences are integrated into youth design partnerships. To generate this model, we examine a case study of the evolutionary process of a single librarian and the development of a library system's learning activities through PD. Using the idea of equal design partnerships, we analyzed video recordings and stakeholder interviews on how children (ages 7-11) worked together with a librarian to develop new digital learning activities. Our discussion focuses on the development of a participatory librarian design conceptual model that situates librarians as design partners with youth. The article concludes with recommendations for integrating PD methods into libraries to create digital learning spaces and suggestions for moving forward with this design perspective.
  16. Isaac, A.; Raemy, J.A.; Meijers, E.; Valk, S. De; Freire, N.: Metadata aggregation via linked data : results of the Europeana Common Culture project (2020) 0.06
    0.05748579 = product of:
      0.11497158 = sum of:
        0.073112294 = weight(_text_:digital in 39) [ClassicSimilarity], result of:
          0.073112294 = score(doc=39,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.36980176 = fieldWeight in 39, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=39)
        0.041859288 = product of:
          0.083718576 = sum of:
            0.083718576 = weight(_text_:project in 39) [ClassicSimilarity], result of:
              0.083718576 = score(doc=39,freq=4.0), product of:
                0.21156175 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.050121464 = queryNorm
                0.39571697 = fieldWeight in 39, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.046875 = fieldNorm(doc=39)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Digital cultural heritage resources are widely available on the web through the digital libraries of heritage institutions. To address the difficulties of discoverability in cultural heritage, the common practice is metadata aggregation, where centralized efforts like Europeana facilitate discoverability by collecting the resources' metadata. We present the results of the linked data aggregation task conducted within the Europeana Common Culture project, which attempted an innovative approach to aggregation based on linked data made available by cultural heritage institutions. This task ran for one year with the participation of eleven organizations, involving the three member roles of the Europeana network: data providers, intermediary aggregators, and the central aggregation hub, Europeana. We report on the challenges faced by data providers, the standards and specifications applied, and the resulting aggregated metadata.
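    As a rough illustration of the aggregation-by-linked-data model (providers expose RDF, aggregators harvest and query it), the sketch below loads a provider dataset with rdflib and extracts object titles. The dataset URL is a placeholder, and the properties used (edm:ProvidedCHO, dc:title) reflect the Europeana Data Model as commonly published, not necessarily the project's exact pipeline.

    ```python
    # Rough sketch of the aggregation-by-linked-data idea: harvest a provider's
    # RDF dataset and extract basic descriptive metadata. The URL is a
    # placeholder; the EDM/Dublin Core terms are the commonly used ones, not a
    # statement about the Europeana Common Culture pipeline itself.
    from rdflib import Graph, Namespace
    from rdflib.namespace import DC, RDF

    EDM = Namespace("http://www.europeana.eu/schemas/edm/")

    g = Graph()
    g.parse("https://example.org/provider/dataset.ttl", format="turtle")  # placeholder URL

    for cho in g.subjects(RDF.type, EDM.ProvidedCHO):
        for title in g.objects(cho, DC.title):
            print(cho, "->", title)
    ```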
  17. Gartner, R.: Metadata in the digital library : building an integrated strategy with XML (2021) 0.06
    0.056932893 = product of:
      0.113865785 = sum of:
        0.08573176 = weight(_text_:digital in 732) [ClassicSimilarity], result of:
          0.08573176 = score(doc=732,freq=22.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.433631 = fieldWeight in 732, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0234375 = fieldNorm(doc=732)
        0.028134026 = weight(_text_:library in 732) [ClassicSimilarity], result of:
          0.028134026 = score(doc=732,freq=12.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.21347894 = fieldWeight in 732, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0234375 = fieldNorm(doc=732)
      0.5 = coord(2/4)
    
    Abstract
    This book provides a practical introduction to metadata for the digital library, describing in detail how to implement a strategic approach which will enable complex digital objects to be discovered, delivered and preserved in the short- and long-term.
    The range of metadata needed to run a digital library and preserve its collections in the long term is much more extensive and complicated than anything in its traditional counterpart. It includes the same 'descriptive' information which guides users to the resources they require but must supplement this with comprehensive 'administrative' metadata: this encompasses technical details of the files that make up its collections, the documentation of complex intellectual property rights and the extensive set needed to support its preservation in the long-term. To accommodate all of this requires the use of multiple metadata standards, all of which have to be brought together into a single integrated whole.
    Metadata in the Digital Library is a complete guide to building a digital library metadata strategy from scratch, using established metadata standards bound together by the markup language XML. The book introduces the reader to the theory of metadata and shows how it can be applied in practice. It lays out the basic principles that should underlie any metadata strategy, including its relation to such fundamentals as the digital curation lifecycle, and demonstrates how they should be put into effect. It introduces the XML language and the key standards for each type of metadata, including Dublin Core and MODS for descriptive metadata and PREMIS for its administrative and preservation counterpart. Finally, the book shows how these can all be integrated using the packaging standard METS. Two case studies from the Warburg Institute in London show how the strategy can be implemented in a working environment. The strategy laid out in this book will ensure that a digital library's metadata will support all of its operations, be fully interoperable with others and enable its long-term preservation. It assumes no prior knowledge of metadata, XML or any of the standards that it covers. It provides both an introduction to best practices in digital library metadata and a manual for their practical implementation.
    Content
    Contents: 1 Introduction, Aims and Definitions -- 1.1 Origins -- 1.2 From information science to libraries -- 1.3 The central place of metadata -- 1.4 The book in outline -- 2 Metadata Basics -- 2.1 Introduction -- 2.2 Three types of metadata -- 2.2.1 Descriptive metadata -- 2.2.2 Administrative metadata -- 2.2.3 Structural metadata -- 2.3 The core components of metadata -- 2.3.1 Syntax -- 2.3.2 Semantics -- 2.3.3 Content rules -- 2.4 Metadata standards -- 2.5 Conclusion -- 3 Planning a Metadata Strategy: Basic Principles -- 3.1 Introduction -- 3.2 Principle 1: Support all stages of the digital curation lifecycle -- 3.3 Principle 2: Support the long-term preservation of the digital object -- 3.4 Principle 3: Ensure interoperability -- 3.5 Principle 4: Control metadata content wherever possible -- 3.6 Principle 5: Ensure software independence -- 3.7 Principle 6: Impose a logical system of identifiers -- 3.8 Principle 7: Use standards whenever possible -- 3.9 Principle 8: Ensure the integrity of the metadata itself -- 3.10 Summary: the basic principles of a metadata strategy -- 4 Planning a Metadata Strategy: Applying the Basic Principles -- 4.1 Introduction -- 4.2 Initial steps: standards as a foundation -- 4.2.1 'Off-the-shelf' standards -- 4.2.2 Mapping out an architecture and serialising it into a standard -- 4.2.3 Devising a local metadata scheme -- 4.2.4 How standards support the basic principles -- 4.3 Identifiers: everything in its place -- 5 XML: The Syntactical Foundation of Metadata -- 5.1 Introduction -- 5.2 What XML looks like -- 5.3 XML schemas -- 5.4 Namespaces -- 5.5 Creating and editing XML -- 5.6 Transforming XML -- 5.7 Why use XML? -- 6 METS: The Metadata Package -- 6.1 Introduction -- 6.2 Why use METS?
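    To make the "multiple standards bound together by METS" approach concrete, here is a minimal sketch that assembles a skeletal METS document with a MODS descriptive section, a PREMIS administrative section, a file inventory, and a structural map using only the Python standard library. It is a bare-bones illustration of the packaging pattern the book describes, not an example taken from it; the namespace URIs are the usual ones for METS, MODS, and PREMIS.

    ```python
    # Minimal sketch of the packaging pattern described above: a METS wrapper
    # binding MODS (descriptive) and PREMIS (administrative/preservation)
    # metadata for one digital object. Bare-bones and illustrative only.
    import xml.etree.ElementTree as ET

    METS = "http://www.loc.gov/METS/"
    MODS = "http://www.loc.gov/mods/v3"
    PREMIS = "http://www.loc.gov/premis/v3"
    for prefix, uri in [("mets", METS), ("mods", MODS), ("premis", PREMIS)]:
        ET.register_namespace(prefix, uri)

    mets = ET.Element(f"{{{METS}}}mets")

    # Descriptive metadata: a MODS title wrapped in a dmdSec.
    dmd = ET.SubElement(mets, f"{{{METS}}}dmdSec", ID="dmd1")
    mods = ET.SubElement(ET.SubElement(ET.SubElement(
        dmd, f"{{{METS}}}mdWrap", MDTYPE="MODS"), f"{{{METS}}}xmlData"), f"{{{MODS}}}mods")
    title = ET.SubElement(ET.SubElement(mods, f"{{{MODS}}}titleInfo"), f"{{{MODS}}}title")
    title.text = "Example digital object"

    # Administrative metadata: a PREMIS object entry in an amdSec/techMD.
    amd = ET.SubElement(mets, f"{{{METS}}}amdSec", ID="amd1")
    premis_wrap = ET.SubElement(ET.SubElement(ET.SubElement(
        amd, f"{{{METS}}}techMD", ID="tech1"),
        f"{{{METS}}}mdWrap", MDTYPE="PREMIS:OBJECT"), f"{{{METS}}}xmlData")
    ET.SubElement(premis_wrap, f"{{{PREMIS}}}object")

    # File inventory and a trivial structural map tying everything together.
    file_sec = ET.SubElement(mets, f"{{{METS}}}fileSec")
    grp = ET.SubElement(file_sec, f"{{{METS}}}fileGrp", USE="master")
    ET.SubElement(grp, f"{{{METS}}}file", ID="file1", MIMETYPE="image/tiff")
    struct = ET.SubElement(mets, f"{{{METS}}}structMap")
    div = ET.SubElement(struct, f"{{{METS}}}div", DMDID="dmd1", ADMID="amd1")
    ET.SubElement(div, f"{{{METS}}}fptr", FILEID="file1")

    print(ET.tostring(mets, encoding="unicode"))
    ```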
  18. Marcondes, C.H.: Towards a vocabulary to implement culturally relevant relationships between digital collections in heritage institutions (2020) 0.06
    0.056655407 = product of:
      0.113310814 = sum of:
        0.0963339 = weight(_text_:digital in 5757) [ClassicSimilarity], result of:
          0.0963339 = score(doc=5757,freq=10.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4872566 = fieldWeight in 5757, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5757)
        0.016976917 = product of:
          0.033953834 = sum of:
            0.033953834 = weight(_text_:22 in 5757) [ClassicSimilarity], result of:
              0.033953834 = score(doc=5757,freq=2.0), product of:
                0.17551683 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050121464 = queryNorm
                0.19345059 = fieldWeight in 5757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5757)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Cultural heritage institutions are publishing their digital collections over the web as LOD. This is a new step in the patrimonialization and curatorial processes developed by such institutions. Many of these collections are thematically superimposed and complementary. Frequently, objects in these collections present culturally relevant relationships, such as a book about a painting, or a draft or sketch of a famous painting. LOD technology enables such heritage records to be interlinked, achieving interoperability and adding value to digital collections, thus empowering heritage institutions. An aim of this research is to characterize such culturally relevant relationships and organize them in a vocabulary. Use cases and examples of relationships between objects, suggested by curators or mentioned in the literature and in conceptual models such as FRBR/LRM, CIDOC CRM, and RiC-CM, were collected and used as examples of, or inspiration for, culturally relevant relationships. The relationships identified were collated and compared to identify those with the same or similar meaning, then synthesized and normalized. A set of thirty-three culturally relevant relationships is identified and formalized as a LOD property vocabulary to be used by digital curators to interlink digital collections. The results presented are provisional and a starting point to be discussed, tested, and enhanced.
    Date
    4. 3.2020 14:22:41
  19. Tharani, K.: Just KOS! : enriching digital collections with hypertexts to enhance accessibility of non-western knowledge materials in libraries (2020) 0.06
    0.056257617 = product of:
      0.11251523 = sum of:
        0.0895439 = weight(_text_:digital in 5896) [ClassicSimilarity], result of:
          0.0895439 = score(doc=5896,freq=6.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.4529128 = fieldWeight in 5896, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=5896)
        0.022971334 = weight(_text_:library in 5896) [ClassicSimilarity], result of:
          0.022971334 = score(doc=5896,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.17430481 = fieldWeight in 5896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=5896)
      0.5 = coord(2/4)
    
    Abstract
    The knowledge organization systems (KOS) in use at libraries are social constructs that were conceived in the Euro-American context to organize and retrieve Western knowledge materials. As social constructs of the West, the effectiveness of library KOSs is limited when it comes to the organization and retrieval of non-Western knowledge materials. How can librarians respond if asked to make non-Western knowledge materials as accessible as Western materials in their libraries? The accessibility of Western and non-Western knowledge materials in libraries need not be an either-or proposition. By way of a case study, a practical way forward is presented by which librarians can use their professional agency and existing digital technologies to exercise social justice. More specifically, I demonstrate the design and development of a specialized KOS that enriches digital collections with hypertext features to enhance the accessibility of non-Western knowledge materials in libraries.
  20. Nicholson, J.; Lake, S.: Implementation of FAST in two digital repositories : breaking silos, unifying subject practices (2023) 0.06
    0.05604878 = product of:
      0.11209756 = sum of:
        0.085297674 = weight(_text_:digital in 1174) [ClassicSimilarity], result of:
          0.085297674 = score(doc=1174,freq=4.0), product of:
            0.19770671 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.050121464 = queryNorm
            0.43143538 = fieldWeight in 1174, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1174)
        0.026799891 = weight(_text_:library in 1174) [ClassicSimilarity], result of:
          0.026799891 = score(doc=1174,freq=2.0), product of:
            0.1317883 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.050121464 = queryNorm
            0.20335563 = fieldWeight in 1174, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1174)
      0.5 = coord(2/4)
    
    Abstract
    This study traces evolving approaches to the use of the FAST (Faceted Application of Subject Terminology) in digital repositories at Atkins Library at the University of North Carolina at Charlotte, where changes in staffing, the launch of an institutional repository, and efforts to address problematic language in metadata have necessitated a shift from an in-depth indexing approach to FAST to a lightweight "tagging" model more suited to present-day metadata needs. Despite the subject schema's apparent simplicity, Atkins' experience with FAST has shown that it still requires significant time, effort, and experimentation in order to deploy it to best effect.

Languages

  • e 346
  • d 55
  • m 1
  • pt 1

Types

  • a 371
  • el 58
  • m 16
  • p 8
  • s 3
  • x 1

Subjects