Search (419 results, page 1 of 21)

  • year_i:[2020 TO 2030}
  1. Lynch, J.D.; Gibson, J.; Han, M.-J.: Analyzing and normalizing type metadata for a large aggregated digital library (2020) 0.17
    0.16815662 = product of:
      0.22420883 = sum of:
        0.12250038 = weight(_text_:digital in 5720) [ClassicSimilarity], result of:
          0.12250038 = score(doc=5720,freq=6.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.5283983 = fieldWeight in 5720, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5720)
        0.04444293 = weight(_text_:library in 5720) [ClassicSimilarity], result of:
          0.04444293 = score(doc=5720,freq=4.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.28758827 = fieldWeight in 5720, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5720)
        0.057265528 = product of:
          0.114531055 = sum of:
            0.114531055 = weight(_text_:project in 5720) [ClassicSimilarity], result of:
              0.114531055 = score(doc=5720,freq=4.0), product of:
                0.24808002 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.05877307 = queryNorm
                0.4616698 = fieldWeight in 5720, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5720)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
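    The score breakdowns shown for each result are standard Lucene ClassicSimilarity (TF-IDF) explanations. Each leaf combines as follows, with N = maxDocs, n_t = docFreq and f_t = termFreq:

      \mathrm{tf}(t) = \sqrt{f_t}, \qquad \mathrm{idf}(t) = 1 + \ln\frac{N}{n_t + 1}
      \mathrm{score} = \mathrm{coord} \cdot \sum_t \underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\mathrm{queryWeight}} \cdot \underbrace{\mathrm{tf}(t)\,\mathrm{idf}(t)\,\mathrm{fieldNorm}}_{\mathrm{fieldWeight}}

    Checking the first "digital" leaf above reproduces the printed values: idf = 1 + ln(44218/2327) = 3.944552; queryWeight = 3.944552 × 0.05877307 = 0.23183343; fieldWeight = sqrt(6) × 3.944552 × 0.0546875 = 0.5283983; their product is 0.12250038. The three leaves sum to 0.22420883, which times coord(3/4) = 0.75 gives the displayed 0.16815662.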
    
    Abstract
    The Illinois Digital Heritage Hub (IDHH) gathers and enhances metadata from contributing institutions around the state of Illinois and provides this metadata to the Digital Public Library of America (DPLA) for greater access. The IDHH helps contributors shape their metadata to the standards recommended and required by the DPLA in part by analyzing and enhancing aggregated metadata. In late 2018, the IDHH undertook a project to address a particularly problematic field, Type metadata. This paper walks through the project, detailing the process of gathering and analyzing metadata using the DPLA API and OpenRefine, data remediation through XSL transformations in conjunction with local improvements by contributing institutions, and the DPLA ingestion system's quality controls.
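    The gathering-and-analysis step described above can be reproduced against the public DPLA API. A minimal Python sketch, assuming a valid API key (the key and the provider filter below are placeholders), that tallies sourceResource.type values the way a Type-remediation project would:

      import collections
      import requests

      API = "https://api.dp.la/v2/items"
      params = {
          "api_key": "YOUR_KEY",                              # placeholder key
          "provider.name": "Illinois Digital Heritage Hub",   # illustrative filter
          "page_size": 500,
          "page": 1,
      }
      type_counts = collections.Counter()
      while True:
          docs = requests.get(API, params=params, timeout=30).json().get("docs", [])
          if not docs:
              break
          for doc in docs:
              value = doc.get("sourceResource", {}).get("type", "MISSING")
              # type may be a single string or a list; count each value
              type_counts.update(value if isinstance(value, list) else [value])
          params["page"] += 1
      for value, n in type_counts.most_common():
          print(f"{n:6d}  {value}")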
  2. Chou, C.; Chu, T.: ¬An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.16
    0.15878843 = product of:
      0.2117179 = sum of:
        0.10002114 = weight(_text_:digital in 1139) [ClassicSimilarity], result of:
          0.10002114 = score(doc=1139,freq=4.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.43143538 = fieldWeight in 1139, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
        0.054431252 = weight(_text_:library in 1139) [ClassicSimilarity], result of:
          0.054431252 = score(doc=1139,freq=6.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.3522223 = fieldWeight in 1139, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1139)
        0.057265528 = product of:
          0.114531055 = sum of:
            0.114531055 = weight(_text_:project in 1139) [ClassicSimilarity], result of:
              0.114531055 = score(doc=1139,freq=4.0), product of:
                0.24808002 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.05877307 = queryNorm
                0.4616698 = fieldWeight in 1139, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1139)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural Language Processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used in machine-assisted indexing of the Project Gutenberg collection, by suggesting Library of Congress subject headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
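    The abstract does not spell out the pipeline; as one hedged illustration of machine-assisted suggestion, a sentence-BERT encoder can rank candidate LCSH headings against a book's text, with the candidate list pre-filtered by LCC subclass. The model name, headings and book text below are illustrative, not taken from the study:

      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("all-MiniLM-L6-v2")   # a BERT-family encoder

      # Candidate LCSH headings, pre-filtered here to LCC subclass QA (Mathematics)
      candidates = [
          "Mathematics -- Study and teaching",
          "Algebra",
          "Geometry, Non-Euclidean",
      ]
      book_text = "An elementary treatise on the geometry of the triangle and the circle."

      emb_book = model.encode(book_text, convert_to_tensor=True)
      emb_cand = model.encode(candidates, convert_to_tensor=True)
      scores = util.cos_sim(emb_book, emb_cand)[0]      # cosine similarity per heading
      ranked = sorted(zip(candidates, scores.tolist()), key=lambda p: -p[1])
      for heading, score in ranked:
          print(f"{score:.3f}  {heading}")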
  3. Post, C.; Henry, T.; Nunnally, K.; Lanham, C.: ¬A colossal catalog adventure : representing Indie video games and game creators in library catalogs (2023) 0.14
    0.13871768 = product of:
      0.18495691 = sum of:
        0.10002114 = weight(_text_:digital in 1182) [ClassicSimilarity], result of:
          0.10002114 = score(doc=1182,freq=4.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.43143538 = fieldWeight in 1182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1182)
        0.04444293 = weight(_text_:library in 1182) [ClassicSimilarity], result of:
          0.04444293 = score(doc=1182,freq=4.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.28758827 = fieldWeight in 1182, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1182)
        0.04049284 = product of:
          0.08098568 = sum of:
            0.08098568 = weight(_text_:project in 1182) [ClassicSimilarity], result of:
              0.08098568 = score(doc=1182,freq=2.0), product of:
                0.24808002 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.05877307 = queryNorm
                0.32644984 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1182)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Significant changes in how video games are made and distributed require catalogers to critically reflect on existing approaches for representing games in library catalogs. Digital distribution channels are quickly supplanting releases of games on physical media while also facilitating a dramatic increase in independently made games that incorporate novel subject matter and styles of gameplay. This paper presents an action research project cataloging 18 independently made digital games from a small publisher, Choice of Games, considering how descriptive cataloging, subject cataloging, and name authority control for these works compare to mainstream video games.
  4. Marcondes, C.H.: Towards a vocabulary to implement culturally relevant relationships between digital collections in heritage institutions (2020) 0.13
    0.13208078 = product of:
      0.1761077 = sum of:
        0.043237977 = product of:
          0.12971392 = sum of:
            0.12971392 = weight(_text_:objects in 5757) [ClassicSimilarity], result of:
              0.12971392 = score(doc=5757,freq=4.0), product of:
                0.31238306 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05877307 = queryNorm
                0.41523993 = fieldWeight in 5757, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5757)
          0.33333334 = coord(1/3)
        0.112962365 = weight(_text_:digital in 5757) [ClassicSimilarity], result of:
          0.112962365 = score(doc=5757,freq=10.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.4872566 = fieldWeight in 5757, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5757)
        0.01990735 = product of:
          0.0398147 = sum of:
            0.0398147 = weight(_text_:22 in 5757) [ClassicSimilarity], result of:
              0.0398147 = score(doc=5757,freq=2.0), product of:
                0.20581327 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05877307 = queryNorm
                0.19345059 = fieldWeight in 5757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5757)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Cultural heritage institutions are publishing their digital collections over the web as LOD. This is a new step in the patrimonialization and curatorial processes developed by such institutions. Many of these collections overlap thematically and complement one another. Frequently, objects in these collections present culturally relevant relationships, such as a book about a painting, or a draft or sketch of a famous painting. LOD technology enables such heritage records to be interlinked, achieving interoperability and adding value to digital collections, thus empowering heritage institutions. An aim of this research is to characterize such culturally relevant relationships and organize them in a vocabulary. Use cases and examples of relationships between objects, suggested by curators or mentioned in the literature and in conceptual models such as FRBR/LRM, CIDOC CRM and RiC-CM, were collected and used as examples of, or inspiration for, culturally relevant relationships. The relationships identified were collated and compared to identify those with the same or similar meaning, then synthesized and normalized. A set of thirty-three culturally relevant relationships is identified and formalized as a LOD property vocabulary to be used by digital curators to interlink digital collections. The results presented are provisional and a starting point to be discussed, tested, and enhanced.
    Date
    4. 3.2020 14:22:41
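    A property vocabulary of this kind is typically published as LOD and applied as triples that interlink records across collections. A minimal sketch with rdflib; the namespace, property name and record URIs are placeholders, not the paper's actual vocabulary:

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF, RDFS

      CRR = Namespace("http://example.org/crr/")   # placeholder vocabulary namespace

      g = Graph()
      g.bind("crr", CRR)

      # Declare one culturally relevant relationship as an RDF property
      g.add((CRR.isSketchOf, RDF.type, RDF.Property))
      g.add((CRR.isSketchOf, RDFS.label, Literal("is sketch of", lang="en")))

      # Interlink two hypothetical heritage records from different collections
      sketch = URIRef("http://example.org/archiveA/drawing/42")
      painting = URIRef("http://example.org/museumB/painting/7")
      g.add((sketch, CRR.isSketchOf, painting))

      print(g.serialize(format="turtle"))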
  5. Oberbichler, S.; Boros, E.; Doucet, A.; Marjanen, J.; Pfanzelter, E.; Rautiainen, J.; Toivonen, H.; Tolonen, M.: Integrated interdisciplinary workflows for research on historical newspapers : perspectives from humanities scholars, computer scientists, and librarians (2022) 0.12
    0.123249665 = product of:
      0.16433288 = sum of:
        0.112962365 = weight(_text_:digital in 465) [ClassicSimilarity], result of:
          0.112962365 = score(doc=465,freq=10.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.4872566 = fieldWeight in 465, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=465)
        0.022447068 = weight(_text_:library in 465) [ClassicSimilarity], result of:
          0.022447068 = score(doc=465,freq=2.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.14525402 = fieldWeight in 465, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=465)
        0.028923457 = product of:
          0.057846915 = sum of:
            0.057846915 = weight(_text_:project in 465) [ClassicSimilarity], result of:
              0.057846915 = score(doc=465,freq=2.0), product of:
                0.24808002 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.05877307 = queryNorm
                0.23317845 = fieldWeight in 465, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=465)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This article considers the interdisciplinary opportunities and challenges of working with digital cultural heritage, such as digitized historical newspapers, and proposes an integrated digital hermeneutics workflow to combine purely disciplinary research approaches from computer science, the humanities, and library work. Common interests and motivations of these disciplines have resulted in interdisciplinary projects and collaborations such as the NewsEye project, which is working on novel solutions for how digital heritage data are (re)searched, accessed, used, and analyzed. We argue that collaborations between different disciplines can benefit from a good understanding of the workflows and traditions of each discipline involved but must find integrated approaches to successfully exploit the full potential of digitized sources. The paper furthermore provides insight into digital tools, methods, and hermeneutics in action, showing that integrated interdisciplinary research needs to build something in between the disciplines while respecting and understanding each other's expertise and expectations.
    Series
    JASIST special issue on digital humanities (DH): B. Infrastructures of DH
  6. Kord, A.: Evaluating metadata quality in LGBTQ+ digital community archives (2022) 0.11
    0.11414149 = product of:
      0.22828297 = sum of:
        0.1581473 = weight(_text_:digital in 1140) [ClassicSimilarity], result of:
          0.1581473 = score(doc=1140,freq=10.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.6821592 = fieldWeight in 1140, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1140)
        0.07013567 = product of:
          0.14027134 = sum of:
            0.14027134 = weight(_text_:project in 1140) [ClassicSimilarity], result of:
              0.14027134 = score(doc=1140,freq=6.0), product of:
                0.24808002 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.05877307 = queryNorm
                0.5654278 = fieldWeight in 1140, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1140)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This project evaluated metadata in digital LGBTQ+ community archives in order to determine its quality and how metadata quality affects the sustainability of digital community archives. The project takes a case study approach, using content analysis to evaluate the metadata quality of three LGBTQ+ digital archives: Transas City, The History Project, and ONE Archives. The analysis found that metadata in LGBTQ+ digital community archives is inconsistent and often only meets the minimum requirements for quality metadata. Further, the study concluded that professional guidelines and practices for metadata strip away the personality and uniqueness that are key to community archives' success and purpose.
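    A completeness check is the usual first pass in this kind of content analysis. A toy sketch, assuming records exported as dicts with a loosely Dublin-Core-like field set (the records are invented):

      # Share of records that fill each metadata field
      records = [
          {"title": "Pride march, 1978", "creator": "", "subject": "parades", "rights": ""},
          {"title": "Oral history interview", "creator": "Volunteer A", "subject": "", "rights": "CC-BY"},
      ]
      fields = ["title", "creator", "subject", "rights"]
      for field in fields:
          filled = sum(1 for r in records if r.get(field, "").strip())
          print(f"{field:10s} {filled / len(records):6.0%} complete")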
  7. Gartner, R.: Metadata in the digital library : building an integrated strategy with XML (2021) 0.11
    0.1138986 = product of:
      0.1518648 = sum of:
        0.018344318 = product of:
          0.055032954 = sum of:
            0.055032954 = weight(_text_:objects in 732) [ClassicSimilarity], result of:
              0.055032954 = score(doc=732,freq=2.0), product of:
                0.31238306 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05877307 = queryNorm
                0.17617138 = fieldWeight in 732, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=732)
          0.33333334 = coord(1/3)
        0.10053016 = weight(_text_:digital in 732) [ClassicSimilarity], result of:
          0.10053016 = score(doc=732,freq=22.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.433631 = fieldWeight in 732, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0234375 = fieldNorm(doc=732)
        0.03299032 = weight(_text_:library in 732) [ClassicSimilarity], result of:
          0.03299032 = score(doc=732,freq=12.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.21347894 = fieldWeight in 732, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0234375 = fieldNorm(doc=732)
      0.75 = coord(3/4)
    
    Abstract
    This book provides a practical introduction to metadata for the digital library, describing in detail how to implement a strategic approach which will enable complex digital objects to be discovered, delivered and preserved in the short and long term.
    The range of metadata needed to run a digital library and preserve its collections in the long term is much more extensive and complicated than anything in its traditional counterpart. It includes the same 'descriptive' information which guides users to the resources they require but must supplement this with comprehensive 'administrative' metadata: this encompasses technical details of the files that make up its collections, the documentation of complex intellectual property rights and the extensive set needed to support its preservation in the long term. To accommodate all of this requires the use of multiple metadata standards, all of which have to be brought together into a single integrated whole.
    Metadata in the Digital Library is a complete guide to building a digital library metadata strategy from scratch, using established metadata standards bound together by the markup language XML. The book introduces the reader to the theory of metadata and shows how it can be applied in practice. It lays out the basic principles that should underlie any metadata strategy, including its relation to such fundamentals as the digital curation lifecycle, and demonstrates how they should be put into effect. It introduces the XML language and the key standards for each type of metadata, including Dublin Core and MODS for descriptive metadata and PREMIS for its administrative and preservation counterpart. Finally, the book shows how these can all be integrated using the packaging standard METS. Two case studies from the Warburg Institute in London show how the strategy can be implemented in a working environment. The strategy laid out in this book will ensure that a digital library's metadata will support all of its operations, be fully interoperable with others and enable its long-term preservation. It assumes no prior knowledge of metadata, XML or any of the standards that it covers. It provides both an introduction to best practices in digital library metadata and a manual for their practical implementation.
    Content
    Inhalt: 1 Introduction, Aims and Definitions -- 1.1 Origins -- 1.2 From information science to libraries -- 1.3 The central place of metadata -- 1.4 The book in outline -- 2 Metadata Basics -- 2.1 Introduction -- 2.2 Three types of metadata -- 2.2.1 Descriptive metadata -- 2.2.2 Administrative metadata -- 2.2.3 Structural metadata -- 2.3 The core components of metadata -- 2.3.1 Syntax -- 2.3.2 Semantics -- 2.3.3 Content rules -- 2.4 Metadata standards -- 2.5 Conclusion -- 3 Planning a Metadata Strategy: Basic Principles -- 3.1 Introduction -- 3.2 Principle 1: Support all stages of the digital curation lifecycle -- 3.3 Principle 2: Support the long-term preservation of the digital object -- 3.4 Principle 3: Ensure interoperability -- 3.5 Principle 4: Control metadata content wherever possible -- 3.6 Principle 5: Ensure software independence -- 3.7 Principle 6: Impose a logical system of identifiers -- 3.8 Principle 7: Use standards whenever possible -- 3.9 Principle 8: Ensure the integrity of the metadata itself -- 3.10 Summary: the basic principles of a metadata strategy -- 4 Planning a Metadata Strategy: Applying the Basic Principles -- 4.1 Introduction -- 4.2 Initial steps: standards as a foundation -- 4.2.1 'Off-the shelf' standards -- 4.2.2 Mapping out an architecture and serialising it into a standard -- 4.2.3 Devising a local metadata scheme -- 4.2.4 How standards support the basic principles -- 4.3 Identifiers: everything in its place -- 5 XML: The Syntactical Foundation of Metadata -- 5.1 Introduction -- 5.2 What XML looks like -- 5.3 XML schemas -- 5.4 Namespaces -- 5.5 Creating and editing XML -- 5.6 Transforming XML -- 5.7 Why use XML? -- 6 METS: The Metadata Package -- 6.1 Introduction -- 6.2 Why use METS?
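    As a taste of what the METS chapter covers, a bare-bones METS wrapper around a MODS descriptive section can be assembled with the Python standard library. This is a structural sketch only, not a complete, valid METS profile:

      import xml.etree.ElementTree as ET

      METS = "http://www.loc.gov/METS/"
      MODS = "http://www.loc.gov/mods/v3"
      ET.register_namespace("mets", METS)
      ET.register_namespace("mods", MODS)

      # mets > dmdSec > mdWrap > xmlData > mods is the standard nesting
      mets = ET.Element(f"{{{METS}}}mets")
      dmd = ET.SubElement(mets, f"{{{METS}}}dmdSec", {"ID": "DMD1"})
      wrap = ET.SubElement(dmd, f"{{{METS}}}mdWrap", {"MDTYPE": "MODS"})
      xml_data = ET.SubElement(wrap, f"{{{METS}}}xmlData")
      mods = ET.SubElement(xml_data, f"{{{MODS}}}mods")
      title_info = ET.SubElement(mods, f"{{{MODS}}}titleInfo")
      ET.SubElement(title_info, f"{{{MODS}}}title").text = "A sample digital object"

      print(ET.tostring(mets, encoding="unicode"))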
  8. Acker, A.: Emulation practices for software preservation in libraries, archives, and museums (2021) 0.11
    0.10539091 = product of:
      0.14052121 = sum of:
        0.030573865 = product of:
          0.091721594 = sum of:
            0.091721594 = weight(_text_:objects in 334) [ClassicSimilarity], result of:
              0.091721594 = score(doc=334,freq=2.0), product of:
                0.31238306 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05877307 = queryNorm
                0.29361898 = fieldWeight in 334, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=334)
          0.33333334 = coord(1/3)
        0.087500274 = weight(_text_:digital in 334) [ClassicSimilarity], result of:
          0.087500274 = score(doc=334,freq=6.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.37742734 = fieldWeight in 334, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=334)
        0.022447068 = weight(_text_:library in 334) [ClassicSimilarity], result of:
          0.022447068 = score(doc=334,freq=2.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.14525402 = fieldWeight in 334, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=334)
      0.75 = coord(3/4)
    
    Abstract
    Emulation practices are computational, technical processes that allow one system to reproduce the functions and results of another. This article reports on findings from research following three small teams of information professionals as they implemented emulation practices in their digital preservation programs at a technology museum, a university research library, and a university research archive and technology lab. Results suggest that the distributed teams in this cohort of preservationists have developed different emulation practices for particular kinds of "emulation encounters" in supporting different types of access. I discuss the implications of these findings for digital preservation research and for emulation initiatives providing access to software or software-dependent objects, showing how these findings matter for those developing software preservation workflows and building emulation capacities. The findings suggest that the different emulation practices for preservation, research access, and exhibition undertaken in libraries, archives, and museums result in different forms of access to preserved software: accessing information and experiential access. In examining particular types of access, this research calls into question software emulation as a single, static preservation strategy for information institutions and challenges researchers to examine new forms of access and descriptive representation emerging from these digital preservation strategies.
  9. Siqueira, J.; Martins, D.L.: Workflow models for aggregating cultural heritage data on the web : a systematic literature review (2022) 0.11
    0.10539091 = product of:
      0.14052121 = sum of:
        0.030573865 = product of:
          0.091721594 = sum of:
            0.091721594 = weight(_text_:objects in 464) [ClassicSimilarity], result of:
              0.091721594 = score(doc=464,freq=2.0), product of:
                0.31238306 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05877307 = queryNorm
                0.29361898 = fieldWeight in 464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=464)
          0.33333334 = coord(1/3)
        0.087500274 = weight(_text_:digital in 464) [ClassicSimilarity], result of:
          0.087500274 = score(doc=464,freq=6.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.37742734 = fieldWeight in 464, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=464)
        0.022447068 = weight(_text_:library in 464) [ClassicSimilarity], result of:
          0.022447068 = score(doc=464,freq=2.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.14525402 = fieldWeight in 464, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=464)
      0.75 = coord(3/4)
    
    Abstract
    In recent years, different cultural institutions have made efforts to spread culture through the construction of a single search interface that integrates their digital objects and facilitates data retrieval for lay users. However, integrating cultural data is not a trivial task; therefore, this work performs a systematic literature review on data aggregation workflows in order to answer five questions: What are the projects? What are the planned steps? Which technologies are used? Are the steps performed manually, automatically, or semi-automatically? Which perform semantic search? The searches were carried out in three databases: Networked Digital Library of Theses and Dissertations, Scopus and Web of Science. In Q01, 12 projects were selected. In Q02, 9 stages were identified: Harvesting, Ingestion, Mapping, Indexing, Storing, Monitoring, Enriching, Displaying, and Publishing LOD. In Q03, 19 different technologies were found. In Q04, we identified that most of the solutions are semi-automatic and, in Q05, that most of them perform a semantic search. The analysis of the workflows allowed us to identify that there is no consensus regarding the stages, their nomenclatures, and technologies, and that the discussions presented remain superficial. It did, however, allow us to identify the main steps for implementing the aggregation of cultural data.
    Series
    JASIST special issue on digital humanities (DH): B. Infrastructures of DH
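    Of the nine stages identified, Harvesting is the most standardized; aggregators of this kind commonly pull records over OAI-PMH (our generalization, not a finding quoted from the paper). A minimal sketch that lists record identifiers from an endpoint; the base URL is a placeholder:

      import requests
      import xml.etree.ElementTree as ET

      OAI = "{http://www.openarchives.org/OAI/2.0/}"
      base_url = "https://example.org/oai"        # placeholder OAI-PMH endpoint

      resp = requests.get(
          base_url,
          params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
          timeout=30,
      )
      root = ET.fromstring(resp.content)
      for record in root.iter(f"{OAI}record"):
          print(record.find(f"{OAI}header").findtext(f"{OAI}identifier"))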
  10. Koster, L.: Persistent identifiers for heritage objects (2020) 0.10
    0.10058483 = product of:
      0.1341131 = sum of:
        0.06114773 = product of:
          0.18344319 = sum of:
            0.18344319 = weight(_text_:objects in 5718) [ClassicSimilarity], result of:
              0.18344319 = score(doc=5718,freq=8.0), product of:
                0.31238306 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05877307 = queryNorm
                0.58723795 = fieldWeight in 5718, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5718)
          0.33333334 = coord(1/3)
        0.0505183 = weight(_text_:digital in 5718) [ClassicSimilarity], result of:
          0.0505183 = score(doc=5718,freq=2.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.21790776 = fieldWeight in 5718, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5718)
        0.022447068 = weight(_text_:library in 5718) [ClassicSimilarity], result of:
          0.022447068 = score(doc=5718,freq=2.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.14525402 = fieldWeight in 5718, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5718)
      0.75 = coord(3/4)
    
    Abstract
    Persistent identifiers (PIDs) are essential for getting access to and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects etc. can be used by heritage institutions but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss the questions of what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs are also to be used for linked data. The second part examines current infrastructural practices and existing PID systems, with their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented and used to address a number of practical considerations. This section concludes with a number of recommendations.
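    The resolvability that PIDs promise can be checked mechanically. A small sketch that follows a Handle-style PID through the public hdl.handle.net proxy and reports where it lands; the handle itself is a placeholder:

      import requests

      def resolve_pid(pid: str) -> str:
          """Follow redirects from the Handle proxy and return the final URL."""
          resp = requests.get(f"https://hdl.handle.net/{pid}",
                              allow_redirects=True, timeout=30)
          resp.raise_for_status()
          return resp.url

      print(resolve_pid("20.500.12345/abc"))      # placeholder handle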
  11. Morris, V.: Automated language identification of bibliographic resources (2020) 0.10
    0.09608695 = product of:
      0.1921739 = sum of:
        0.03591531 = weight(_text_:library in 5749) [ClassicSimilarity], result of:
          0.03591531 = score(doc=5749,freq=2.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.23240642 = fieldWeight in 5749, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
        0.15625858 = sum of:
          0.09255506 = weight(_text_:project in 5749) [ClassicSimilarity], result of:
            0.09255506 = score(doc=5749,freq=2.0), product of:
              0.24808002 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.05877307 = queryNorm
              0.37308553 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
          0.063703515 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
            0.063703515 = score(doc=5749,freq=2.0), product of:
              0.20581327 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05877307 = queryNorm
              0.30952093 = fieldWeight in 5749, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=5749)
      0.5 = coord(2/4)
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
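    The article does not print its model, so as a hedged stand-in, a character n-gram classifier over title strings shows the shape of such a pipeline, including a confidence floor like the 99.7% reported above (the training data here is toy-sized):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Toy training data: (title, language code)
      titles = ["The history of the county", "Geschichte der Stadt", "Histoire de la ville",
                "A dictionary of music", "Wörterbuch der Musik", "Dictionnaire de la musique"]
      labels = ["eng", "ger", "fre", "eng", "ger", "fre"]

      clf = make_pipeline(
          TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # character n-grams
          MultinomialNB(),
      )
      clf.fit(titles, labels)

      proba = clf.predict_proba(["Grammaire des langues romanes"])[0]
      best = proba.argmax()
      if proba[best] >= 0.95:              # only assign a code above a confidence floor
          print(clf.classes_[best], round(proba[best], 3))
      else:
          print("no code assigned")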
  12. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: ¬A user sensitive subject protection approach for book search service (2020) 0.09
    0.09232198 = product of:
      0.12309597 = sum of:
        0.07144367 = weight(_text_:digital in 5617) [ClassicSimilarity], result of:
          0.07144367 = score(doc=5617,freq=4.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.3081681 = fieldWeight in 5617, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5617)
        0.03174495 = weight(_text_:library in 5617) [ClassicSimilarity], result of:
          0.03174495 = score(doc=5617,freq=4.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.2054202 = fieldWeight in 5617, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5617)
        0.01990735 = product of:
          0.0398147 = sum of:
            0.0398147 = weight(_text_:22 in 5617) [ClassicSimilarity], result of:
              0.0398147 = score(doc=5617,freq=2.0), product of:
                0.20581327 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05877307 = queryNorm
                0.19345059 = fieldWeight in 5617, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5617)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server side of a digital library is becoming less and less trustworthy; thus, how to prevent the disclosure of users' book-query privacy has become a matter of increasingly widespread concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for privacy protection in book search, which requires no change to the book search algorithm running on the server side and no compromise to the accuracy of book search. Second, we present a privacy protection model for book search to formulate the constraints that ideal fake queries should satisfy, that is, (i) the feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) the privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
    Date
    6. 1.2020 17:22:25
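    As a loose illustration of the feature-similarity constraint (not the authors' algorithm), decoy terms can be chosen so that their collection statistics resemble the real query term's while pointing at unrelated subjects; the frequency table is invented:

      # Choose decoy query terms with document frequency close to the real term's
      doc_freq = {"surgery": 812, "oncology": 790, "gardening": 805,
                  "chess": 798, "poetry": 1510, "tax law": 823}

      def fake_queries(real_term: str, k: int = 3):
          target = doc_freq[real_term]
          # Rank other terms by how close their document frequency is to the target
          decoys = sorted((t for t in doc_freq if t != real_term),
                          key=lambda t: abs(doc_freq[t] - target))
          return decoys[:k]

      print(fake_queries("oncology"))      # e.g. ['chess', 'gardening', 'surgery']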
  13. Babcock, K.; Lee, S.; Rajakumar, J.; Wagner, A.: Providing access to digital collections (2020) 0.09
    0.08844842 = product of:
      0.17689684 = sum of:
        0.043237977 = product of:
          0.12971392 = sum of:
            0.12971392 = weight(_text_:objects in 5855) [ClassicSimilarity], result of:
              0.12971392 = score(doc=5855,freq=4.0), product of:
                0.31238306 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05877307 = queryNorm
                0.41523993 = fieldWeight in 5855, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5855)
          0.33333334 = coord(1/3)
        0.13365887 = weight(_text_:digital in 5855) [ClassicSimilarity], result of:
          0.13365887 = score(doc=5855,freq=14.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.57652974 = fieldWeight in 5855, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5855)
      0.5 = coord(2/4)
    
    Abstract
    The University of Toronto Libraries is currently reviewing technology to support its Collections U of T service. Collections U of T provides search and browse access to 375 digital collections (and over 203,000 digital objects) at the University of Toronto Libraries. Digital objects typically include special collections material from the university as well as faculty digital collections, all with unique metadata requirements. The service is currently supported by IIIF-enabled Islandora, with one Fedora back end and multiple Drupal sites per parent collection. Like many institutions making use of Islandora, UTL is now confronted with Drupal 7's end of life and has begun to investigate a migration path forward. This article will summarise the Collections U of T functional requirements and lessons learned from our current technology stack. It will go on to outline our research to date on alternate solutions. The article will review both emerging micro-service solutions and out-of-the-box platforms to provide an overview of the digital collection technology landscape in 2019. Note that our research is focused on reviewing technology solutions for providing access to digital collections, as preservation services are offered through other services at the University of Toronto Libraries.
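    Because the current stack is IIIF-enabled, collection objects stay addressable through IIIF Presentation manifests whichever platform wins the migration. A minimal sketch that lists the canvases of one manifest, assuming the Presentation 2.x layout; the manifest URL is a placeholder:

      import requests

      manifest_url = "https://example.org/iiif/ms-42/manifest"   # placeholder
      manifest = requests.get(manifest_url, timeout=30).json()

      print(manifest.get("label"))
      # Presentation API 2.x: sequences -> canvases, each canvas with a label
      for sequence in manifest.get("sequences", []):
          for canvas in sequence.get("canvases", []):
              print(" -", canvas.get("label"))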
  14. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.09
    0.085533455 = product of:
      0.17106691 = sum of:
        0.053872965 = weight(_text_:library in 5996) [ClassicSimilarity], result of:
          0.053872965 = score(doc=5996,freq=8.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.34860963 = fieldWeight in 5996, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=5996)
        0.11719394 = sum of:
          0.0694163 = weight(_text_:project in 5996) [ClassicSimilarity], result of:
            0.0694163 = score(doc=5996,freq=2.0), product of:
              0.24808002 = queryWeight, product of:
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.05877307 = queryNorm
              0.27981415 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.220981 = idf(docFreq=1764, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
          0.047777634 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
            0.047777634 = score(doc=5996,freq=2.0), product of:
              0.20581327 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05877307 = queryNorm
              0.23214069 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
      0.5 = coord(2/4)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  15. Hoeber, O.: ¬A study of visually linked keywords to support exploratory browsing in academic search (2022) 0.08
    0.08394965 = product of:
      0.1678993 = sum of:
        0.12124394 = weight(_text_:digital in 644) [ClassicSimilarity], result of:
          0.12124394 = score(doc=644,freq=8.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.52297866 = fieldWeight in 644, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=644)
        0.04665536 = weight(_text_:library in 644) [ClassicSimilarity], result of:
          0.04665536 = score(doc=644,freq=6.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.30190483 = fieldWeight in 644, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=644)
      0.5 = coord(2/4)
    
    Abstract
    While the search interfaces used by common academic digital libraries provide easy access to a wealth of peer-reviewed literature, their interfaces provide little support for exploratory browsing. When faced with a complex search task (such as one that requires knowledge discovery), exploratory browsing is an important first step in an exploratory search process. To more effectively support exploratory browsing, we have designed and implemented a novel academic digital library search interface (KLink Search) with two new features: visually linked keywords and an interactive workspace. To study the potential value of these features, we have conducted a controlled laboratory study with 32 participants, comparing KLink Search to a baseline digital library search interface modeled after that used by IEEE Xplore. Based on subjective opinions, objective performance, and behavioral data, we show the value of adding lightweight visual and interactive features to academic digital library search interfaces to support exploratory browsing.
  16. Organisciak, P.; Schmidt, B.M.; Downie, J.S.: Giving shape to large digital libraries through exploratory data analysis (2022) 0.08
    0.08124565 = product of:
      0.1624913 = sum of:
        0.13555482 = weight(_text_:digital in 473) [ClassicSimilarity], result of:
          0.13555482 = score(doc=473,freq=10.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.58470786 = fieldWeight in 473, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
        0.026936483 = weight(_text_:library in 473) [ClassicSimilarity], result of:
          0.026936483 = score(doc=473,freq=2.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.17430481 = fieldWeight in 473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
      0.5 = coord(2/4)
    
    Abstract
    The emergence of large multi-institutional digital libraries has opened the door to aggregate-level examinations of the published word. Such large-scale analysis offers a new way to pursue traditional problems in the humanities and social sciences, using digital methods to ask routine questions of large corpora. However, inquiry into multiple centuries of books is constrained by the burdens of scale, where statistical inference is technically complex and limited by hurdles to access and flexibility. This work examines the role that exploratory data analysis and visualization tools may play in understanding large bibliographic datasets. We present one such tool, HathiTrust+Bookworm, which allows multifaceted exploration of the multimillion-work HathiTrust Digital Library, and center it in the broader space of scholarly tools for exploratory data analysis.
    Series
    JASIST special issue on digital humanities (DH): C. Methodological innovations, challenges, and new interest in DH
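    Bookworm-style queries reduce to grouped counts over bibliographic facets. The same shape of analysis at toy scale, assuming a hypothetical per-volume CSV with year and word_count columns:

      import pandas as pd

      # Hypothetical extract: one row per volume, columns year and word_count
      df = pd.read_csv("volumes.csv")
      by_year = df.groupby("year")["word_count"].sum()
      print(by_year.tail(10))    # totals for the ten most recent imprint years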
  17. McElfresh, L.K.: Creator name standardization using faceted vocabularies in the BTAA geoportal : Michigan State University libraries digital repository case study (2023) 0.08
    0.07864334 = product of:
      0.15728667 = sum of:
        0.10002114 = weight(_text_:digital in 1178) [ClassicSimilarity], result of:
          0.10002114 = score(doc=1178,freq=4.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.43143538 = fieldWeight in 1178, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1178)
        0.057265528 = product of:
          0.114531055 = sum of:
            0.114531055 = weight(_text_:project in 1178) [ClassicSimilarity], result of:
              0.114531055 = score(doc=1178,freq=4.0), product of:
                0.24808002 = queryWeight, product of:
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.05877307 = queryNorm
                0.4616698 = fieldWeight in 1178, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.220981 = idf(docFreq=1764, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Digital libraries incorporate metadata from varied sources, ranging from traditional catalog data to author-supplied descriptions. The Big Ten Academic Alliance (BTAA) Geoportal unites geospatial resources from the libraries of the BTAA, compounding the variability of metadata. The BTAA Geospatial Information Network's (BTAA GIN) Metadata Committee works to ensure completeness and consistency of metadata in the Geoportal, including a project to standardize the contents of the Creator field. The project comprises an OpenRefine data cleaning phase; evaluation of controlled vocabularies for semiautomated matching via OpenRefine reconciliation; and development and testing of a best practices guide for application of a controlled vocabulary.
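    Semiautomated matching of this kind can be prototyped locally before wiring up an OpenRefine reconciliation service: cluster free-text creator strings against an authority list by string similarity. A toy sketch with the standard library; the names are invented:

      import difflib

      authority = ["Michigan State University", "United States. Geological Survey"]
      raw_creators = ["Michigan State Univ.", "U.S. Geological Survey", "MSU Libraries"]

      for raw in raw_creators:
          # Return the single closest authority form above the similarity cutoff
          match = difflib.get_close_matches(raw, authority, n=1, cutoff=0.6)
          print(f"{raw!r:35} -> {match[0] if match else 'NO MATCH'}")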
  18. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.08
    0.07662795 = product of:
      0.102170594 = sum of:
        0.0505183 = weight(_text_:digital in 950) [ClassicSimilarity], result of:
          0.0505183 = score(doc=950,freq=2.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.21790776 = fieldWeight in 950, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.03174495 = weight(_text_:library in 950) [ClassicSimilarity], result of:
          0.03174495 = score(doc=950,freq=4.0), product of:
            0.15453665 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.05877307 = queryNorm
            0.2054202 = fieldWeight in 950, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.01990735 = product of:
          0.0398147 = sum of:
            0.0398147 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
              0.0398147 = score(doc=950,freq=2.0), product of:
                0.20581327 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05877307 = queryNorm
                0.19345059 = fieldWeight in 950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=950)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Purpose: With the shift to an information-based society and the decentralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload".
    Design/methodology/approach: A concept analysis using Rodgers' approach, based on a corpus of documents published between 2010 and September 2020, was conducted. One surrogate for "information overload" was identified: "cognitive overload". The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science, and were retrieved from three databases: Association for Computing Machinery (ACM) Digital Library, SCOPUS and Library and Information Science Abstracts (LISA).
    Findings: The themes identified from the concept analysis allowed the authors to extract the triggers, manifestations and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals and the information environment. In terms of manifestations, information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed the authors to provide a definition of information overload.
    Originality/value: Through their concept analysis, the authors were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
  19. Boczkowski, P.; Mitchelstein, E.: ¬The digital environment : How we live, learn, work, and play now (2021) 0.07
    0.074983045 = product of:
      0.14996609 = sum of:
        0.1340402 = weight(_text_:digital in 1003) [ClassicSimilarity], result of:
          0.1340402 = score(doc=1003,freq=22.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.57817465 = fieldWeight in 1003, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.03125 = fieldNorm(doc=1003)
        0.015925879 = product of:
          0.031851757 = sum of:
            0.031851757 = weight(_text_:22 in 1003) [ClassicSimilarity], result of:
              0.031851757 = score(doc=1003,freq=2.0), product of:
                0.20581327 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05877307 = queryNorm
                0.15476047 = fieldWeight in 1003, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1003)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Increasingly we live through our personal screens; we work, play, socialize, and learn digitally. The shift to remote everything during the pandemic was another step in a decades-long march toward the digitization of everyday life made possible by innovations in media, information, and communication technology. In The Digital Environment, Pablo Boczkowski and Eugenia Mitchelstein offer a new way to understand the role of the digital in our daily lives, calling on us to turn our attention from our discrete devices and apps to the array of artifacts and practices that make up the digital environment that envelops every aspect of our social experience. Boczkowski and Mitchelstein explore a series of issues raised by the digital takeover of everyday life, drawing on interviews with a variety of experts. They show how existing inequities of gender, race, ethnicity, education, and class are baked into the design and deployment of technology, and describe emancipatory practices that counter this--including the use of Twitter as a platform for activism through such hashtags as #BlackLivesMatter and #MeToo. They discuss the digitization of parenting, schooling, and dating--noting, among other things, that today we can both begin and end relationships online. They describe how digital media shape our consumption of sports, entertainment, and news, and consider the dynamics of political campaigns, disinformation, and social activism. Finally, they report on developments in three areas that will be key to our digital future: data science, virtual reality, and space exploration.
    Argues for a holistic view of the digital environment in which many of us now live, as neither determined by the features of technology nor uniformly negative for society.
    Content
    1. Three Environments, One Life -- Part I: Foundations -- 2. Mediatization -- 3. Algorithms -- 4. Race and Ethnicity -- 5. Gender -- Part II: Institutions -- 6. Parenting -- 7. Schooling -- 8. Working -- 9. Dating -- Part III: Leisure -- 10. Sports -- 11. Televised Entertainment -- 12. News -- Part IV: Politics -- 13. Misinformation and Disinformation -- 14. Electoral Campaigns -- 15. Activism -- Part V: Innovations -- 16. Data Science -- 17. Virtual Reality -- 18. Space Exploration -- 19. Bricks and Cracks in the Digital Environment
    Date
    22. 6.2023 18:25:18
    LCSH
    Digital media / Social aspects
    Subject
    Digital media / Social aspects
  20. Lopes Martins, D.; Silva Lemos, D.L. da; Rosa de Oliveira, L.F.; Siqueira, J.; Carmo, D. do; Nunes Medeiros, V.: Information organization and representation in digital cultural heritage in Brazil : systematic mapping of information infrastructure in digital collections for data science applications (2023) 0.07
    0.07176811 = product of:
      0.14353622 = sum of:
        0.030573865 = product of:
          0.091721594 = sum of:
            0.091721594 = weight(_text_:objects in 968) [ClassicSimilarity], result of:
              0.091721594 = score(doc=968,freq=2.0), product of:
                0.31238306 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.05877307 = queryNorm
                0.29361898 = fieldWeight in 968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=968)
          0.33333334 = coord(1/3)
        0.112962365 = weight(_text_:digital in 968) [ClassicSimilarity], result of:
          0.112962365 = score(doc=968,freq=10.0), product of:
            0.23183343 = queryWeight, product of:
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.05877307 = queryNorm
            0.4872566 = fieldWeight in 968, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.944552 = idf(docFreq=2326, maxDocs=44218)
              0.0390625 = fieldNorm(doc=968)
      0.5 = coord(2/4)
    
    Abstract
    This paper focuses on data science in digital cultural heritage in Brazil, where there is a lack of systematized information and curated databases for the integrated organization of documentary knowledge. Thus, the aim was to systematically map the different forms of information organization and representation applied to objects from collections belonging to institutions affiliated with the federal government's Special Department of Culture. This diagnosis is then used to discuss the requirements for devising strategies that favor a better data science information infrastructure for reusing information on Brazil's cultural heritage. Content analysis was used to identify analytical categories and obtain a broader understanding of the documentary sources of these institutions in order to extract, analyze, and interpret the data involved. A total of 215 hyperlinks that can be considered cultural collections of the institutions studied were identified, representing 2,537,921 cultural heritage items. The results show that the online publication of Brazil's digital cultural heritage is limited in terms of technology, copyright licensing, and established information organization practices. This paper provides a conceptual and analytical view of the requirements for formulating strategies aimed at building a data science information infrastructure for Brazilian digital cultural collections that can serve future projects.

Languages

  • e 362
  • d 55
  • pt 2
  • m 1

Types

  • a 388
  • el 59
  • m 16
  • p 8
  • s 3
  • x 1

Subjects