Search (293 results, page 1 of 15)

  • year_i:[2020 TO 2030}
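The active year filter uses Lucene/Solr range syntax with mixed brackets: [2020 TO 2030} includes the lower bound (2020) and excludes the upper bound (2030). A minimal sketch of how such a filter query could be sent to a Solr core over HTTP (the host, core name, and query string are illustrative assumptions, not details of this catalog):

```python
import requests

# Hypothetical Solr endpoint and core; only the fq syntax is the point here.
resp = requests.get(
    "http://localhost:8983/solr/mycore/select",
    params={
        "q": "turing test",                 # free-text query (assumed)
        "fq": "year_i:[2020 TO 2030}",      # half-open range: 2020 <= year < 2030
        "rows": 20,
    },
)
print(resp.json()["response"]["numFound"])  # total hits, e.g. 293
```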
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.14
    0.13695967 = sum of:
      0.097996555 = product of:
        0.29398966 = sum of:
          0.29398966 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
            0.29398966 = score(doc=862,freq=2.0), product of:
              0.5230965 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.061700378 = queryNorm
              0.56201804 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.33333334 = coord(1/3)
      0.03896311 = product of:
        0.07792622 = sum of:
          0.07792622 = weight(_text_:work in 862) [ClassicSimilarity], result of:
            0.07792622 = score(doc=862,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.3440991 = fieldWeight in 862, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.5 = coord(1/2)
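Each score breakdown in these results is Lucene ClassicSimilarity (TF-IDF) explain output, and its arithmetic can be reproduced from the printed constants. A minimal Python check of the first clause above (every constant is copied from the tree; nothing else is assumed):

```python
import math

# Constants copied from the explain tree for doc 862, term "_text_:3a".
max_docs, doc_freq = 44218, 24      # idf inputs
freq, field_norm = 2.0, 0.046875    # raw term frequency and field norm
query_norm = 0.061700378

idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011
tf = math.sqrt(freq)                             # 1.4142135
query_weight = idf * query_norm                  # 0.5230965
field_weight = tf * idf * field_norm             # 0.56201804
clause = query_weight * field_weight             # 0.29398966
print(clause * (1 / 3))                          # 0.09799655 after coord(1/3)
```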
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their ability to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and from the sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019 and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
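For readers who want to try the kind of detector scoring the abstract describes, a minimal sketch follows. It assumes the publicly released RoBERTa-based GPT-2 output detector checkpoint on the Hugging Face hub; the model id, its "Real"/"Fake" labels, and the sample text are assumptions about that public artifact, not details taken from the paper:

```python
from transformers import pipeline

# OpenAI's 2019 GPT-2 output detector, as republished on the Hugging Face hub.
detector = pipeline("text-classification", model="roberta-base-openai-detector")

result = detector("This passage may or may not have been machine-generated.")[0]
print(result)  # e.g. {'label': 'Real', 'score': 0.97}
```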
  2. Cooke, N.A.; Kitzie, V.L.: Outsiders-within-Library and Information Science : reprioritizing the marginalized in critical sociocultural work (2021) 0.09
    0.09256473 = product of:
      0.18512946 = sum of:
        0.18512946 = sum of:
          0.13497217 = weight(_text_:work in 351) [ClassicSimilarity], result of:
            0.13497217 = score(doc=351,freq=12.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.5959971 = fieldWeight in 351, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=351)
          0.050157297 = weight(_text_:22 in 351) [ClassicSimilarity], result of:
            0.050157297 = score(doc=351,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 351, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=351)
      0.5 = coord(1/2)
    
    Abstract
    While there are calls for new paradigms within the profession, there are also existing subgenres that would fit this bill if they were fully acknowledged. This essay argues that underrepresented and otherwise marginalized scholars have already produced significant work within social, cultural, and community-oriented paradigms; social justice and advocacy; and diversity, equity, and inclusion. This work has not been sufficiently valued or promoted. Furthermore, the surrounding structural conditions have resulted in the dismissal, violent review and rejection, and erasure of underrepresented scholars' work, and in the stigmatization and delegitimization of these scholars. They are "outsiders-within-LIS." By identifying the outsiders-within-LIS through the frame of standpoint theories, the authors suggest that a new paradigm does not need to be created; rather, an existing paradigm needs to be recognized and reprioritized. This reprioritized paradigm of critical sociocultural work has enriched and expanded the field, and will continue to do so while decolonizing LIS curricula.
    Date
    18. 9.2021 13:22:27
  3. Dhillon, P.; Singh, M.: An extended ontology model for trust evaluation using advanced hybrid ontology (2023) 0.08
    0.08202091 = sum of:
      0.05446983 = product of:
        0.16340949 = sum of:
          0.16340949 = weight(_text_:objects in 981) [ClassicSimilarity], result of:
            0.16340949 = score(doc=981,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.49828792 = fieldWeight in 981, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=981)
        0.33333334 = coord(1/3)
      0.02755108 = product of:
        0.05510216 = sum of:
          0.05510216 = weight(_text_:work in 981) [ClassicSimilarity], result of:
            0.05510216 = score(doc=981,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 981, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=981)
        0.5 = coord(1/2)
    
    Abstract
    In the booming area of Internet technology, the concept of the Internet of Things (IoT) holds a distinct position, interconnecting a large number of smart objects. The presented work evaluates trust and reliability in the context of the social IoT (SIoT). The proposed framework is divided into two blocks, namely the Verification Block (VB) and the Evaluation Block (EB). VB defines various ontology-based relationships computed for the objects, reflecting the security and trustworthiness of an accessed service. EB, in turn, performs the feedback analysis, a valuable step that computes and governs the success rate of the service. A support vector machine (SVM) is applied to categorise the trust-based evaluation. The security aspect of the proposed approach is comparatively evaluated for DDoS and malware attacks in terms of success rate, trustworthiness and execution time. The proposed secure ontology-based framework provides better performance compared with existing architectures.
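As a toy illustration of the categorisation step mentioned above, the sketch below trains an SVM to separate trusted from untrusted interactions; the two features and all values are invented, since the paper's actual feature set is not given here:

```python
from sklearn.svm import SVC

# Invented features per interaction: [provider reputation, anomaly score].
X = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8]]
y = [1, 1, 0, 0]  # 1 = trustworthy, 0 = not

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.85, 0.2]]))  # -> [1], classified as trustworthy
```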
  4. Serra, L.G.; Schneider, J.A.; Santarém Segundo, J.E.: Person identifiers in MARC 21 records in a semantic environment (2020) 0.08
    0.07707824 = sum of:
      0.044935312 = product of:
        0.13480593 = sum of:
          0.13480593 = weight(_text_:objects in 127) [ClassicSimilarity], result of:
            0.13480593 = score(doc=127,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.41106653 = fieldWeight in 127, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=127)
        0.33333334 = coord(1/3)
      0.032142926 = product of:
        0.06428585 = sum of:
          0.06428585 = weight(_text_:work in 127) [ClassicSimilarity], result of:
            0.06428585 = score(doc=127,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.28386727 = fieldWeight in 127, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0546875 = fieldNorm(doc=127)
        0.5 = coord(1/2)
    
    Abstract
    This article discusses how libraries can include person identifiers in the MARC format. It suggests using URIs in fields and subfields to help transition the data to an RDF model and to prepare the catalog for Linked Data. It analyzes the selection of URIs and Real-World Objects, and the use of tag 024 to describe person identifiers in authority records. When a creator or collaborator is identified in a work, the identifiers are transferred from the authority record to the bibliographic record. The article concludes that URI-based descriptions can provide a better experience for users, offering other methods of discovery.
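As a concrete illustration of the pattern the article discusses, an authority-record 024 field carrying a person identifier as a URI might look like the line below (the VIAF number is hypothetical; first indicator 7 signals that the identifier source is named in subfield $2):

    024 7# $a http://viaf.org/viaf/123456789 $2 uri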
  5. Marcondes, C.H.: Towards a vocabulary to implement culturally relevant relationships between digital collections in heritage institutions (2020) 0.07
    0.0662904 = sum of:
      0.045391526 = product of:
        0.13617457 = sum of:
          0.13617457 = weight(_text_:objects in 5757) [ClassicSimilarity], result of:
            0.13617457 = score(doc=5757,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.41523993 = fieldWeight in 5757, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5757)
        0.33333334 = coord(1/3)
      0.020898875 = product of:
        0.04179775 = sum of:
          0.04179775 = weight(_text_:22 in 5757) [ClassicSimilarity], result of:
            0.04179775 = score(doc=5757,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.19345059 = fieldWeight in 5757, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5757)
        0.5 = coord(1/2)
    
    Abstract
    Cultural heritage institutions are publishing their digital collections over the web as LOD. This is a new step in the patrimonialization and curatorial processes developed by such institutions. Many of these collections are thematically overlapping and complementary. Frequently, objects in these collections present culturally relevant relationships, such as a book about a painting, or a draft or sketch of a famous painting. LOD technology enables such heritage records to be interlinked, achieving interoperability and adding value to digital collections, thus empowering heritage institutions. One aim of this research is to characterize such culturally relevant relationships and organize them in a vocabulary. Use cases and examples of relationships between objects, suggested by curators or mentioned in the literature and in conceptual models such as FRBR/LRM, CIDOC CRM and RiC-CM, were collected and used as examples of, or inspiration for, culturally relevant relationships. The relationships identified were collated and compared to find those with the same or similar meaning, then synthesized and normalized. A set of thirty-three culturally relevant relationships was identified and formalized as a LOD property vocabulary to be used by digital curators to interlink digital collections. The results presented are provisional and a starting point to be discussed, tested, and enhanced.
    Date
    4. 3.2020 14:22:41
  6. Soos, C.; Leazer, H.H.: Presentations of authorship in knowledge organization (2020) 0.06
    0.06456591 = sum of:
      0.032096654 = product of:
        0.09628996 = sum of:
          0.09628996 = weight(_text_:objects in 21) [ClassicSimilarity], result of:
            0.09628996 = score(doc=21,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.29361898 = fieldWeight in 21, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=21)
        0.33333334 = coord(1/3)
      0.032469258 = product of:
        0.064938515 = sum of:
          0.064938515 = weight(_text_:work in 21) [ClassicSimilarity], result of:
            0.064938515 = score(doc=21,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.28674924 = fieldWeight in 21, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=21)
        0.5 = coord(1/2)
    
    Abstract
    The "author" is a concept central to many publication and documentation practices, often carrying legal, professional, social, and personal importance. Typically viewed as the solitary owner of their creations, a person is held responsible for their work and positioned to receive the praise and criticism that may emerge in its wake. Although the role of the individual within creative production is undeniable, literary (Foucault 1977; Bloom 1997) and knowledge organization (Moulaison et. al. 2014) theorists have challenged the view that the work of one person can-or should-be fully detached from their professional and personal networks. As these relationships often provide important context and reveal the role of community in the creation of new things, their absence from catalog records presents a falsely simplified view of the creative process. Here, we address the consequences of what we call the "author-asowner" concept and suggest that an "author-as-node" approach, which situates an author within their networks of influence, may allow for more relational representation within knowledge organization systems, a framing that emphasizes rather than erases the messy complexities that affect the production of new objects and ideas.
  7. Gorichanaz, T.: Sanctuary : an institutional vision for the digital age (2021) 0.06
    0.060665432 = product of:
      0.121330865 = sum of:
        0.121330865 = sum of:
          0.079533115 = weight(_text_:work in 107) [ClassicSimilarity], result of:
            0.079533115 = score(doc=107,freq=6.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.35119468 = fieldWeight in 107, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=107)
          0.04179775 = weight(_text_:22 in 107) [ClassicSimilarity], result of:
            0.04179775 = score(doc=107,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.19345059 = fieldWeight in 107, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=107)
      0.5 = coord(1/2)
    
    Abstract
    Purpose Trends in information technology and contemplative practices compel us to consider the intersections of information and contemplation. The purpose of this paper is to consider these intersections at the level of institutions. Design/methodology/approach First, the notion of institution is defined and discussed, along with information institutions and contemplative institutions. Next, sanctuary is proposed and explored as a vision for institutions in the digital age. Findings Sanctuary is a primordial human institution that has especial urgency in the digital age. This paper develops an info-contemplative framework for sanctuaries, including the elements: stability, silence, refuge, privacy and reform. Research limitations/implications This is a conceptual paper that, though guided by prior empirical and theoretical work, would benefit from application, validation and critique. This paper is meant as a starting point for discussions of institutions for the digital age. Practical implications As much as this paper is meant to prompt further research, it also provides guidance and inspiration for professionals to infuse their work with aspects of sanctuary and be attentive to the tensions inherent in sanctuary. Originality/value This paper builds on discourse at the intersection of information studies and contemplative studies, also connecting this with recent work on information institutions.
    Date
    22. 1.2021 14:20:55
  8. Siqueira, J.; Martins, D.L.: Workflow models for aggregating cultural heritage data on the web : a systematic literature review (2022) 0.06
    0.055055887 = sum of:
      0.032096654 = product of:
        0.09628996 = sum of:
          0.09628996 = weight(_text_:objects in 464) [ClassicSimilarity], result of:
            0.09628996 = score(doc=464,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.29361898 = fieldWeight in 464, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=464)
        0.33333334 = coord(1/3)
      0.022959232 = product of:
        0.045918465 = sum of:
          0.045918465 = weight(_text_:work in 464) [ClassicSimilarity], result of:
            0.045918465 = score(doc=464,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 464, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=464)
        0.5 = coord(1/2)
    
    Abstract
    In recent years, different cultural institutions have made efforts to spread culture through the construction of a unique search interface that integrates their digital objects and facilitates data retrieval for lay users. However, integrating cultural data is not a trivial task; therefore, this work performs a systematic literature review on data aggregation workflows in order to answer five questions: What are the projects? What are the planned steps? Which technologies are used? Are the steps performed manually, automatically, or semi-automatically? Which perform semantic search? The searches were carried out in three databases: Networked Digital Library of Theses and Dissertations, Scopus and Web of Science. In Q01, 12 projects were selected. In Q02, 9 stages were identified: Harvesting, Ingestion, Mapping, Indexing, Storing, Monitoring, Enriching, Displaying, and Publishing LOD. In Q03, 19 different technologies were found. In Q04, we identified that most of the solutions are semi-automatic and, in Q05, that most of them perform a semantic search. The analysis of the workflows showed that there is no consensus regarding the stages, their nomenclature, or the technologies, and that existing discussions remain superficial; it nevertheless allowed us to identify the main steps for implementing the aggregation of cultural data.
  9. Velios, A.; St.John, K.: Linked conservation data : the adoption and use of vocabularies in the field of heritage conservation for publishing conservation records as linked data (2021) 0.06
    0.055055887 = sum of:
      0.032096654 = product of:
        0.09628996 = sum of:
          0.09628996 = weight(_text_:objects in 580) [ClassicSimilarity], result of:
            0.09628996 = score(doc=580,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.29361898 = fieldWeight in 580, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=580)
        0.33333334 = coord(1/3)
      0.022959232 = product of:
        0.045918465 = sum of:
          0.045918465 = weight(_text_:work in 580) [ClassicSimilarity], result of:
            0.045918465 = score(doc=580,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 580, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=580)
        0.5 = coord(1/2)
    
    Abstract
    One of the fundamental roles of memory organisations is to safe-keep collections, and this includes activities around their preservation and conservation. Conservators produce documentation records of their work to assist future interpretation of objects and to explain decision making for conservation. This documentation may exist as structured data or free text, and in both cases it requires vocabularies that can be understood widely in the domain. This paper describes a survey of conservation professionals which allowed us to compile the vocabularies used in the domain. It includes an analysis of the vocabularies with key findings: a) overlapping terms with multiple definitions; b) partial coverage of the domain, which lacks controlled vocabularies for condition types and treatment techniques; and c) the limited formats in which vocabularies are published, making them difficult to use within Linked Data implementations. The paper then describes an approach to improve the vocabulary landscape in conservation by providing guidelines for encoding and aligning vocabularies, as well as considering third-party platforms for sharing vocabularies in a sustainable way. The paper concludes with a summary of our findings and recommendations.
  10. Geras, A.; Siudem, G.; Gagolewski, M.: Should we introduce a dislike button for academic articles? (2020) 0.05
    0.052629728 = product of:
      0.105259456 = sum of:
        0.105259456 = sum of:
          0.05510216 = weight(_text_:work in 5620) [ClassicSimilarity], result of:
            0.05510216 = score(doc=5620,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 5620, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=5620)
          0.050157297 = weight(_text_:22 in 5620) [ClassicSimilarity], result of:
            0.050157297 = score(doc=5620,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 5620, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5620)
      0.5 = coord(1/2)
    
    Abstract
    There is a mutual resemblance between the behavior of users of Stack Exchange and the dynamics of the citation accumulation process in the scientific community, which enabled us to tackle the seemingly intractable problem of assessing the impact of introducing "negative" citations. Although the most frequent reason to cite an article is to highlight the connection between the two publications, researchers sometimes mention an earlier work to cast a negative light. When computing citation-based scores, for instance the h-index, information about the reason why an article was mentioned is neglected. Therefore, it can be questioned whether these indices describe scientific achievements accurately. In this article we shed light on the problem of "negative" citations, analyzing data from Stack Exchange and, to draw more universal conclusions, deriving an approximation of citation scores. We show that the quantified influence of introducing negative citations is of lesser importance and that they could be used as an indicator of where the attention of the scientific community is allocated.
    Date
    6. 1.2020 18:10:22
  11. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.05
    0.052629728 = product of:
      0.105259456 = sum of:
        0.105259456 = sum of:
          0.05510216 = weight(_text_:work in 5996) [ClassicSimilarity], result of:
            0.05510216 = score(doc=5996,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
          0.050157297 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
            0.050157297 = score(doc=5996,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 5996, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5996)
      0.5 = coord(1/2)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  12. Ma, Y.: Relatedness and compatibility : the concept of privacy in Mandarin Chinese and American English corpora (2023) 0.05
    0.052629728 = product of:
      0.105259456 = sum of:
        0.105259456 = sum of:
          0.05510216 = weight(_text_:work in 887) [ClassicSimilarity], result of:
            0.05510216 = score(doc=887,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 887, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=887)
          0.050157297 = weight(_text_:22 in 887) [ClassicSimilarity], result of:
            0.050157297 = score(doc=887,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 887, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=887)
      0.5 = coord(1/2)
    
    Abstract
    This study investigates how privacy as an ethical concept exists in two languages: Mandarin Chinese and American English. The exploration relies on two genres of corpora spanning 10 years (2010-2019): social media posts and news articles. A mixed-methods approach combining structural topic modeling (STM) and human interpretation was used to work with the data. Findings show various privacy-related topics across the two languages. Moreover, some of these topics revealed fundamental incompatibilities for understanding privacy across the two languages. In other words, some of the variations in topics do not just reflect contextual differences; they reveal how the two languages value privacy in different ways that relate back to each society's ethical tradition. This study is one of the first empirically grounded intercultural explorations of the concept of privacy. It shows that natural language is promising for operationalizing intercultural and comparative privacy research, and it provides an examination of the concept as it is understood in these two languages.
    Date
    22. 1.2023 18:59:40
  13. Li, G.; Siddharth, L.; Luo, J.: Embedding knowledge graph of patent metadata to measure knowledge proximity (2023) 0.05
    0.052629728 = product of:
      0.105259456 = sum of:
        0.105259456 = sum of:
          0.05510216 = weight(_text_:work in 920) [ClassicSimilarity], result of:
            0.05510216 = score(doc=920,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 920, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=920)
          0.050157297 = weight(_text_:22 in 920) [ClassicSimilarity], result of:
            0.050157297 = score(doc=920,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 920, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=920)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge proximity refers to the strength of association between any two entities in a structural form that embodies certain aspects of a knowledge base. In this work, we operationalize knowledge proximity within the context of the US Patent Database (the knowledge base) using a knowledge graph (the structural form) named "PatNet", built from patent metadata including citations, inventors, assignees, and domain classifications. We train various graph embedding models on PatNet to obtain embeddings of entities and relations. The cosine similarity between the corresponding (or transformed) embeddings of two entities denotes the knowledge proximity between them. We compare the embedding models in terms of their performance in predicting target entities and explaining domain expansion profiles of inventors and assignees. We then apply the embeddings of the best-performing model to associate homogeneous (e.g., patent-patent) and heterogeneous (e.g., inventor-assignee) pairs of entities.
    Date
    22. 3.2023 12:06:55
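The proximity measure the abstract describes reduces to cosine similarity over learned entity embeddings. A minimal sketch (the vectors are invented stand-ins, not actual PatNet embeddings):

```python
import numpy as np

def knowledge_proximity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two entity embeddings."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented vectors standing in for two patent embeddings.
patent_a = np.array([0.12, -0.40, 0.88])
patent_b = np.array([0.10, -0.35, 0.90])
print(knowledge_proximity(patent_a, patent_b))  # ~0.998 -> high proximity
```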
  14. Hartel, J.: The red thread of information (2020) 0.04
    0.043858107 = product of:
      0.087716214 = sum of:
        0.087716214 = sum of:
          0.045918465 = weight(_text_:work in 5839) [ClassicSimilarity], result of:
            0.045918465 = score(doc=5839,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 5839, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5839)
          0.04179775 = weight(_text_:22 in 5839) [ClassicSimilarity], result of:
            0.04179775 = score(doc=5839,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.19345059 = fieldWeight in 5839, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5839)
      0.5 = coord(1/2)
    
    Abstract
    Purpose In The Invisible Substrate of Information Science, a landmark article about the discipline of information science, Marcia J. Bates wrote that "... we are always looking for the red thread of information in the social texture of people's lives" (1999a, p. 1048). To sharpen our understanding of information science and to elaborate Bates' idea, the work at hand answers the question: Just what does the red thread of information entail? Design/methodology/approach Through a close reading of Bates' oeuvre and by applying concepts from the reference literature of information science, nine composite entities that qualify as the red thread of information are identified, elaborated, and related to existing concepts in the information science literature. In the spirit of a scientist-poet (White, 1999), several playful metaphors related to the color red are employed. Findings Bates' red thread of information entails: terms, genres, literatures, classification systems, scholarly communication, information retrieval, information experience, information institutions, and information policy. This same constellation of phenomena can be found in resonant visions of information science, namely domain analysis (Hjørland, 2002), ethnography of infrastructure (Star, 1999), and social epistemology (Shera, 1968). Research limitations/implications With the vital vermilion filament in clear view, newcomers can more easily engage the material, conceptual, and social machinery of information science, and specialists are reminded of what constitutes information science as a whole. Future researchers and scientist-poets may wish to supplement the nine composite entities with additional, emergent information phenomena. Originality/value Though the explication of information science that follows is relatively orthodox and time-bound, the paper offers an imaginative, accessible, yet technically precise way of understanding the field.
    Date
    30. 4.2020 21:03:22
  15. Ekstrand, M.D.; Wright, K.L.; Pera, M.S.: Enhancing classroom instruction with online news (2020) 0.04
    0.043858107 = product of:
      0.087716214 = sum of:
        0.087716214 = sum of:
          0.045918465 = weight(_text_:work in 5844) [ClassicSimilarity], result of:
            0.045918465 = score(doc=5844,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 5844, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5844)
          0.04179775 = weight(_text_:22 in 5844) [ClassicSimilarity], result of:
            0.04179775 = score(doc=5844,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.19345059 = fieldWeight in 5844, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5844)
      0.5 = coord(1/2)
    
    Abstract
    Purpose This paper investigates how school teachers look for informational texts for their classrooms. Access to current, varied and authentic informational texts improves learning outcomes for K-12 students, but many teachers lack resources to expand and update readings. The Web offers freely available resources, but finding suitable ones is time-consuming. This research lays the groundwork for building tools to ease that burden. Design/methodology/approach This paper reports qualitative findings from a study in two stages: (1) a set of semistructured interviews, based on the critical incident technique, eliciting teachers' information-seeking practices and challenges; and (2) observations of teachers using a prototype teaching-oriented news search tool under a think-aloud protocol. Findings Teachers articulated different objectives and ways of using readings in their classrooms; goals and self-reported practices varied by experience level. Teachers struggled to formulate queries that are likely to return readings on specific course topics, instead searching directly for abstract topics. Experience differences did not translate into observable differences in search skill or success in the lab study. Originality/value There is limited work on teachers' information-seeking practices, particularly on how teachers look for texts for classroom use. This paper describes how teachers look for information in this context, setting the stage for future development and research on how to support this use case. Understanding and supporting teachers looking for information is a rich area for future research, due to the complexity of the information need and the fact that teachers are not looking for information for themselves.
    Date
    20. 1.2015 18:30:22
  16. Thelwall, M.; Thelwall, S.: A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.04
    0.043858107 = product of:
      0.087716214 = sum of:
        0.087716214 = sum of:
          0.045918465 = weight(_text_:work in 178) [ClassicSimilarity], result of:
            0.045918465 = score(doc=178,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=178)
          0.04179775 = weight(_text_:22 in 178) [ClassicSimilarity], result of:
            0.04179775 = score(doc=178,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.19345059 = fieldWeight in 178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=178)
      0.5 = coord(1/2)
    
    Abstract
    Purpose Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19. Design/methodology/approach A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020. Findings The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news. Research limitations/implications Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed. Practical implications Public health campaigns in future may consider encouraging grass roots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues. Originality/value This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  17. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.04
    0.043858107 = product of:
      0.087716214 = sum of:
        0.087716214 = sum of:
          0.045918465 = weight(_text_:work in 950) [ClassicSimilarity], result of:
            0.045918465 = score(doc=950,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.20276234 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
          0.04179775 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
            0.04179775 = score(doc=950,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.19345059 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
      0.5 = coord(1/2)
    
    Abstract
    Purpose With the shift to an information-based society and the decentralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload". In order to do so, a concept analysis using Rodgers' approach was performed. Design/methodology/approach A concept analysis using Rodgers' approach, based on a corpus of documents published between 2010 and September 2020, was conducted. One surrogate for "information overload", namely "cognitive overload", was identified. The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science, and were retrieved from three databases: the Association for Computing Machinery (ACM) Digital Library, SCOPUS, and Library and Information Science Abstracts (LISA). Findings The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals, and the information environment. In terms of manifestations, they found that information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed them to provide a definition of information overload. Originality/value Through their concept analysis, the authors were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
  18. Yu, L.; Fan, Z.; Li, A.: ¬A hierarchical typology of scholarly information units : based on a deduction-verification study (2020) 0.04
    0.042694505 = product of:
      0.08538901 = sum of:
        0.08538901 = sum of:
          0.051950812 = weight(_text_:work in 5655) [ClassicSimilarity], result of:
            0.051950812 = score(doc=5655,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2293994 = fieldWeight in 5655, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.03125 = fieldNorm(doc=5655)
          0.0334382 = weight(_text_:22 in 5655) [ClassicSimilarity], result of:
            0.0334382 = score(doc=5655,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.15476047 = fieldWeight in 5655, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5655)
      0.5 = coord(1/2)
    
    Abstract
    Purpose The purpose of this paper is to lay a theoretical foundation for identifying operational information units for library and information professional activities in the context of scholarly communication. Design/methodology/approach The study adopts a deduction-verification approach to formulate a typology of units for scholarly information. It first deduces possible units from an existing conceptualization of information, which defines information as the combined product of data and meaning, and then tests the usefulness of these units via two empirical investigations, one with a group of scholarly papers and the other with a sample of scholarly information users. Findings The results show that, on defining an information unit as a piece of information that is complete in both data and meaning, to such an extent that it remains meaningful to its target audience when retrieved and displayed independently in a database, it is possible to formulate a hierarchical typology of units for scholarly information. The typology proposed in this study consists of three levels, which, in turn, consist of 1, 5 and 44 units, respectively. Research limitations/implications The result of this study has theoretical implications on both the philosophical and conceptual levels: on the philosophical level, it hinges on, and reinforces, the objective view of information; on the conceptual level, it challenges the conceptualization of work by IFLA's Functional Requirements for Bibliographic Records and Library Reference Model but endorses that of the Library of Congress's BIBFRAME 2.0 model. Practical implications It calls for reconsideration of existing operational units in a variety of library and information activities. Originality/value The study strengthens the conceptual foundation of operational information units and brings to light the primacy of "one work" as an information unit and the possibility for it to be supplemented by smaller units.
    Date
    14. 1.2020 11:15:22
  19. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.04
    0.042694505 = product of:
      0.08538901 = sum of:
        0.08538901 = sum of:
          0.051950812 = weight(_text_:work in 566) [ClassicSimilarity], result of:
            0.051950812 = score(doc=566,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2293994 = fieldWeight in 566, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
          0.0334382 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
            0.0334382 = score(doc=566,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.15476047 = fieldWeight in 566, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge Architectures reviews traditional approaches to managing information and explains why they need to adapt to support 21st-century information management and discovery. Exploring the rapidly changing environment in which information is being managed and accessed, the book considers how to use knowledge architectures, the basic structures and designs that underlie all of the parts of an effective information system, to best advantage. Drawing on 40 years of work with a variety of organizations, Bedford explains that failure to understand the structure behind any given system can be the difference between an effective solution and a significant and costly failure. Demonstrating that the information user environment has shifted significantly in the past 20 years, the book explains that end users now expect designs and behaviors that are much closer to the way they think, work, and act. Acknowledging how important it is that those responsible for developing an information or knowledge management system understand knowledge structures, the book goes beyond a traditional library science perspective and uses case studies to help translate the abstract and theoretical to the practical and concrete. Explaining the structures in a simple and intuitive way and providing examples that clearly illustrate the challenges faced by a range of different organizations, Knowledge Architectures is essential reading for those studying and working in library and information science, data science, systems development, database design, and search system architecture and engineering.
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
  20. Boczkowski, P.; Mitchelstein, E.: The digital environment : How we live, learn, work, and play now (2021) 0.04
    0.042694505 = product of:
      0.08538901 = sum of:
        0.08538901 = sum of:
          0.051950812 = weight(_text_:work in 1003) [ClassicSimilarity], result of:
            0.051950812 = score(doc=1003,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2293994 = fieldWeight in 1003, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.03125 = fieldNorm(doc=1003)
          0.0334382 = weight(_text_:22 in 1003) [ClassicSimilarity], result of:
            0.0334382 = score(doc=1003,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.15476047 = fieldWeight in 1003, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1003)
      0.5 = coord(1/2)
    
    Abstract
    Increasingly we live through our personal screens; we work, play, socialize, and learn digitally. The shift to remote everything during the pandemic was another step in a decades-long march toward the digitization of everyday life made possible by innovations in media, information, and communication technology. In The Digital Environment, Pablo Boczkowski and Eugenia Mitchelstein offer a new way to understand the role of the digital in our daily lives, calling on us to turn our attention from our discrete devices and apps to the array of artifacts and practices that make up the digital environment that envelops every aspect of our social experience. Boczkowski and Mitchelstein explore a series of issues raised by the digital takeover of everyday life, drawing on interviews with a variety of experts. They show how existing inequities of gender, race, ethnicity, education, and class are baked into the design and deployment of technology, and describe emancipatory practices that counter this--including the use of Twitter as a platform for activism through such hashtags as #BlackLivesMatter and #MeToo. They discuss the digitization of parenting, schooling, and dating--noting, among other things, that today we can both begin and end relationships online. They describe how digital media shape our consumption of sports, entertainment, and news, and consider the dynamics of political campaigns, disinformation, and social activism. Finally, they report on developments in three areas that will be key to our digital future: data science, virtual reality, and space exploration.
    Date
    22. 6.2023 18:25:18

Languages

  • e 260
  • d 31
  • pt 2

Types

  • a 279
  • el 33
  • m 6
  • p 4
  • x 1