Search (794 results, page 1 of 40)

  • Active filter: year_i:[2020 TO 2030}
  1. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.21
    0.20644064 = product of:
      0.2752542 = sum of:
        0.01841403 = weight(_text_:for in 950) [ClassicSimilarity], result of:
          0.01841403 = score(doc=950,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 950, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.079913534 = weight(_text_:computing in 950) [ClassicSimilarity], result of:
          0.079913534 = score(doc=950,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 950, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.17692661 = sum of:
          0.14489865 = weight(_text_:machinery in 950) [ClassicSimilarity], result of:
            0.14489865 = score(doc=950,freq=2.0), product of:
              0.35214928 = queryWeight, product of:
                7.448392 = idf(docFreq=69, maxDocs=44218)
                0.047278564 = queryNorm
              0.4114694 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.448392 = idf(docFreq=69, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
          0.032027967 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
            0.032027967 = score(doc=950,freq=2.0), product of:
              0.16556148 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047278564 = queryNorm
              0.19345059 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
      0.75 = coord(3/4)
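
    Each indented breakdown in this listing is Lucene's "explain" output for its classic TF-IDF similarity: per matched term, score = queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and the clause sum is scaled by the coord factor. As a minimal sketch (not part of the search service itself), the following Python re-derives this entry's score from the quantities shown above; a flat sum over the four matched terms is equivalent here because the nested machinery/22 sum carries no coord factor of its own.

    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                       # tf(freq) = sqrt(freq)
        term_idf = idf(doc_freq, max_docs)
        query_weight = term_idf * query_norm       # queryWeight above
        field_weight = tf * term_idf * field_norm  # fieldWeight above
        return query_weight * field_weight

    QUERY_NORM, MAX_DOCS, FIELD_NORM = 0.047278564, 44218, 0.0390625
    terms = {                 # term: (freq in doc 950, docFreq in the index)
        "for": (8.0, 18385),
        "computing": (2.0, 475),
        "machinery": (2.0, 69),
        "22": (2.0, 3622),
    }
    total = sum(term_score(freq, df, MAX_DOCS, QUERY_NORM, FIELD_NORM)
                for freq, df in terms.values())
    print(total * 3 / 4)  # coord(3/4): 3 of 4 query clauses matched -> ~0.2064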
    
    Abstract
    Purpose: With the shift to an information-based society and the decentralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and although many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload".
    Design/methodology/approach: A concept analysis using Rodgers' approach was conducted on a corpus of documents published between 2010 and September 2020. One surrogate for "information overload" was identified: "cognitive overload". The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science and were retrieved from three databases: the Association for Computing Machinery (ACM) Digital Library, SCOPUS, and Library and Information Science Abstracts (LISA).
    Findings: The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations and consequences of information overload. Triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals and the information environment. Information overload manifests itself both emotionally and cognitively, and its consequences are both internal and external. These findings allowed the authors to propose a definition of information overload.
    Originality/value: Through the concept analysis, the authors clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
  2. Geras, A.; Siudem, G.; Gagolewski, M.: Should we introduce a dislike button for academic articles? (2020) 0.10
    0.10068709 = product of:
      0.13424945 = sum of:
        0.019136423 = weight(_text_:for in 5620) [ClassicSimilarity], result of:
          0.019136423 = score(doc=5620,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21557912 = fieldWeight in 5620, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=5620)
        0.095896244 = weight(_text_:computing in 5620) [ClassicSimilarity], result of:
          0.095896244 = score(doc=5620,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 5620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=5620)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 5620) [ClassicSimilarity], result of:
              0.038433556 = score(doc=5620,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 5620, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5620)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    There is a mutual resemblance between the behavior of Stack Exchange users and the dynamics of the citation-accumulation process in the scientific community, which enabled us to tackle the outwardly intractable problem of assessing the impact of introducing "negative" citations. Although the most frequent reason to cite an article is to highlight the connection between the two publications, researchers sometimes mention an earlier work to cast a negative light on it. When computing citation-based scores, for instance the h-index, information about the reason why an article was mentioned is neglected. It can therefore be questioned whether these indices describe scientific achievements accurately. In this article we shed light on the problem of "negative" citations, analyzing data from Stack Exchange, and, to draw more universal conclusions, we derive an approximation of citation scores. We show that the quantified influence of introducing negative citations is of lesser importance and that they could be used as an indicator of where the attention of the scientific community is allocated.
    Date
    6. 1.2020 18:10:22
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.2, S.221-229
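
    The h-index mentioned in the abstract is easy to state in code. A minimal sketch, assuming per-paper counts of positive and "negative" citations; the paper counts and the net-of-negatives variant are illustrative, not the authors' model:

    def h_index(citations):
        # h = largest h such that at least h papers have >= h citations each
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    papers = [(12, 2), (9, 0), (7, 5), (5, 1), (2, 0)]  # (positive, negative)
    print(h_index([p + n for p, n in papers]))          # conventional: all citations count -> 4
    print(h_index([max(p - n, 0) for p, n in papers]))  # discounting negative citations -> 3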
  3. Haimson, O.L.; Carter, A.J.; Corvite, S.; Wheeler, B.; Wang, L.; Liu, T.; Lige, A.: The major life events taxonomy : social readjustment, social media information sharing, and online network separation during times of life transition (2021) 0.09
    0.094973646 = product of:
      0.18994729 = sum of:
        0.013020686 = weight(_text_:for in 263) [ClassicSimilarity], result of:
          0.013020686 = score(doc=263,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14668301 = fieldWeight in 263, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=263)
        0.17692661 = sum of:
          0.14489865 = weight(_text_:machinery in 263) [ClassicSimilarity], result of:
            0.14489865 = score(doc=263,freq=2.0), product of:
              0.35214928 = queryWeight, product of:
                7.448392 = idf(docFreq=69, maxDocs=44218)
                0.047278564 = queryNorm
              0.4114694 = fieldWeight in 263, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.448392 = idf(docFreq=69, maxDocs=44218)
                0.0390625 = fieldNorm(doc=263)
          0.032027967 = weight(_text_:22 in 263) [ClassicSimilarity], result of:
            0.032027967 = score(doc=263,freq=2.0), product of:
              0.16556148 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047278564 = queryNorm
              0.19345059 = fieldWeight in 263, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=263)
      0.5 = coord(2/4)
    
    Abstract
    When people experience major life changes, this often impacts their self-presentation, networks, and online behavior in substantial ways. To effectively study major life transitions and events, we surveyed a large U.S. sample (n = 554) to create the Major Life Events Taxonomy, a list of 121 life events in 12 categories. We then applied this taxonomy to a second large U.S. survey sample (n = 775) to understand on average how much social readjustment each event required, how likely each event was to be shared on social media with different types of audiences, and how much online network separation each involved. We found that social readjustment is positively correlated with sharing on social media, with both broad audiences and close ties as well as in online spaces separate from one's network of known ties. Some life transitions involve high levels of sharing with both separate audiences and broad audiences on social media, providing evidence for what previous research has called social media as social transition machinery. Researchers can use the Major Life Events Taxonomy to examine how people's life transition experiences relate to their behaviors, technology use, and health and well-being outcomes.
    Date
    10. 6.2021 19:22:47
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.7, S.933-947
  4. Hartel, J.: The red thread of information (2020) 0.09
    0.09306681 = product of:
      0.18613362 = sum of:
        0.009207015 = weight(_text_:for in 5839) [ClassicSimilarity], result of:
          0.009207015 = score(doc=5839,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.103720546 = fieldWeight in 5839, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5839)
        0.17692661 = sum of:
          0.14489865 = weight(_text_:machinery in 5839) [ClassicSimilarity], result of:
            0.14489865 = score(doc=5839,freq=2.0), product of:
              0.35214928 = queryWeight, product of:
                7.448392 = idf(docFreq=69, maxDocs=44218)
                0.047278564 = queryNorm
              0.4114694 = fieldWeight in 5839, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.448392 = idf(docFreq=69, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5839)
          0.032027967 = weight(_text_:22 in 5839) [ClassicSimilarity], result of:
            0.032027967 = score(doc=5839,freq=2.0), product of:
              0.16556148 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047278564 = queryNorm
              0.19345059 = fieldWeight in 5839, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5839)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: In "The Invisible Substrate of Information Science", a landmark article about the discipline of information science, Marcia J. Bates wrote that "...we are always looking for the red thread of information in the social texture of people's lives" (1999a, p. 1048). To sharpen our understanding of information science and to elaborate Bates' idea, the work at hand answers the question: Just what does the red thread of information entail?
    Design/methodology/approach: Through a close reading of Bates' oeuvre and by applying concepts from the reference literature of information science, nine composite entities that qualify as the red thread of information are identified, elaborated, and related to existing concepts in the information science literature. In the spirit of a scientist-poet (White, 1999), several playful metaphors related to the color red are employed.
    Findings: Bates' red thread of information entails: terms, genres, literatures, classification systems, scholarly communication, information retrieval, information experience, information institutions, and information policy. This same constellation of phenomena can be found in resonant visions of information science, namely domain analysis (Hjørland, 2002), ethnography of infrastructure (Star, 1999), and social epistemology (Shera, 1968).
    Research limitations/implications: With the vital vermilion filament in clear view, newcomers can more easily engage the material, conceptual, and social machinery of information science, and specialists are reminded of what constitutes information science as a whole. Future researchers and scientist-poets may wish to supplement the nine composite entities with additional, emergent information phenomena.
    Originality/value: Though the explication of information science that follows is relatively orthodox and time-bound, the paper offers an imaginative, accessible, yet technically precise way of understanding the field.
    Date
    30. 4.2020 21:03:22
  5. Wu, Z.; Li, R.; Zhou, Z.; Guo, J.; Jiang, J.; Su, X.: A user sensitive subject protection approach for book search service (2020) 0.09
    0.088860005 = product of:
      0.11848001 = sum of:
        0.022552488 = weight(_text_:for in 5617) [ClassicSimilarity], result of:
          0.022552488 = score(doc=5617,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2540624 = fieldWeight in 5617, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5617)
        0.079913534 = weight(_text_:computing in 5617) [ClassicSimilarity], result of:
          0.079913534 = score(doc=5617,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 5617, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5617)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 5617) [ClassicSimilarity], result of:
              0.032027967 = score(doc=5617,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 5617, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5617)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In a digital library, book search is one of the most important information services. However, with the rapid development of network technologies such as cloud computing, the server side of a digital library is increasingly untrusted, and how to prevent the disclosure of users' book-query privacy has become a growing concern. In this article, we propose to construct a group of plausible fake queries for each user book query to cover up the sensitive subjects behind users' queries. First, we propose a basic framework for privacy protection in book search, which requires no change to the book search algorithm running on the server side and no compromise to the accuracy of book search. Second, we present a privacy protection model for book search to formulate the constraints that ideal fake queries should satisfy, that is, (i) the feature similarity, which measures the confusion effect of fake queries on users' queries, and (ii) the privacy exposure, which measures the cover-up effect of fake queries on users' sensitive subjects. Third, we discuss the algorithm implementation for the privacy model. Finally, the effectiveness of our approach is demonstrated by theoretical analysis and experimental evaluation.
    Date
    6. 1.2020 17:22:25
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.2, S.183-195
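
    A toy sketch of the fake-query idea: for each user query, pick decoys that score high on a similarity feature (the confusion effect) while avoiding the sensitive subject (the cover-up effect). Jaccard token overlap stands in for the paper's feature-similarity measure, and the candidate pool and sensitive-term set are invented for illustration:

    def jaccard(a, b):
        a, b = set(a.split()), set(b.split())
        return len(a & b) / len(a | b)

    def pick_fakes(user_query, candidates, sensitive_terms, k=2):
        # low privacy exposure: drop candidates touching the sensitive subject
        safe = [q for q in candidates if not set(q.split()) & sensitive_terms]
        # high feature similarity: prefer candidates that resemble the query
        return sorted(safe, key=lambda q: jaccard(user_query, q), reverse=True)[:k]

    pool = ["history of heart surgery", "history of modern art",
            "guide to tax law", "history of european law"]
    print(pick_fakes("history of heart disease", pool, {"heart", "surgery"}))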
  6. Aspray, W.; Aspray, P.: Does technology really outpace policy, and does it matter? : a primer for technical experts and others (2023) 0.08
    0.083905905 = product of:
      0.111874536 = sum of:
        0.01594702 = weight(_text_:for in 1017) [ClassicSimilarity], result of:
          0.01594702 = score(doc=1017,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17964928 = fieldWeight in 1017, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1017)
        0.079913534 = weight(_text_:computing in 1017) [ClassicSimilarity], result of:
          0.079913534 = score(doc=1017,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 1017, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1017)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 1017) [ClassicSimilarity], result of:
              0.032027967 = score(doc=1017,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 1017, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1017)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This paper reconsiders the outpacing argument: the belief that changes in law and other means of regulation cannot keep pace with recent changes in technology. We focus on information and communication technologies (ICTs) in and of themselves as well as applied in computer science, telecommunications, health, finance, and other domains, but our argument also applies to rapidly developing technological fields such as environmental science, materials science, and genetic engineering. First, we discuss why the outpacing argument is so closely associated with information and computing technologies. We then outline 12 arguments that support the outpacing argument by pointing to particular weaknesses of policy making, using the United States as the primary example. Arguing in the opposite direction, we then present 4 brief and 3 more extended criticisms of the outpacing thesis. The paper's final section responds to calls within the technical community for greater engagement with policy and ethical concerns and reviews the paper's major arguments. While the paper focuses on ICTs and policy making in the United States, our critique of the outpacing argument and our exploration of its complex character are of use to actors in other political contexts and other technical fields.
    Date
    22. 7.2023 13:28:28
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.8, S.885-904
  7. Makri, S.: Information informing design : Information Science research with implications for the design of digital information environments (2020) 0.06
    0.06030063 = product of:
      0.12060126 = sum of:
        0.024705013 = weight(_text_:for in 13) [ClassicSimilarity], result of:
          0.024705013 = score(doc=13,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27831143 = fieldWeight in 13, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=13)
        0.095896244 = weight(_text_:computing in 13) [ClassicSimilarity], result of:
          0.095896244 = score(doc=13,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 13, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=13)
      0.5 = coord(2/4)
    
    Abstract
    This debut curated "virtual special issue" of JASIST is on the theme of "information informing design." It comprises several excellent scholarly research articles previously published in JASIST with important implications for the design of digital information environments. It covers articles that motivate the need for Information Science research to inform design, and articles that have empirically examined information-related concepts such as information behavior, practices, interaction, and experience and, based on their findings, proposed recommendations or posed questions for design. This article argues that, because JASIST exists at the intersection between information, systems, and users, it is natural to want to understand how people engage with information in order to inform design; by doing so, Information Science research can build bridges between Information Science and computing disciplines and make contributions that transcend its discipline boundaries. It argues that Information Science research not only has the potential but also the duty to inform the design of future digital information environments.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.11, S.1402-1412
  8. Fremery, W. de; Buckland, M.K.: Copy theory (2022) 0.06
    0.05899654 = product of:
      0.11799308 = sum of:
        0.022096837 = weight(_text_:for in 487) [ClassicSimilarity], result of:
          0.022096837 = score(doc=487,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 487, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=487)
        0.095896244 = weight(_text_:computing in 487) [ClassicSimilarity], result of:
          0.095896244 = score(doc=487,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 487, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=487)
      0.5 = coord(2/4)
    
    Abstract
    In information science, writing, printing, telecommunication, and digital computing have been central concerns because of their ability to distribute information. Overlooked is the obvious fact that these technologies fashion copies, and the theorizing of copies has been neglected. We may think a copy is the same as what it copies, but no two objects can really be the same: "the same" means similar enough to be an acceptable substitute for some purpose. The differences between usefully similar things are also often important, for example in forensic analysis or inferential processes. Status as a copy is only one form of relationship between objects, but copies are so integral to information science that they demand a theory. Indeed, theorizing copies provides a basis for a more complete and unified view of information science.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.3, S.407-418
  9. Wu, D.: Understanding task preparation and resumption behaviors in cross-device search (2020) 0.05
    0.047930278 = product of:
      0.095860556 = sum of:
        0.01594702 = weight(_text_:for in 5943) [ClassicSimilarity], result of:
          0.01594702 = score(doc=5943,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17964928 = fieldWeight in 5943, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5943)
        0.079913534 = weight(_text_:computing in 5943) [ClassicSimilarity], result of:
          0.079913534 = score(doc=5943,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 5943, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5943)
      0.5 = coord(2/4)
    
    Abstract
    It is now common for individuals to own multiple computing devices, such as laptops, smartphones, and tablets. This multidevice environment increases the popularity of cross-device search activities. Cross-device search can be seen as a special case of cross-session search, and previous studies regarded re-finding behaviors in cross-session search as task resumption. Building on this, this article proposes two phases of cross-device search, task preparation and task resumption, and explores their features through modeling. A within-subject user experiment was designed to collect data. Four groups of features were captured from specific behaviors of querying, clicking, gazing, and cognition. We tested three machine-learning methods and found that the C5.0 decision tree performed best. Five features were included in the task preparation behavior model, and three in the task resumption behavior model. The difference and relationship between task preparation and task resumption were investigated by comparing their behavioral features. We conclude that information need remains blurred during task preparation and becomes clear during task resumption; these changing states of information need suggest an exploratory process in cross-device search. We also identify some implications for search engine designers.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.8, S.887-901
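
    The classification step can be illustrated as follows. C5.0 itself is not available in scikit-learn, so this sketch substitutes DecisionTreeClassifier and synthetic data; the three feature columns are placeholders, not the study's actual querying/clicking/gazing/cognition features:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 200
    # columns: query length, clicks per query, gaze time in seconds
    prep = rng.normal([3.0, 1.0, 4.0], 1.0, size=(n, 3))    # task preparation
    resume = rng.normal([5.0, 3.0, 2.0], 1.0, size=(n, 3))  # task resumption
    X = np.vstack([prep, resume])
    y = np.array([0] * n + [1] * n)  # 0 = preparation, 1 = resumption

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.predict([[4.8, 2.7, 2.1]]))  # likely [1], i.e. task resumption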
  10. Cabanac, G.; Labbé, C.: Prevalence of nonsensical algorithmically generated papers in the scientific literature (2021) 0.05
    0.047930278 = product of:
      0.095860556 = sum of:
        0.01594702 = weight(_text_:for in 410) [ClassicSimilarity], result of:
          0.01594702 = score(doc=410,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17964928 = fieldWeight in 410, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=410)
        0.079913534 = weight(_text_:computing in 410) [ClassicSimilarity], result of:
          0.079913534 = score(doc=410,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 410, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=410)
      0.5 = coord(2/4)
    
    Abstract
    In 2014 leading publishers withdrew more than 120 nonsensical publications automatically generated with the SCIgen program. Casual observations suggested that similar problematic papers are still being published and sold, without follow-up retractions, but no systematic screening had been performed and the prevalence of such nonsensical publications in the scientific literature was unknown. Our contribution is twofold. First, we designed a detector that combs the scientific literature for grammar-based computer-generated papers; applied to SCIgen, it has an 83.6% precision. Second, we performed a scientometric study of the 243 detected SCIgen papers from 19 publishers. We estimate the prevalence of SCIgen papers to be 75 per million papers in Information and Computing Sciences. Only 19% of the 243 problematic papers were dealt with: formal retraction (12) or silent removal (34). Publishers still serve and sometimes sell the remaining 197 papers without any caveat. We found evidence of citation manipulation via edited SCIgen bibliographies. This work reveals metric gaming up to the point of absurdity: fraudsters publish nonsensical algorithmically generated papers featuring genuine references. It stresses the need to screen papers for nonsense before peer review and to chase citation manipulation in published papers. Overall, this is yet another illustration of the harmful effects of the pressure to publish or perish.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.12, S.1461-1476
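
    A naive stand-in for the kind of detector the abstract describes: count occurrences of stock phrases characteristic of SCIgen's fixed grammar. The phrase list and threshold below are illustrative placeholders, not the authors' actual detector:

    TEMPLATE_PHRASES = [  # illustrative stand-ins for grammar fingerprints
        "many physicists would agree that",
        "the implications of this have been far-reaching",
        "we concentrate our efforts on disproving",
    ]

    def looks_generated(text, threshold=2):
        hits = sum(phrase in text.lower() for phrase in TEMPLATE_PHRASES)
        return hits >= threshold

    paper = ("Many physicists would agree that, had it not been for ..., "
             "we concentrate our efforts on disproving that ...")
    print(looks_generated(paper))  # -> True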
  11. Luhmann, J.; Burghardt, M.: Digital humanities - A discipline in its own right? : an analysis of the role and position of digital humanities in the academic landscape (2022) 0.05
    0.04646711 = product of:
      0.09293422 = sum of:
        0.013020686 = weight(_text_:for in 460) [ClassicSimilarity], result of:
          0.013020686 = score(doc=460,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14668301 = fieldWeight in 460, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=460)
        0.079913534 = weight(_text_:computing in 460) [ClassicSimilarity], result of:
          0.079913534 = score(doc=460,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=460)
      0.5 = coord(2/4)
    
    Abstract
    Although digital humanities (DH) has received a lot of attention in recent years, its status as "a discipline in its own right" (Schreibman et al., A companion to digital humanities (pp. xxiii-xxvii). Blackwell; 2004) and its position in the overall academic landscape are still being negotiated. While there are countless essays and opinion pieces that debate the status of DH, little research has been dedicated to exploring the field in a systematic and empirical way (Poole, Journal of Documentation; 2017:73). This study aims to address this research gap by comparing articles published over the past three decades in three established English-language DH journals (Computers and the Humanities, Literary and Linguistic Computing, Digital Humanities Quarterly) with research articles from journals in 15 other academic disciplines (corpus size: 34,041 articles; 299 million tokens). As a method of analysis, we use latent Dirichlet allocation topic modeling, combined with recent approaches that aggregate topic models by means of hierarchical agglomerative clustering. Our findings indicate that DH is simultaneously a discipline in its own right and a highly interdisciplinary field, with many connecting factors to neighboring disciplines, first and foremost computational linguistics and information science. Detailed descriptive analyses shed some light on the diachronic development of DH and also highlight topics that are characteristic of DH.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.2, S.148-171
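
    The study's pipeline, LDA topic modeling followed by hierarchical agglomerative clustering of the resulting topics, can be sketched on a toy corpus; the documents and parameters below are illustrative, whereas the real study processed 34,041 articles:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.cluster import AgglomerativeClustering

    docs = [
        "topic model corpus linguistics text",
        "digital edition markup text encoding",
        "library catalog metadata search",
        "neural network language model training",
    ]
    X = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

    # cluster the topics themselves by their (unnormalized) topic-word weights
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(lda.components_)
    print(labels)  # e.g. [0 1 0]: which topics group together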
  12. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.05
    0.045357857 = product of:
      0.090715714 = sum of:
        0.07509089 = product of:
          0.22527267 = sum of:
            0.22527267 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.22527267 = score(doc=862,freq=2.0), product of:
                0.40082818 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.047278564 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.015624823 = weight(_text_:for in 862) [ClassicSimilarity], result of:
          0.015624823 = score(doc=862,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 862, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.5 = coord(2/4)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summarization and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019 and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
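
    Two surface statistics of the kind such a readability metric builds on, mean sentence length and type-token ratio, sketched as generic stand-ins (the paper's actual metric and grammatical set are not reproduced here):

    import re

    def surface_stats(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[a-z']+", text.lower())
        mean_len = len(words) / len(sentences)  # mean sentence length
        ttr = len(set(words)) / len(words)      # type-token ratio (lexical variety)
        return mean_len, ttr

    print(surface_stats("I propose to consider the question. Can machines think?"))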
  13. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.04
    0.03586311 = product of:
      0.07172622 = sum of:
        0.015786743 = weight(_text_:for in 923) [ClassicSimilarity], result of:
          0.015786743 = score(doc=923,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17784369 = fieldWeight in 923, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.02734375 = fieldNorm(doc=923)
        0.055939477 = weight(_text_:computing in 923) [ClassicSimilarity], result of:
          0.055939477 = score(doc=923,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.21390139 = fieldWeight in 923, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.02734375 = fieldNorm(doc=923)
      0.5 = coord(2/4)
    
    Abstract
    They might appear intelligent, but LLMs are nothing of the sort. They don't understand the meanings of the words they are using, nor the concepts expressed within the sentences they create. When asked how to bring a cow back to life, for example, earlier versions of ChatGPT, which ran on a souped-up version of GPT-3, would confidently provide a list of instructions. So-called hallucinations like this happen because language models have no concept of what a "cow" is or that "death" is a non-reversible state of being. LLMs do not have minds that can think about objects in the world and how they relate to each other. All they "know" is how likely it is that some sets of words will follow other sets of words, having calculated those probabilities from their training data. To make sense of all this, I spoke with Gary Marcus, an emeritus professor of psychology and neural science at New York University, for "Babbage", our science and technology podcast. Last year, as the world was transfixed by the sudden appearance of ChatGPT, he made some fascinating predictions about GPT-4.
    He doesn't dismiss the potential of LLMs to become useful assistants in all sorts of ways; Google and Microsoft have already announced that they will be integrating LLMs into their search and office productivity software. But he talked me through some of his criticisms of the technology's apparent capabilities. At the heart of Dr Marcus's thoughtful critique is an attempt to put LLMs into proper context. Deep learning, the underlying technology that makes LLMs work, is only one piece of the puzzle in the quest for machine intelligence. To reach the level of artificial general intelligence (AGI) that many tech companies strive for, i.e. machines that can plan, reason and solve problems in the way human brains can, they will need to deploy a suite of other AI techniques. These include, for example, the kind of "symbolic AI" that was popular before artificial neural networks and deep learning became all the rage.
    People use symbols to think about the world: if I say the words "cat", "house" or "aeroplane", you know instantly what I mean. Symbols can also be used to describe the way things are behaving (running, falling, flying) or to represent how things should behave in relation to each other (a "+" means add the numbers before and after). Symbolic AI is a way to embed this human knowledge and reasoning into computer systems. Though the idea has been around for decades, it fell by the wayside a few years ago as deep learning, buoyed by the sudden easy availability of lots of training data and cheap computing power, became more fashionable. In the near future at least, there's no doubt people will find LLMs useful. But whether they represent a critical step on the path towards AGI, or rather just an intriguing detour, remains to be seen.
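
    The claim that LLMs only "know" how likely some words are to follow others can be made concrete with the smallest possible language model, a bigram counter; the training corpus below is illustrative:

    from collections import Counter, defaultdict

    corpus = "the cow eats grass . the cow sleeps . the cat eats fish .".split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1  # raw next-word counts per preceding word

    def p_next(prev, word):
        # estimated probability that `word` follows `prev`
        return counts[prev][word] / sum(counts[prev].values())

    print(p_next("cow", "eats"))  # 0.5: after "cow", half the continuations are "eats"
    print(p_next("the", "cat"))   # ~0.33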
  14. Falavarjani, S.A.M.; Jovanovic, J.; Fani, H.; Ghorbani, A.A.; Noorian, Z.; Bagheri, E.: On the causal relation between real world activities and emotional expressions of social media users (2021) 0.04
    0.035648223 = product of:
      0.071296446 = sum of:
        0.0073656123 = weight(_text_:for in 243) [ClassicSimilarity], result of:
          0.0073656123 = score(doc=243,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.08297644 = fieldWeight in 243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=243)
        0.06393083 = weight(_text_:computing in 243) [ClassicSimilarity], result of:
          0.06393083 = score(doc=243,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.24445872 = fieldWeight in 243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=243)
      0.5 = coord(2/4)
    
    Abstract
    Social interactions through online social media have become a daily routine of many, and the number of those whose real-world (offline) and online lives have become intertwined is continuously growing. As such, the interplay of individuals' online and offline activities has been the subject of numerous research studies, the majority of which explored the impact of people's online actions on their offline activities. The opposite direction of impact, the effect of real-world activities on online actions, has received less attention. To contribute to this latter line of research, this paper reports on a quasi-experimental study that examined the presence of causal relations between real-world activities of online social media users and their online emotional expressions. To this end, we collected a large dataset (over 17K users) from Twitter and Foursquare and systematically aligned user content on the two social media platforms. Users' Foursquare check-ins provided information about their offline activities, whereas the users' expressions of emotions and moods were derived from their Twitter posts. Since our study was based on a quasi-experimental design, we applied an innovative model of computing propensity scores to minimize the impact of covariates. Our main findings can be summarized as follows: (a) users' offline activities do impact their affective expressions, both of emotions and moods, as evidenced in their online shared textual content; (b) the impact depends on the type of offline activity and on whether the user embarks on or abandons the activity. Our findings can be used to devise a personalized recommendation mechanism to help people better manage their online emotional expressions.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.6, S.723-743
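
    Propensity scores are conventionally estimated by regressing treatment status on covariates; the paper applies its own variant of this idea. A minimal sketch with logistic regression on synthetic data (the covariate names are invented):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 500
    covariates = rng.normal(size=(n, 3))  # e.g. posting rate, followers, tenure
    # "treatment" here: whether the user engaged in a given offline activity
    treated = (covariates @ [1.0, 0.5, -0.5] + rng.normal(size=n) > 0).astype(int)

    model = LogisticRegression().fit(covariates, treated)
    propensity = model.predict_proba(covariates)[:, 1]  # P(treated | covariates)
    print(propensity[:3].round(2))  # scores used to match treated and control users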
  15. Manley, S.: Letters to the editor and the race for publication metrics (2022) 0.03
    0.025621045 = product of:
      0.05124209 = sum of:
        0.028822517 = weight(_text_:for in 547) [ClassicSimilarity], result of:
          0.028822517 = score(doc=547,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.3246967 = fieldWeight in 547, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=547)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 547) [ClassicSimilarity], result of:
              0.04483915 = score(doc=547,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 547, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=547)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article discusses how letters to the editor boost publishing metrics for journals and authors, and then examines letters published since 2015 in six elite journals, including the Journal of the Association for Information Science and Technology. The initial findings identify some potentially anomalous use of letters and unusual self-citation patterns. The article proposes that Clarivate Analytics consider slightly reconfiguring the Journal Impact Factor to more fairly account for letters and that journals transparently explain their letter submission policies.
    Date
    6. 4.2022 19:22:26
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.5, S.702-707
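
    The metric mechanics behind the article's question, with invented numbers: the Journal Impact Factor counts citations to all items, letters included, in its numerator, but only "citable items" (articles and reviews) in its denominator, so letters can lift the score at no cost to the journal:

    cites_to_articles = 900  # citations in year Y to articles from Y-1 and Y-2
    cites_to_letters = 100   # citations to letters over the same window
    citable_items = 400      # articles and reviews only; letters are excluded

    jif = (cites_to_articles + cites_to_letters) / citable_items  # 2.5
    without_letters = cites_to_articles / citable_items           # 2.25
    print(jif, without_letters)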
  16. Ekstrand, M.D.; Wright, K.L.; Pera, M.S.: Enhancing classroom instruction with online news (2020) 0.02
    0.024605174 = product of:
      0.049210347 = sum of:
        0.033196364 = weight(_text_:for in 5844) [ClassicSimilarity], result of:
          0.033196364 = score(doc=5844,freq=26.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.37396973 = fieldWeight in 5844, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5844)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 5844) [ClassicSimilarity], result of:
              0.032027967 = score(doc=5844,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 5844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5844)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: This paper investigates how school teachers look for informational texts for their classrooms. Access to current, varied and authentic informational texts improves learning outcomes for K-12 students, but many teachers lack resources to expand and update readings. The Web offers freely available resources, but finding suitable ones is time-consuming. This research lays the groundwork for building tools to ease that burden.
    Design/methodology/approach: This paper reports qualitative findings from a study in two stages: (1) a set of semistructured interviews, based on the critical incident technique, eliciting teachers' information-seeking practices and challenges; and (2) observations of teachers using a prototype teaching-oriented news search tool under a think-aloud protocol.
    Findings: Teachers articulated different objectives and ways of using readings in their classrooms; goals and self-reported practices varied by experience level. Teachers struggled to formulate queries that are likely to return readings on specific course topics, instead searching directly for abstract topics. Experience differences did not translate into observable differences in search skill or success in the lab study.
    Originality/value: There is limited work on teachers' information-seeking practices, particularly on how teachers look for texts for classroom use. This paper describes how teachers look for information in this context, setting the stage for future development and research on how to support this use case. Understanding and supporting teachers looking for information is a rich area for future research, due to the complexity of the information need and the fact that teachers are not looking for information for themselves.
    Date
    20. 1.2015 18:30:22
  17. Bullard, J.; Dierking, A.; Grundner, A.: Centring LGBT2QIA+ subjects in knowledge organization systems (2020) 0.02
    0.023139883 = product of:
      0.046279766 = sum of:
        0.027062986 = weight(_text_:for in 5996) [ClassicSimilarity], result of:
          0.027062986 = score(doc=5996,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.3048749 = fieldWeight in 5996, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=5996)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 5996) [ClassicSimilarity], result of:
              0.038433556 = score(doc=5996,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 5996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5996)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This paper contains a report of two interdependent knowledge organization (KO) projects for an LGBT2QIA+ library. The authors, in the context of volunteer library work for an independent library, redesigned the classification system and subject cataloguing guidelines to centre LGBT2QIA+ subjects. We discuss the priorities of creating and maintaining knowledge organization systems for a historically marginalized community and address the challenge that queer subjectivity poses to the goals of KO. The classification system features a focus on identity and physically reorganizes the library space in a way that accounts for the multiple and overlapping labels that constitute the currently articulated boundaries of this community. The subject heading system focuses on making visible topics and elements of identity made invisible by universal systems and by the newly implemented classification system. We discuss how this project may inform KO for other marginalized subjects, particularly through process and documentation that prioritizes transparency and the acceptance of an unfinished endpoint for queer KO.
    Date
    6.10.2020 21:22:33
  18. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.02
    0.022875603 = product of:
      0.045751207 = sum of:
        0.03294002 = weight(_text_:for in 566) [ClassicSimilarity], result of:
          0.03294002 = score(doc=566,freq=40.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.37108192 = fieldWeight in 566, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=566)
        0.012811186 = product of:
          0.025622372 = sum of:
            0.025622372 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
              0.025622372 = score(doc=566,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.15476047 = fieldWeight in 566, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=566)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Knowledge Architectures reviews traditional approaches to managing information and explains why they need to adapt to support 21st-century information management and discovery. Exploring the rapidly changing environment in which information is being managed and accessed, the book considers how to use knowledge architectures, the basic structures and designs that underlie all of the parts of an effective information system, to best advantage. Drawing on 40 years of work with a variety of organizations, Bedford explains that failure to understand the structure behind any given system can be the difference between an effective solution and a significant and costly failure. Demonstrating that the information user environment has shifted significantly in the past 20 years, the book explains that end users now expect designs and behaviors that are much closer to the way they think, work, and act. Acknowledging how important it is that those responsible for developing an information or knowledge management system understand knowledge structures, the book goes beyond a traditional library science perspective and uses case studies to help translate the abstract and theoretical to the practical and concrete. Explaining the structures in a simple and intuitive way and providing examples that clearly illustrate the challenges faced by a range of different organizations, Knowledge Architectures is essential reading for those studying and working in library and information science, data science, systems development, database design, and search system architecture and engineering.
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
  19. Candela, G.: An automatic data quality approach to assess semantic data from cultural heritage institutions (2023) 0.02
    0.0223727 = product of:
      0.0447454 = sum of:
        0.022325827 = weight(_text_:for in 997) [ClassicSimilarity], result of:
          0.022325827 = score(doc=997,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 997, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=997)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
              0.04483915 = score(doc=997,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=997)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In recent years, cultural heritage institutions have been exploring the benefits of applying Linked Open Data to their catalogs and digital materials. Innovative and creative methods have emerged to publish and reuse digital contents to promote computational access, such as the concepts of Labs and Collections as Data. Data quality has become a requirement for researchers and training methods based on artificial intelligence and machine learning. This article explores how the quality of Linked Open Data made available by cultural heritage institutions can be automatically assessed. The results obtained can be useful for other institutions who wish to publish and assess their collections.
    Date
    22. 6.2023 18:23:31
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.866-878
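
    One concrete automatic check of the kind the article studies: the share of subjects in a Linked Open Data graph that carry an rdfs:label, sketched with rdflib on a tiny invented Turtle snippet (a real assessment would parse an institution's published dump):

    from rdflib import Graph
    from rdflib.namespace import RDFS

    data = """
    @prefix ex: <http://example.org/> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    ex:book1 rdfs:label "Don Quixote" ; ex:year "1605" .
    ex:book2 ex:year "1615" .
    """
    g = Graph().parse(data=data, format="turtle")
    subjects = set(g.subjects())
    labeled = {s for s in subjects if g.value(s, RDFS.label) is not None}
    print(f"label completeness: {len(labeled)}/{len(subjects)}")  # 1/2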
  20. Kuehn, E.F.: The information ecosystem concept in information literacy : a theoretical approach and definition (2023) 0.02
    0.020656807 = product of:
      0.041313615 = sum of:
        0.022096837 = weight(_text_:for in 919) [ClassicSimilarity], result of:
          0.022096837 = score(doc=919,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 919, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=919)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 919) [ClassicSimilarity], result of:
              0.038433556 = score(doc=919,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 919, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=919)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Despite the prominence of the concept of the information ecosystem (hereafter IE) in information literacy documents and literature, it is under-theorized. This article proposes a general definition of IE for information literacy. After reviewing the current use of the IE concept in the Association of College and Research Libraries (ACRL) Framework for Information Literacy and other information literacy sources, existing definitions of IE and of similar concepts from other fields (e.g., "evidence ecosystems") are examined. These form the basis of the definition of IE proposed in the article for the field of information literacy: "all structures, entities, and agents related to the flow of semantic information relevant to a research domain, as well as the information itself."
    Date
    22. 3.2023 11:52:50
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.434-443

Languages

  • e 748
  • d 42
  • pt 3

Types

  • a 757
  • el 75
  • m 15
  • p 11
  • s 4
  • x 2
  • A 1
  • EL 1

Subjects