Search (681 results, page 1 of 35)

  • year_i:[2020 TO 2030}
  1. Lund, B.D.; Wang, T.; Mannuru, N.R.; Nie, B.; Shimray, S.; Wang, Z.: ChatGPT and a new academic reality : artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing (2023) 0.05
    0.051889688 = product of:
      0.103779376 = sum of:
        0.06671549 = weight(_text_:processing in 943) [ClassicSimilarity], result of:
          0.06671549 = score(doc=943,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3795138 = fieldWeight in 943, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=943)
        0.03706389 = product of:
          0.055595834 = sum of:
            0.019974224 = weight(_text_:science in 943) [ClassicSimilarity], result of:
              0.019974224 = score(doc=943,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 943, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=943)
            0.03562161 = weight(_text_:29 in 943) [ClassicSimilarity], result of:
              0.03562161 = score(doc=943,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.23319192 = fieldWeight in 943, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=943)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    This article discusses OpenAI's ChatGPT, a generative pre-trained transformer, which uses natural language processing to fulfill text-based user requests (i.e., a "chatbot"). The history and principles behind ChatGPT and similar models are discussed. This technology is then discussed in relation to its potential impact on academia and scholarly research and publishing. ChatGPT is seen as a potential model for the automated preparation of essays and other types of scholarly manuscripts. Potential ethical issues that could arise with the emergence of large language models like GPT-3, the underlying technology behind ChatGPT, and its usage by academics and researchers, are discussed and situated within the context of broader advancements in artificial intelligence, machine learning, and natural language processing for research and scholarly publishing.
    Date
    19. 4.2023 19:29:44
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.5, S.570-581
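    Note on the score breakdowns: the tree printed with each record is Lucene ClassicSimilarity (TF-IDF) "explain" output. Every matching term contributes queryWeight (idf × queryNorm) times fieldWeight (√tf × idf × fieldNorm); the contributions are summed and scaled by coord factors for partially matched clauses. A minimal sketch, reusing only the numbers printed under record 1 above (it mirrors the arithmetic of the explain tree, not the full Lucene implementation):

      import math

      def term_weight(tf, idf, field_norm, query_norm):
          # ClassicSimilarity: weight = queryWeight * fieldWeight
          query_weight = idf * query_norm                  # idf(t) * queryNorm
          field_weight = math.sqrt(tf) * idf * field_norm  # sqrt(tf) * idf(t) * fieldNorm
          return query_weight * field_weight

      QUERY_NORM = 0.043425296   # queryNorm from the explain output above
      FIELD_NORM = 0.046875      # fieldNorm of the matched field in doc 943

      processing = term_weight(tf=4.0, idf=4.048147, field_norm=FIELD_NORM, query_norm=QUERY_NORM)
      science    = term_weight(tf=2.0, idf=2.6341193, field_norm=FIELD_NORM, query_norm=QUERY_NORM)
      term_29    = term_weight(tf=2.0, idf=3.5176873, field_norm=FIELD_NORM, query_norm=QUERY_NORM)

      inner = (science + term_29) * (2 / 3)    # coord(2/3): 2 of 3 nested clauses matched
      score = (processing + inner) * (2 / 4)   # coord(2/4): 2 of 4 top-level clauses matched
      print(round(score, 9))                   # ~0.051889688, the value reported for record 1

    The same decomposition applies to every record below; only the term statistics and coord factors change.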
  2. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.04
    0.037308738 = product of:
      0.074617475 = sum of:
        0.03931248 = weight(_text_:processing in 1012) [ClassicSimilarity], result of:
          0.03931248 = score(doc=1012,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 1012, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.035304993 = product of:
          0.05295749 = sum of:
            0.023539849 = weight(_text_:science in 1012) [ClassicSimilarity], result of:
              0.023539849 = score(doc=1012,freq=4.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.20579056 = fieldWeight in 1012, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
            0.029417641 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.029417641 = score(doc=1012,freq=2.0), product of:
                0.15206799 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043425296 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has been emerging. However, these statistically important phrases are contributing increasingly less to the related tasks because the end-to-end learning mechanism enables models to learn the important semantic information of the text directly. Similarly, keyphrases are of little help for readers to quickly grasp the paper's main idea because the relationship between the keyphrase and the paper is not explicit to readers. Therefore, we propose to generate keyphrases with specific functions for readers to bridge the semantic gap between them and the information producers, and verify the effectiveness of the keyphrase function for assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (the CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented based on Transformer, BART, and T5, respectively. For the Computer Science domain, the Macro-avgs of , , and on the Paper with Code dataset are up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.759-774
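    Record 2's CKPG framework conditions a sequence-to-sequence model on a "keyphrase function" control code prepended to the input. A minimal sketch of that conditioning pattern only, assuming a hypothetical fine-tuned T5 checkpoint and made-up control tokens; the paper's actual label set, checkpoints, and training procedure are not reproduced here:

      from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

      # Hypothetical control codes standing in for the paper's keyphrase functions.
      CONTROL_CODES = ["<method>", "<task>", "<dataset>"]

      tokenizer = AutoTokenizer.from_pretrained("t5-small")   # stand-in; a fine-tuned CKPG-style checkpoint is assumed
      model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
      tokenizer.add_tokens(CONTROL_CODES)                     # register the control codes as extra tokens
      model.resize_token_embeddings(len(tokenizer))           # make room for the new tokens

      abstract = "We propose a controllable keyphrase generation framework ..."
      inputs = tokenizer("<method> " + abstract, return_tensors="pt", truncation=True)
      output = model.generate(**inputs, max_new_tokens=16, num_beams=4)
      print(tokenizer.decode(output[0], skip_special_tokens=True))  # keyphrase for the requested category

    Swapping the control token steers which category of keyphrase the decoder produces, which is the mechanism the abstract describes.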
  3. Wang, X.; Zhang, M.; Fan, W.; Zhao, K.: Understanding the spread of COVID-19 misinformation on social media : the effects of topics and a political leader's nudge (2022) 0.04
    0.03668678 = product of:
      0.07337356 = sum of:
        0.06671549 = weight(_text_:processing in 549) [ClassicSimilarity], result of:
          0.06671549 = score(doc=549,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3795138 = fieldWeight in 549, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=549)
        0.006658075 = product of:
          0.019974224 = sum of:
            0.019974224 = weight(_text_:science in 549) [ClassicSimilarity], result of:
              0.019974224 = score(doc=549,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 549, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=549)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The spread of misinformation on social media has become a major societal issue during recent years. In this work, we used the ongoing COVID-19 pandemic as a case study to systematically investigate factors associated with the spread of multi-topic misinformation related to one event on social media based on the heuristic-systematic model. Among factors related to systematic processing of information, we discovered that the topics of a misinformation story matter, with conspiracy theories being the most likely to be retweeted. As for factors related to heuristic processing of information, such as when citizens look up to their leaders during such a crisis, our results demonstrated that behaviors of a political leader, former US President Donald J. Trump, may have nudged people's sharing of COVID-19 misinformation. Outcomes of this study help social media platforms and users better understand and prevent the spread of misinformation on social media.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.5, S.726-737
  4. Butlin, P.; Long, R.; Elmoznino, E.; Bengio, Y.; Birch, J.; Constant, A.; Deane, G.; Fleming, S.M.; Frith, C.; Ji, X.; Kanai, R.; Klein, C.; Lindsay, G.; Michel, M.; Mudrik, L.; Peters, M.A.K.; Schwitzgebel, E.; Simon, J.; VanRullen, R.: Consciousness in artificial intelligence : insights from the science of consciousness (2023) 0.04
    0.03668678 = product of:
      0.07337356 = sum of:
        0.06671549 = weight(_text_:processing in 1214) [ClassicSimilarity], result of:
          0.06671549 = score(doc=1214,freq=4.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3795138 = fieldWeight in 1214, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=1214)
        0.006658075 = product of:
          0.019974224 = sum of:
            0.019974224 = weight(_text_:science in 1214) [ClassicSimilarity], result of:
              0.019974224 = score(doc=1214,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 1214, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1214)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
  5. Laparra, E.; Binford-Walsh, A.; Emerson, K.; Miller, M.L.; López-Hoffman, L.; Currim, F.; Bethard, S.: Addressing structural hurdles for metadata extraction from environmental impact statements (2023) 0.04
    0.03509953 = product of:
      0.07019906 = sum of:
        0.03931248 = weight(_text_:processing in 1042) [ClassicSimilarity], result of:
          0.03931248 = score(doc=1042,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 1042, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1042)
        0.030886576 = product of:
          0.046329863 = sum of:
            0.016645188 = weight(_text_:science in 1042) [ClassicSimilarity], result of:
              0.016645188 = score(doc=1042,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.1455159 = fieldWeight in 1042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1042)
            0.029684676 = weight(_text_:29 in 1042) [ClassicSimilarity], result of:
              0.029684676 = score(doc=1042,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.19432661 = fieldWeight in 1042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1042)
          0.6666667 = coord(2/3)
      0.5 = coord(2/4)
    
    Abstract
    Natural language processing techniques can be used to analyze the linguistic content of a document to extract missing pieces of metadata. However, accurate metadata extraction may not depend solely on the linguistics, but also on structural problems such as extremely large documents, unordered multi-file documents, and inconsistency in manually labeled metadata. In this work, we start from two standard machine learning solutions to extract pieces of metadata from Environmental Impact Statements, environmental policy documents that are regularly produced under the US National Environmental Policy Act of 1969. We present a series of experiments where we evaluate how these standard approaches are affected by different issues derived from real-world data. We find that metadata extraction can be strongly influenced by nonlinguistic factors such as document length and volume ordering and that the standard machine learning solutions often do not scale well to long documents. We demonstrate how such solutions can be better adapted to these scenarios, and conclude with suggestions for other NLP practitioners cataloging large document collections.
    Date
    29. 8.2023 19:21:01
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.9, S.1124-1139
  6. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.033685315 = product of:
      0.06737063 = sum of:
        0.05747574 = product of:
          0.1724272 = sum of:
            0.1724272 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.1724272 = score(doc=5669,freq=2.0), product of:
                0.36816013 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043425296 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.009894893 = product of:
          0.029684676 = sum of:
            0.029684676 = weight(_text_:29 in 5669) [ClassicSimilarity], result of:
              0.029684676 = score(doc=5669,freq=2.0), product of:
                0.15275662 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.043425296 = queryNorm
                0.19432661 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  7. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.031512067 = product of:
      0.06302413 = sum of:
        0.05747574 = product of:
          0.1724272 = sum of:
            0.1724272 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.1724272 = score(doc=1000,freq=2.0), product of:
                0.36816013 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043425296 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
        0.005548396 = product of:
          0.016645188 = sum of:
            0.016645188 = weight(_text_:science in 1000) [ClassicSimilarity], result of:
              0.016645188 = score(doc=1000,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.1455159 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the accompanying presentation at: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
  8. Greenberg, J.; Zhao, X.; Monselise, M.; Grabus, S.; Boone, J.: Knowledge organization systems : a network for AI with helping interdisciplinary vocabulary engineering (2021) 0.03
    0.03140261 = product of:
      0.06280522 = sum of:
        0.05503747 = weight(_text_:processing in 719) [ClassicSimilarity], result of:
          0.05503747 = score(doc=719,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.3130829 = fieldWeight in 719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0546875 = fieldNorm(doc=719)
        0.0077677546 = product of:
          0.023303263 = sum of:
            0.023303263 = weight(_text_:science in 719) [ClassicSimilarity], result of:
              0.023303263 = score(doc=719,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.20372227 = fieldWeight in 719, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=719)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Knowledge Organization Systems (KOS) as networks of knowledge have the potential to inform AI operations. This paper explores natural language processing and machine learning in the context of KOS and Helping Interdisciplinary Vocabulary Engineering (HIVE) technology. The paper presents three use cases: HIVE and Historical Knowledge Networks, HIVE for Materials Science (HIVE-4-MAT), and Using HIVE to Enhance and Explore Medical Ontologies. The background section reviews AI foundations, while the use cases provide a frame of reference for discussing current progress and implications of connecting KOS to AI in digital resource collections.
  9. Hjoerland, B.: Science, Part I : basic conceptions of science and the scientific method (2021) 0.03
    0.03109455 = product of:
      0.0621891 = sum of:
        0.03931248 = weight(_text_:processing in 594) [ClassicSimilarity], result of:
          0.03931248 = score(doc=594,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 594, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=594)
        0.02287662 = product of:
          0.06862986 = sum of:
            0.06862986 = weight(_text_:science in 594) [ClassicSimilarity], result of:
              0.06862986 = score(doc=594,freq=34.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.59997743 = fieldWeight in 594, product of:
                  5.8309517 = tf(freq=34.0), with freq of:
                    34.0 = termFreq=34.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=594)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    This article is the first in a trilogy about the concept "science". Section 1 considers the historical development of the meaning of the term science and shows its close relation to the terms "knowledge" and "philosophy". Section 2 presents four historic phases in the basic conceptualizations of science: (1) science as representing absolutely certain knowledge based on deductive proof; (2) science as representing absolutely certain knowledge based on "the scientific method"; (3) science as representing fallible knowledge based on "the scientific method"; (4) science without a belief in "the scientific method" as constitutive, hence the question about the nature of science becomes dramatic. Section 3 presents four basic understandings of the scientific method: rationalism, which gives priority to a priori thinking; empiricism, which gives priority to the collection, description, and processing of data in a neutral way; historicism, which gives priority to the interpretation of data in the light of "paradigm"; and pragmatism, which emphasizes the analysis of the purposes, consequences, and interests of knowledge. The second article in the trilogy focuses on different fields studying science, while the final article presents further developments in the concept of science and the general conclusion. Overall, the trilogy illuminates the most important tensions in different conceptualizations of science, argues for the role of information science and knowledge organization in the study of science, and suggests how "science" should be understood as an object of research in these fields.
    Footnote
    Contribution to a special issue on 'Science and knowledge organization' with longer overviews of important concepts of knowledge organization.
  10. Urs, S.R.; Minhaj, M.: Evolution of data science and its education in iSchools : an impressionistic study using curriculum analysis (2023) 0.03
    0.026996078 = product of:
      0.053992156 = sum of:
        0.03931248 = weight(_text_:processing in 960) [ClassicSimilarity], result of:
          0.03931248 = score(doc=960,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 960, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=960)
        0.014679677 = product of:
          0.04403903 = sum of:
            0.04403903 = weight(_text_:science in 960) [ClassicSimilarity], result of:
              0.04403903 = score(doc=960,freq=14.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.38499892 = fieldWeight in 960, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=960)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Data Science (DS) has emerged from the shadows of its parents, statistics and computer science, into an independent field since its origin nearly six decades ago. Its evolution and education have taken many sharp turns. We present an impressionistic study of the evolution of DS anchored to Kuhn's four stages of paradigm shifts. First, we construct the landscape of DS based on curriculum analysis of the 32 iSchools across the world offering graduate-level DS programs. Second, we paint the "field" as it emerges from the word frequency patterns, ranking, and clustering of course titles based on text mining. Third, we map the curriculum to the landscape of DS and project the same onto the Edison Data Science Framework (2017) and ACM Data Science Knowledge Areas (2021). Our study shows that the DS programs of iSchools align well with the field and correspond to the Knowledge Areas and skillsets. iSchools' DS curricula exhibit a bias toward "data visualization" along with machine learning, data mining, natural language processing, and artificial intelligence; go light on statistics; are slanted toward ontologies and health informatics; and show surprisingly minimal thrust toward eScience/research data management, which we believe would add a distinctive iSchool flavor to DS.
    Footnote
    Contribution to a special issue on "Data Science in the iField".
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.6, S.606-622
  11. Tramullas, J.; Garrido-Picazo, P.; Sánchez-Casabón, A.I.: Use of Wikipedia categories on information retrieval research : a brief review (2020) 0.03
    0.026916523 = product of:
      0.053833045 = sum of:
        0.04717497 = weight(_text_:processing in 5365) [ClassicSimilarity], result of:
          0.04717497 = score(doc=5365,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.26835677 = fieldWeight in 5365, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=5365)
        0.006658075 = product of:
          0.019974224 = sum of:
            0.019974224 = weight(_text_:science in 5365) [ClassicSimilarity], result of:
              0.019974224 = score(doc=5365,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 5365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5365)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Wikipedia categories, a classification scheme built for organizing and describing Wikipedia articles, are being applied in computer science research. This paper adopts a systematic literature review approach in order to identify different approaches and uses of Wikipedia categories in information retrieval research. Several types of work are identified, depending on whether they study the intrinsic structure of the categories or use them as a tool for the processing and analysis of documentary corpora other than Wikipedia. Information retrieval is identified as one of the major areas of use, in particular its application in the refinement and improvement of search expressions, and the construction of textual corpora. However, the set of available works shows that in many cases the research approaches applied and the results obtained can be integrated into a comprehensive and inclusive concept of information retrieval.
  12. James, J.E.: Pirate open access as electronic civil disobedience : is it ethical to breach the paywalls of monetized academic publishing? (2020) 0.03
    0.026916523 = product of:
      0.053833045 = sum of:
        0.04717497 = weight(_text_:processing in 37) [ClassicSimilarity], result of:
          0.04717497 = score(doc=37,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.26835677 = fieldWeight in 37, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=37)
        0.006658075 = product of:
          0.019974224 = sum of:
            0.019974224 = weight(_text_:science in 37) [ClassicSimilarity], result of:
              0.019974224 = score(doc=37,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 37, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=37)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Open access has long been an ideal of academic publishing. Yet, contrary to initial expectations, cost of access to published scientific knowledge increased following the advent of the Internet and electronic processing. An analysis of the ethicality of current arrangements in academic publishing shows that monetization and the sequestering of scientific knowledge behind paywalls breach the principle of fairness and damage public interest. Following decades of failed effort to redress the situation, there are ethical grounds for consumers of scientific knowledge to invoke the right of collective civil disobedience, including support for pirate open access. Could this be the best option available to consumers of scientific knowledge for removing paywalls to knowledge that rightly belongs in the public domain?
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.12, S.1500-1504
  13. Li, W.; Zheng, Y.; Zhan, Y.; Feng, R.; Zhang, T.; Fan, W.: Cross-modal retrieval with dual multi-angle self-attention (2021) 0.03
    0.026916523 = product of:
      0.053833045 = sum of:
        0.04717497 = weight(_text_:processing in 67) [ClassicSimilarity], result of:
          0.04717497 = score(doc=67,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.26835677 = fieldWeight in 67, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=67)
        0.006658075 = product of:
          0.019974224 = sum of:
            0.019974224 = weight(_text_:science in 67) [ClassicSimilarity], result of:
              0.019974224 = score(doc=67,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 67, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=67)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    In recent years, cross-modal retrieval has been a popular research topic in both fields of computer vision and natural language processing. There is a huge semantic gap between different modalities on account of heterogeneous properties. How to establish the correlation among different modality data faces enormous challenges. In this work, we propose a novel end-to-end framework named Dual Multi-Angle Self-Attention (DMASA) for cross-modal retrieval. Multiple self-attention mechanisms are applied to extract fine-grained features for both images and texts from different angles. We then integrate coarse-grained and fine-grained features into a multimodal embedding space, in which the similarity degrees between images and texts can be directly compared. Moreover, we propose a special multistage training strategy, in which the preceding stage can provide a good initial value for the succeeding stage and make our framework work better. Very promising experimental results over the state-of-the-art methods can be achieved on three benchmark datasets of Flickr8k, Flickr30k, and MSCOCO.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.1, S.46-65
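    Record 13 above ends with images and texts embedded in a shared multimodal space whose similarity degrees can be compared directly. A toy sketch of that final retrieval step only, assuming PyTorch and with random vectors standing in for the DMASA image and text encoders (the multi-angle self-attention features themselves are not reproduced):

      import torch
      import torch.nn.functional as F

      # Toy stand-ins for the encoder outputs: random vectors in a shared 256-d space.
      image_emb = F.normalize(torch.randn(4, 256), dim=-1)    # 4 images, L2-normalized
      text_emb = F.normalize(torch.randn(4, 256), dim=-1)     # 4 captions, L2-normalized

      similarity = image_emb @ text_emb.T                      # cosine similarity matrix, shape (4, 4)
      ranking = similarity.argsort(dim=-1, descending=True)    # for each image, rank all captions
      print(similarity)
      print(ranking[:, 0])                                     # top-ranked caption index per image

    In a trained system the two encoders would be optimized so that matching image-caption pairs receive the highest cosine similarity; this ranking step is what serves the cross-modal retrieval queries.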
  14. Fang, Z.; Dudek, J.; Costas, R.: Facing the volatility of tweets in altmetric research (2022) 0.03
    0.026916523 = product of:
      0.053833045 = sum of:
        0.04717497 = weight(_text_:processing in 605) [ClassicSimilarity], result of:
          0.04717497 = score(doc=605,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.26835677 = fieldWeight in 605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.046875 = fieldNorm(doc=605)
        0.006658075 = product of:
          0.019974224 = sum of:
            0.019974224 = weight(_text_:science in 605) [ClassicSimilarity], result of:
              0.019974224 = score(doc=605,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.17461908 = fieldWeight in 605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=605)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    The data re-collection for tweets from data snapshots is a common methodological step in Twitter-based research. Understanding better the volatility of tweets over time is important for validating the reliability of metrics based on Twitter data. We tracked a set of 37,918 original scholarly tweets mentioning COVID-19-related research daily for 56 days and captured the reasons for the changes in their availability over time. Results show that the proportion of unavailable tweets increased from 1.6 to 2.6% in the time window observed. Of the 1,323 tweets that became unavailable at some point in the period observed, 30.5% became available again afterwards. "Revived" tweets resulted mainly from the unprotecting, reactivating, or unsuspending of users' accounts. Our findings highlight the importance of noting this dynamic nature of Twitter data in altmetric research and testify to the challenges that this poses for the retrieval, processing, and interpretation of Twitter data about scientific papers.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.8, S.1192-1195
  15. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.02
    0.02455918 = product of:
      0.04911836 = sum of:
        0.03931248 = weight(_text_:processing in 1171) [ClassicSimilarity], result of:
          0.03931248 = score(doc=1171,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 1171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1171)
        0.0098058805 = product of:
          0.029417641 = sum of:
            0.029417641 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
              0.029417641 = score(doc=1171,freq=2.0), product of:
                0.15206799 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043425296 = queryNorm
                0.19345059 = fieldWeight in 1171, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1171)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Logical rules are essential for uncovering the logical connections between relations, which could improve the reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from the computationally intensive searches over the rule space and a lack of scalability for large-scale KGs. Besides, they often ignore the semantics of relations which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in the field of natural language processing and various applications, owing to their emergent ability and generalizability. In this paper, we propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs. Last, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs, w.r.t. different rule quality metrics and downstream tasks, showing the effectiveness and scalability of our method.
    Date
    23.11.2023 19:07:22
  16. Singh, V.K.; Ghosh, I.; Sonagara, D.: Detecting fake news stories via multimodal analysis (2021) 0.02
    0.024461292 = product of:
      0.048922583 = sum of:
        0.03931248 = weight(_text_:processing in 88) [ClassicSimilarity], result of:
          0.03931248 = score(doc=88,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 88, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=88)
        0.009610103 = product of:
          0.02883031 = sum of:
            0.02883031 = weight(_text_:science in 88) [ClassicSimilarity], result of:
              0.02883031 = score(doc=88,freq=6.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.25204095 = fieldWeight in 88, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=88)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Filtering, vetting, and verifying digital information is an area of core interest in information science. Online fake news is a specific type of digital misinformation that poses serious threats to democratic institutions, misguides the public, and can lead to radicalization and violence. Hence, fake news detection is an important problem for information science research. While there have been multiple attempts to identify fake news, most of such efforts have focused on a single modality (e.g., only text-based or only visual features). However, news articles are increasingly framed as multimodal news stories, and hence, in this work, we propose a multimodal approach combining text and visual analysis of online news stories to automatically detect fake news. Drawing on key theories of information processing and presentation, we identify multiple text and visual features that are associated with fake or credible news articles. We then perform a predictive analysis to detect features most strongly associated with fake news. Next, we combine these features in predictive models using multiple machine-learning techniques. The experimental results indicate that a multimodal approach outperforms single-modality approaches, allowing for better fake news detection.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.1, S.3-17
  17. Andrushchenko, M.; Sandberg, K.; Turunen, R.; Marjanen, J.; Hatavara, M.; Kurunmäki, J.; Nummenmaa, T.; Hyvärinen, M.; Teräs, K.; Peltonen, J.; Nummenmaa, J.: Using parsed and annotated corpora to analyze parliamentarians' talk in Finland (2022) 0.02
    0.023579547 = product of:
      0.047159094 = sum of:
        0.03931248 = weight(_text_:processing in 471) [ClassicSimilarity], result of:
          0.03931248 = score(doc=471,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=471)
        0.007846616 = product of:
          0.023539849 = sum of:
            0.023539849 = weight(_text_:science in 471) [ClassicSimilarity], result of:
              0.023539849 = score(doc=471,freq=4.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.20579056 = fieldWeight in 471, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=471)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    We present a search system for grammatically analyzed corpora of Finnish parliamentary records and interviews with former parliamentarians, annotated with metadata of talk structure and involved parliamentarians, and discuss their use through carefully chosen digital humanities case studies. We first introduce the construction, contents, and principles of use of the corpora. Then we discuss the application of the search system and the corpora to study how politicians talk about power, how ideological terms are used in political speech, and how to identify narratives in the data. All case studies stem from questions in the humanities and the social sciences, but rely on the grammatically parsed corpora in both identifying and quantifying passages of interest. Finally, the paper discusses the role of natural language processing methods for questions in the (digital) humanities. It makes the claim that a digital humanities inquiry of parliamentary speech and interviews with politicians cannot only rely on computational humanities modeling, but needs to accommodate a range of perspectives starting with simple searches, quantitative exploration, and ending with modeling. Furthermore, the digital humanities need a more thorough discussion about how the utilization of tools from information science and technologies alter the research questions posed in the humanities.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.2, S.288-302
  18. Suissa, O.; Elmalech, A.; Zhitomirsky-Geffet, M.: Text analysis using deep neural networks in digital humanities and information science (2022) 0.02
    0.023579547 = product of:
      0.047159094 = sum of:
        0.03931248 = weight(_text_:processing in 491) [ClassicSimilarity], result of:
          0.03931248 = score(doc=491,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 491, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=491)
        0.007846616 = product of:
          0.023539849 = sum of:
            0.023539849 = weight(_text_:science in 491) [ClassicSimilarity], result of:
              0.023539849 = score(doc=491,freq=4.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.20579056 = fieldWeight in 491, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=491)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Combining computational technologies and humanities is an ongoing effort aimed at making resources such as texts, images, audio, video, and other artifacts digitally available, searchable, and analyzable. In recent years, deep neural networks (DNN) dominate the field of automatic text analysis and natural language processing (NLP), in some cases presenting a super-human performance. DNNs are the state-of-the-art machine learning algorithms solving many NLP tasks that are relevant for Digital Humanities (DH) research, such as spell checking, language detection, entity extraction, author detection, question answering, and other tasks. These supervised algorithms learn patterns from a large number of "right" and "wrong" examples and apply them to new examples. However, using DNNs for analyzing the text resources in DH research presents two main challenges: (un)availability of training data and a need for domain adaptation. This paper explores these challenges by analyzing multiple use-cases of DH studies in recent literature and their possible solutions and lays out a practical decision model for DH experts for when and how to choose the appropriate deep learning approaches for their research. Moreover, in this paper, we aim to raise awareness of the benefits of utilizing deep learning models in the DH community.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.2, S.268-287
  19. Hertzum, M.: Information seeking by experimentation : trying something out to discover what happens (2023) 0.02
    0.02272425 = product of:
      0.090897 = sum of:
        0.090897 = sum of:
          0.019974224 = weight(_text_:science in 915) [ClassicSimilarity], result of:
            0.019974224 = score(doc=915,freq=2.0), product of:
              0.11438741 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.043425296 = queryNorm
              0.17461908 = fieldWeight in 915, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.046875 = fieldNorm(doc=915)
          0.03562161 = weight(_text_:29 in 915) [ClassicSimilarity], result of:
            0.03562161 = score(doc=915,freq=2.0), product of:
              0.15275662 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.043425296 = queryNorm
              0.23319192 = fieldWeight in 915, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=915)
          0.035301168 = weight(_text_:22 in 915) [ClassicSimilarity], result of:
            0.035301168 = score(doc=915,freq=2.0), product of:
              0.15206799 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043425296 = queryNorm
              0.23214069 = fieldWeight in 915, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=915)
      0.25 = coord(1/4)
    
    Date
    21. 3.2023 19:22:29
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.4, S.383-387
  20. Zhang, P.; Soergel, D.: Cognitive mechanisms in sensemaking : a qualitative user study (2020) 0.02
    0.022430437 = product of:
      0.044860873 = sum of:
        0.03931248 = weight(_text_:processing in 5614) [ClassicSimilarity], result of:
          0.03931248 = score(doc=5614,freq=2.0), product of:
            0.175792 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.043425296 = queryNorm
            0.22363065 = fieldWeight in 5614, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5614)
        0.005548396 = product of:
          0.016645188 = sum of:
            0.016645188 = weight(_text_:science in 5614) [ClassicSimilarity], result of:
              0.016645188 = score(doc=5614,freq=2.0), product of:
                0.11438741 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.043425296 = queryNorm
                0.1455159 = fieldWeight in 5614, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5614)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Abstract
    Throughout an information search, a user needs to make sense of the information found to create an understanding. This requires cognitive effort that can be demanding. Building on prior sensemaking models and expanding them with ideas from learning and cognitive psychology, we examined the use of cognitive mechanisms during individual sensemaking. We conducted a qualitative user study of 15 students who searched for and made sense of information for business analysis and news writing tasks. Through the analysis of think-aloud protocols, recordings of screen movements, intermediate work products of sensemaking, including notes and concept maps, and final reports, we observed the use of 17 data-driven and structure-driven mechanisms for processing new information, examining individual concepts and relationships, and detecting anomalies. These cognitive mechanisms, as the basic operators that move sensemaking forward, provide in-depth understanding of how people process information to produce sense. Meaningful learning and sensemaking are closely related, so our findings apply to learning as well. Our results contribute to a better understanding of the sensemaking process (how people think), and this better understanding can inform the teaching of thinking skills and the design of improved sensemaking assistants and mind tools.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.2, S.158-171

Languages

  • e 590
  • d 87
  • pt 2
  • m 1
  • sp 1

Types

  • a 643
  • el 73
  • m 18
  • p 7
  • s 4
  • x 2