Search (103 results, page 1 of 6)

  • language_ss:"e"
  • year_i:[2020 TO 2030}
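  The two active filters above use Lucene/Solr field-query syntax: language_ss:"e" keeps records whose language code is "e" (presumably English), and year_i:[2020 TO 2030} is a half-open range in which the square bracket includes 2020 and the curly brace excludes 2030. The sketch below is a minimal illustration, assuming a Solr-style backend, of how such filters could be sent as fq parameters together with debugQuery=true, which requests the kind of per-document score explanations shown under each result; the host, core name, and query string are placeholder assumptions, not details taken from this page.

    from urllib.parse import urlencode

    # Active facet filters, copied verbatim from this results page.
    filters = [
        'language_ss:"e"',        # language code "e"
        'year_i:[2020 TO 2030}',  # 2020 inclusive .. 2030 exclusive
    ]

    # Hypothetical endpoint, core name and query -- placeholders only.
    params = [
        ("q", "..."),             # the actual query string is not shown on this page
        ("rows", "20"),           # 20 results per page, as above
        ("start", "0"),           # page 1
        ("debugQuery", "true"),   # ask Solr for per-document score explanations
    ] + [("fq", f) for f in filters]

    print("http://localhost:8983/solr/mycore/select?" + urlencode(params))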
  1. Ekstrand, M.D.; Wright, K.L.; Pera, M.S.: Enhancing classroom instruction with online news (2020) 0.07
    0.07193736 = product of:
      0.14387472 = sum of:
        0.14387472 = sum of:
          0.1093608 = weight(_text_:news in 5844) [ClassicSimilarity], result of:
            0.1093608 = score(doc=5844,freq=4.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.40950692 = fieldWeight in 5844, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5844)
          0.034513917 = weight(_text_:22 in 5844) [ClassicSimilarity], result of:
            0.034513917 = score(doc=5844,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 5844, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5844)
      0.5 = coord(1/2)
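    The breakdown above (and the analogous ones under the other results) follows Lucene's ClassicSimilarity TF-IDF scheme: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm; the per-term contributions are summed and multiplied by the coordination factor coord. A minimal sketch, using only the values printed above, that reproduces the numbers for this first result (doc 5844):

      import math

      def term_score(freq, idf, query_norm, field_norm):
          """Per-term ClassicSimilarity contribution: queryWeight * fieldWeight."""
          query_weight = idf * query_norm                    # e.g. 5.2416887 * 0.05094824 = 0.26705483
          field_weight = math.sqrt(freq) * idf * field_norm  # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      query_norm, field_norm = 0.05094824, 0.0390625
      w_news = term_score(freq=4.0, idf=5.2416887, query_norm=query_norm, field_norm=field_norm)
      w_22 = term_score(freq=2.0, idf=3.5018296, query_norm=query_norm, field_norm=field_norm)

      coord = 1 / 2  # only one of the two top-level query clauses matched this document
      print(round(w_news, 7), round(w_22, 7), round((w_news + w_22) * coord, 7))
      # -> approximately 0.1093608 0.0345139 0.0719374, matching the weights and final score above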
    
    Abstract
    Purpose: This paper investigates how school teachers look for informational texts for their classrooms. Access to current, varied and authentic informational texts improves learning outcomes for K-12 students, but many teachers lack resources to expand and update readings. The Web offers freely available resources, but finding suitable ones is time-consuming. This research lays the groundwork for building tools to ease that burden.
    Design/methodology/approach: This paper reports qualitative findings from a study in two stages: (1) a set of semistructured interviews, based on the critical incident technique, eliciting teachers' information-seeking practices and challenges; and (2) observations of teachers using a prototype teaching-oriented news search tool under a think-aloud protocol.
    Findings: Teachers articulated different objectives and ways of using readings in their classrooms; goals and self-reported practices varied by experience level. Teachers struggled to formulate queries that are likely to return readings on specific course topics, instead searching directly for abstract topics. Experience differences did not translate into observable differences in search skill or success in the lab study.
    Originality/value: There is limited work on teachers' information-seeking practices, particularly on how teachers look for texts for classroom use. This paper describes how teachers look for information in this context, setting the stage for future development and research on how to support this use case. Understanding and supporting teachers looking for information is a rich area for future research, due to the complexity of the information need and the fact that teachers are not looking for information for themselves.
    Date
    20. 1.2015 18:30:22
  2. Ma, Y.: Relatedness and compatibility : the concept of privacy in Mandarin Chinese and American English corpora (2023) 0.07
    0.0671062 = product of:
      0.1342124 = sum of:
        0.1342124 = sum of:
          0.092795715 = weight(_text_:news in 887) [ClassicSimilarity], result of:
            0.092795715 = score(doc=887,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.34747815 = fieldWeight in 887, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.046875 = fieldNorm(doc=887)
          0.041416697 = weight(_text_:22 in 887) [ClassicSimilarity], result of:
            0.041416697 = score(doc=887,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.23214069 = fieldWeight in 887, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=887)
      0.5 = coord(1/2)
    
    Abstract
    This study investigates how privacy as an ethical concept exists in two languages: Mandarin Chinese and American English. The exploration relies on two genres of corpora spanning 10 years (2010-2019): social media posts and news articles. A mixed-methods approach combining structural topic modeling (STM) and human interpretation was used to work with the data. Findings show various privacy-related topics across the two languages. Moreover, some of these topics revealed fundamental incompatibilities in how privacy is understood across the two languages. In other words, some of the variation in topics does not just reflect contextual differences; it reveals that the two languages value privacy in different ways that relate back to each society's ethical tradition. This study is one of the first empirically grounded intercultural explorations of the concept of privacy. It shows that natural language is a promising basis for operationalizing intercultural and comparative privacy research, and it provides an examination of the concept as it is understood in these two languages.
    Date
    22. 1.2023 18:59:40
  3. Singh, V.K.; Ghosh, I.; Sonagara, D.: Detecting fake news stories via multimodal analysis (2021) 0.06
    0.06411845 = product of:
      0.1282369 = sum of:
        0.1282369 = product of:
          0.2564738 = sum of:
            0.2564738 = weight(_text_:news in 88) [ClassicSimilarity], result of:
              0.2564738 = score(doc=88,freq=22.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.9603789 = fieldWeight in 88, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=88)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Filtering, vetting, and verifying digital information is an area of core interest in information science. Online fake news is a specific type of digital misinformation that poses serious threats to democratic institutions, misguides the public, and can lead to radicalization and violence. Hence, fake news detection is an important problem for information science research. While there have been multiple attempts to identify fake news, most such efforts have focused on a single modality (e.g., only text-based or only visual features). However, news articles are increasingly framed as multimodal news stories, so in this work we propose a multimodal approach combining text and visual analysis of online news stories to automatically detect fake news. Drawing on key theories of information processing and presentation, we identify multiple text and visual features that are associated with fake or credible news articles. We then perform a predictive analysis to detect features most strongly associated with fake news. Next, we combine these features in predictive models using multiple machine-learning techniques. The experimental results indicate that a multimodal approach outperforms single-modality approaches, allowing for better fake news detection.
  4. Boczkowski, P.; Mitchelstein, E.: ¬The digital environment : How we live, learn, work, and play now (2021) 0.06
    0.057549886 = product of:
      0.11509977 = sum of:
        0.11509977 = sum of:
          0.087488644 = weight(_text_:news in 1003) [ClassicSimilarity], result of:
            0.087488644 = score(doc=1003,freq=4.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.32760555 = fieldWeight in 1003, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.03125 = fieldNorm(doc=1003)
          0.027611133 = weight(_text_:22 in 1003) [ClassicSimilarity], result of:
            0.027611133 = score(doc=1003,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.15476047 = fieldWeight in 1003, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1003)
      0.5 = coord(1/2)
    
    Abstract
    Increasingly we live through our personal screens; we work, play, socialize, and learn digitally. The shift to remote everything during the pandemic was another step in a decades-long march toward the digitization of everyday life made possible by innovations in media, information, and communication technology. In The Digital Environment, Pablo Boczkowski and Eugenia Mitchelstein offer a new way to understand the role of the digital in our daily lives, calling on us to turn our attention from our discrete devices and apps to the array of artifacts and practices that make up the digital environment that envelops every aspect of our social experience. Boczkowski and Mitchelstein explore a series of issues raised by the digital takeover of everyday life, drawing on interviews with a variety of experts. They show how existing inequities of gender, race, ethnicity, education, and class are baked into the design and deployment of technology, and describe emancipatory practices that counter this--including the use of Twitter as a platform for activism through such hashtags as #BlackLivesMatter and #MeToo. They discuss the digitization of parenting, schooling, and dating--noting, among other things, that today we can both begin and end relationships online. They describe how digital media shape our consumption of sports, entertainment, and news, and consider the dynamics of political campaigns, disinformation, and social activism. Finally, they report on developments in three areas that will be key to our digital future: data science, virtual reality, and space exploration.
    Content
    1. Three Environments, One Life -- Part I: Foundations -- 2. Mediatization -- 3. Algorithms -- 4. Race and Ethnicity -- 5. Gender -- Part II: Institutions -- 6. Parenting -- 7. Schooling -- 8. Working -- 9. Dating -- Part III: Leisure -- 10. Sports -- 11. Televised Entertainment -- 12. News -- Part IV: Politics -- 13. Misinformation and Disinformation -- 14. Electoral Campaigns -- 15. Activism -- Part V: Innovations -- 16. Data Science -- 17. Virtual Reality -- 18. Space Exploration -- 19. Bricks and Cracks in the Digital Environment
    Date
    22. 6.2023 18:25:18
  5. Ren, J.; Dong, H.; Padmanabhan, B.; Nickerson, J.V.: How does social media sentiment impact mass media sentiment? : a study of news in the financial markets (2021) 0.06
    0.056825537 = product of:
      0.113651074 = sum of:
        0.113651074 = product of:
          0.22730215 = sum of:
            0.22730215 = weight(_text_:news in 349) [ClassicSimilarity], result of:
              0.22730215 = score(doc=349,freq=12.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.85114413 = fieldWeight in 349, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.046875 = fieldNorm(doc=349)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Mass media sentiment in financial news significantly influences investors' decisions. Hence, studying how this sentiment emerges is important. In years past, this was straightforward, with sentiment often dictated by the journalists who covered financial news, but the process has become more complex. In this paper, we focus on how social media sentiment affects mass media sentiment. Using data from Sina Weibo and Sina Finance (around 60 million weibos and 6.2 million news articles), we show that social media does influence the emergence of mass media sentiment for financial news. Sentiment consistency between the social media reaction and prior news articles amplifies the persistence of mass media sentiment over time. By contrast, we found limited evidence of social media reducing the persistence of mass media sentiment over time. The results have significant implications for understanding how two types of media, treated separately in the literature, may be connected.
  6. Thelwall, M.; Thelwall, S.: ¬A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.06
    0.055921838 = product of:
      0.111843675 = sum of:
        0.111843675 = sum of:
          0.07732976 = weight(_text_:news in 178) [ClassicSimilarity], result of:
            0.07732976 = score(doc=178,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.28956512 = fieldWeight in 178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=178)
          0.034513917 = weight(_text_:22 in 178) [ClassicSimilarity], result of:
            0.034513917 = score(doc=178,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=178)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19.
    Design/methodology/approach: A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020.
    Findings: The main themes identified for the 87 qualifying tweets, accounting for 14 million retweets, were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news.
    Research limitations/implications: Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed.
    Practical implications: Public health campaigns in future may consider encouraging grassroots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues.
    Originality/value: This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  7. Rügenhagen, M.; Beck, T.S.; Sartorius, E.J.: Information integrity in the era of Fake News : ein neuer Studienschwerpunkt für wissenschaftliche Bibliotheken und Forschungseinrichtungen (2020) 0.04
    0.043744322 = product of:
      0.087488644 = sum of:
        0.087488644 = product of:
          0.17497729 = sum of:
            0.17497729 = weight(_text_:news in 5858) [ClassicSimilarity], result of:
              0.17497729 = score(doc=5858,freq=4.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.6552111 = fieldWeight in 5858, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5858)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article we report on an experiment that tested how useful library-based guidelines are for measuring the integrity of information in the era of fake news. We found that the usefulness of these guidelines depends on at least three factors: weighting indicators (criteria), clear instructions, and context-specificity.
  8. Rügenhagen, M.; Beck, T.S.; Sartorius, E.J.: Information integrity in the era of Fake News : an experiment using library guidelines to judge information integrity (2020) 0.04
    0.043744322 = product of:
      0.087488644 = sum of:
        0.087488644 = product of:
          0.17497729 = sum of:
            0.17497729 = weight(_text_:news in 113) [ClassicSimilarity], result of:
              0.17497729 = score(doc=113,freq=4.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.6552111 = fieldWeight in 113, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=113)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article we report on an experiment that tested how useful library-based guidelines are for measuring the integrity of information in the era of fake news. We found that the usefulness of these guidelines depends on at least three factors: weighting indicators (criteria), clear instructions, and context-specificity.
  9. Sinha, A.; Kedas, S.; Kumar, R.; Malo, P.: SEntFiN 1.0 : Entity-aware sentiment analysis for financial news (2022) 0.04
    0.043228656 = product of:
      0.08645731 = sum of:
        0.08645731 = product of:
          0.17291462 = sum of:
            0.17291462 = weight(_text_:news in 652) [ClassicSimilarity], result of:
              0.17291462 = score(doc=652,freq=10.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.64748734 = fieldWeight in 652, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=652)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Fine-grained financial sentiment analysis on news headlines is a challenging task requiring human-annotated datasets to achieve high performance. Limited studies have tried to address the sentiment extraction task in a setting where multiple entities are present in a news headline. In an effort to further research in this area, we make publicly available SEntFiN 1.0, a human-annotated dataset of 10,753 news headlines with entity-sentiment annotations, of which 2,847 headlines contain multiple entities, often with conflicting sentiments. We augment our dataset with a database of over 1,000 financial entities and their various representations in news media, amounting to over 5,000 phrases. We propose a framework that enables the extraction of entity-relevant sentiments using a feature-based approach rather than an expression-based approach. For sentiment extraction, we utilize 12 different learning schemes that draw on lexicon-based and pretrained sentence representations, and five classification approaches. Our experiments indicate that lexicon-based N-gram ensembles are above par with pretrained word embedding schemes such as GloVe. Overall, RoBERTa and finBERT (domain-specific BERT) achieve the highest average accuracy of 94.29% and F1-score of 93.27%. Further, using over 210,000 entity-sentiment predictions, we validate the economic effect of sentiments on aggregate market movements over a long duration.
  10. Gutierrez Lopez, M.; Makri, S.; MacFarlane, A.; Porlezza, C.; Cooper, G.; Missaoui, S.: Making newsworthy news : the integral role of creativity and verification in the human information behavior that drives news story creation (2022) 0.04
    0.043228656 = product of:
      0.08645731 = sum of:
        0.08645731 = product of:
          0.17291462 = sum of:
            0.17291462 = weight(_text_:news in 661) [ClassicSimilarity], result of:
              0.17291462 = score(doc=661,freq=10.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.64748734 = fieldWeight in 661, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=661)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Creativity and verification are intrinsic to high-quality journalism, but their role is often poorly visible in news story creation. Journalists face relentless commercial pressures that threaten to compromise story quality, in a digital era where their ethical obligation not to mislead the public has never been more important. It is therefore crucial to investigate how journalists can be supported to produce stories that are original, impactful, and factually accurate, under tight deadlines. We present findings from 14 semistructured interviews, where we asked journalists to discuss the creation of a recent news story to understand the process and associated human information behavior (HIB). Six overarching behaviors were identified: discovering, collecting, organizing, interrogating, contextualizing, and publishing. Creativity and verification were embedded throughout news story creation and integral to journalists' HIB, highlighting their ubiquity. They often manifested at a micro level; in small-scale but vital activities that drove and facilitated story creation. Their ubiquitous role highlights the importance of creativity and verification support being woven into functionality that facilitates information acquisition and use in digital information tools for journalists.
  11. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.040459655 = product of:
      0.08091931 = sum of:
        0.08091931 = product of:
          0.24275793 = sum of:
            0.24275793 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24275793 = score(doc=862,freq=2.0), product of:
                0.43193975 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05094824 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  12. Fichman, P.; Rathi, M.: Trolling CNN and Fox News on Facebook, Instagram, and Twitter (2023) 0.04
    0.040181726 = product of:
      0.08036345 = sum of:
        0.08036345 = product of:
          0.1607269 = sum of:
            0.1607269 = weight(_text_:news in 958) [ClassicSimilarity], result of:
              0.1607269 = score(doc=958,freq=6.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.60184985 = fieldWeight in 958, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.046875 = fieldNorm(doc=958)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Online trolling, disinformation, and deception pose an existential threat to democracy. Informed by online disinhibition theory and research on the ideological asymmetry between Democrats and Republicans, we examined how the extent and style of trolling vary across social media platforms by analyzing comments on posts by two media channels (CNN and Fox News) on three social media platforms (Facebook, Instagram, and Twitter). We found differences in the style and extent of trolling across platforms and between media channels, with more trolling on articles posted by Fox News than by CNN, and a different trolling style on Twitter than on Facebook or Instagram. Our study demonstrates a delicate balance between the socio-technical factors that enable and hinder trolling. While some platforms and government agencies believe in removing anonymity to regulate online harm, this paper makes a significant contribution against that view.
  13. Haggar, E.: Fighting fake news : exploring George Orwell's relationship to information literacy (2020) 0.04
    0.03866488 = product of:
      0.07732976 = sum of:
        0.07732976 = product of:
          0.15465952 = sum of:
            0.15465952 = weight(_text_:news in 5978) [ClassicSimilarity], result of:
              0.15465952 = score(doc=5978,freq=8.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.57913023 = fieldWeight in 5978, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5978)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The purpose of this paper is to analyse George Orwell's diaries through an information literacy lens. Orwell is well known for his dedication to freedom of speech and objective truth, and his novel Nineteen Eighty-Four is often used as a lens through which to view the fake news phenomenon. This paper examines Orwell's diaries in relation to UNESCO's Five Laws of Media and Information Literacy to explore how information literacy concepts can be traced in historical documents.
    Design/methodology/approach: This paper uses a content analysis method to explore Orwell's relationship to information literacy. Two of Orwell's political diaries from the period 1940-42 were coded for key themes related to the ways in which Orwell discusses and evaluates information and news. These themes were then compared to UNESCO's Five Laws of Media and Information Literacy. The textual analysis software NVivo 12 was used to perform keyword searches and word frequency queries in the digitised diaries.
    Findings: The findings show that while Orwell's diaries and the Five Laws did not share terminology, they did share ideas on bias and access to information. They also extend the history of information literacy research and practice by illustrating how concerns about the need to evaluate information sources are represented within historical literature.
    Originality/value: This paper combines historical research with textual analysis to bring a unique historical perspective to information literacy, demonstrating that "fake news" is not a recent phenomenon, and that the tools to fight it may also lie in historical research.
  14. Chawla, D.S.: Hundreds of 'predatory' journals indexed on leading scholarly database (2021) 0.04
    0.03866488 = product of:
      0.07732976 = sum of:
        0.07732976 = product of:
          0.15465952 = sum of:
            0.15465952 = weight(_text_:news in 148) [ClassicSimilarity], result of:
              0.15465952 = score(doc=148,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.57913023 = fieldWeight in 148, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.078125 = fieldNorm(doc=148)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    News
  15. Sirén-Heikel, S.; Kjellman, M.; Lindén, C.-G.: At the crossroads of logics : automating newswork with artificial intelligence-(Re)defining journalistic logics from the perspective of technologists (2023) 0.04
    0.03866488 = product of:
      0.07732976 = sum of:
        0.07732976 = product of:
          0.15465952 = sum of:
            0.15465952 = weight(_text_:news in 903) [ClassicSimilarity], result of:
              0.15465952 = score(doc=903,freq=8.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.57913023 = fieldWeight in 903, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=903)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As artificial intelligence (AI) technologies become more ubiquitous for streamlining and optimizing work, they are entering fields representing organizational logics at odds with the efficiency logic of automation. One such field is journalism, an industry defined by a logic enacted through professional norms, practices, and values. This paper examines the experience of technologists developing and employing natural language generation (NLG) in news organizations, looking at how they situate themselves and their technology in relation to newswork. Drawing on institutional logics, a theoretical framework from organizational theory, we show how technologists shape their logic for building these emerging technologies based on a theory of rationalizing news organizations, a frame of optimizing newswork, and a narrative of news organizations misinterpreting the technology. Our interviews reveal technologists mitigating tensions with journalistic logic and newswork by labeling stories generated by their systems as nonjournalistic content, seeing their technology as a solution for improving journalism, enabling newswork to move away from routine tasks. We also find that as technologists interact with news organizations, they assimilate elements from journalistic logic beneficial for benchmarking their technology for more lucrative industries.
  16. Giachanou, A.; Rosso, P.; Crestani, F.: ¬The impact of emotional signals on credibility assessment (2021) 0.03
    0.03348477 = product of:
      0.06696954 = sum of:
        0.06696954 = product of:
          0.13393909 = sum of:
            0.13393909 = weight(_text_:news in 328) [ClassicSimilarity], result of:
              0.13393909 = score(doc=328,freq=6.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.50154155 = fieldWeight in 328, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=328)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Fake news is considered one of the main threats to our society. The aim of fake news is usually to confuse readers and trigger intense emotions in them so that it spreads through social networks. Even though recent studies have examined the effectiveness of different linguistic patterns for fake news detection, the role of emotional signals has not yet been explored. In this paper, we focus on extracting emotional signals from claims and evaluating their effectiveness for credibility assessment. First, we explore different methodologies for extracting the emotional signals that can be triggered in users when they read a claim. Then, we present emoCred, a model based on a long short-term memory (LSTM) network that incorporates emotional signals extracted from the text of claims to differentiate between credible and non-credible ones. In addition, we perform an analysis to understand which emotional signals and which terms are the most useful for the different credibility classes. We conduct extensive experiments and a thorough analysis on real-world datasets. Our results indicate the importance of incorporating emotional signals in the credibility assessment problem.
  17. Aral, S.: ¬The hype machine : how social media disrupts our elections, our economy, and our health - and how we must adapt (2020) 0.03
    0.030931905 = product of:
      0.06186381 = sum of:
        0.06186381 = product of:
          0.12372762 = sum of:
            0.12372762 = weight(_text_:news in 550) [ClassicSimilarity], result of:
              0.12372762 = score(doc=550,freq=8.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.4633042 = fieldWeight in 550, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.03125 = fieldNorm(doc=550)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Social media connected the world--and gave rise to fake news and increasing polarization. Now a leading researcher at MIT draws on 20 years of research to show how these trends threaten our political, economic, and emotional health in this eye-opening exploration of the dark side of technological progress. Today we have the ability, unprecedented in human history, to amplify our interactions with each other through social media. It is paramount, MIT social media expert Sinan Aral says, that we recognize the outsized impact social media has on our culture, our democracy, and our lives in order to steer today's social technology toward good, while avoiding the ways it can pull us apart. Otherwise, we could fall victim to what Aral calls "The Hype Machine." As a senior researcher on the longest-running study of fake news ever conducted, Aral found that lies spread online farther and faster than the truth--a harrowing conclusion that was featured on the cover of Science magazine. Among the questions Aral explores following twenty years of field research: Did Russian interference change the 2016 election? And how is it affecting the vote in 2020? Why does fake news travel faster than the truth online? How do social ratings and automated sharing determine which products succeed and fail? How does social media affect our kids? First, Aral links alarming data and statistics to three accelerating social media shifts: hyper-socialization, personalized mass persuasion, and the tyranny of trends. Next, he grapples with the consequences of the Hype Machine for elections, businesses, dating, and health. Finally, he maps out strategies for navigating the Hype Machine, offering his singular guidance for managing social media to fulfill its promise going forward. Rarely has a book so directly wrestled with the secret forces that drive the news cycle every day.
  18. Zheng, H.; Goh, D.H.-L.; Lee, E.W.J.; Lee, C.S.; Theng, Y.-L.: Understanding the effects of message cues on COVID-19 information sharing on Twitter (2022) 0.03
    0.0273402 = product of:
      0.0546804 = sum of:
        0.0546804 = product of:
          0.1093608 = sum of:
            0.1093608 = weight(_text_:news in 564) [ClassicSimilarity], result of:
              0.1093608 = score(doc=564,freq=4.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40950692 = fieldWeight in 564, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=564)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Analyzing and documenting human information behaviors in the context of global public health crises such as the COVID-19 pandemic are critical to informing crisis management. Drawing on the Elaboration Likelihood Model, this study investigates how three types of peripheral cues (content richness, emotional valence, and communication topic) are associated with COVID-19 information sharing on Twitter. We used computational methods, combining Latent Dirichlet Allocation topic modeling with psycholinguistic indicators obtained from the Linguistic Inquiry and Word Count dictionary to measure these concepts and built a research model to assess their effects on information sharing. Results showed that content richness was negatively associated with information sharing. Tweets with negative emotions received more user engagement, whereas tweets with positive emotions were less likely to be disseminated. Further, tweets mentioning advisories tended to receive more retweets than those mentioning support and news updates. More importantly, emotional valence moderated the relationship between communication topics and information sharing: tweets discussing news updates and support conveying positive sentiments led to more information sharing; tweets mentioning the impact of COVID-19 with negative emotions triggered more sharing. Finally, theoretical and practical implications of this study are discussed in the context of global public health communication.
  19. Rubel, A.; Castro, C.; Pham, A.: Algorithms and autonomy : the ethics of automated decision systems (2021) 0.03
    0.0273402 = product of:
      0.0546804 = sum of:
        0.0546804 = product of:
          0.1093608 = sum of:
            0.1093608 = weight(_text_:news in 671) [ClassicSimilarity], result of:
              0.1093608 = score(doc=671,freq=4.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40950692 = fieldWeight in 671, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=671)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work... the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these case studies, the authors provide a better understanding of machine fairness and algorithmic transparency. They explain why interventions in algorithmic systems are necessary to ensure that algorithms are not used to control citizens' participation in politics and undercut democracy. This title is also available as Open Access on Cambridge Core
  20. Juneström, A.: Discourses of fact-checking in Swedish news media (2022) 0.03
    0.0273402 = product of:
      0.0546804 = sum of:
        0.0546804 = product of:
          0.1093608 = sum of:
            0.1093608 = weight(_text_:news in 686) [ClassicSimilarity], result of:
              0.1093608 = score(doc=686,freq=4.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40950692 = fieldWeight in 686, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=686)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: The purpose of this paper is to examine how contemporary fact-checking is discursively constructed in Swedish news media; this serves to gain insight into how this practice is understood in society.
    Design/methodology/approach: A selection of texts on the topic of fact-checking published by two of Sweden's largest morning newspapers is analyzed through the lens of Fairclough's discourse theoretical framework.
    Findings: Three key discourses of fact-checking were identified, each of which included multiple sub-discourses. First, a discourse that has been labeled as "the affirmative discourse," representing fact-checking as something positive, was identified. This discourse embraces ideas about fact-checking as something that, for example, strengthens democracy. Second, a contrasting discourse that has been labeled "the adverse discourse" was identified. This discourse represents fact-checking as something precarious that, for example, poses a risk to democracy. Third, a discourse labeled "the agency discourse" was identified. This discourse conveys ideas on whose responsibility it is to conduct fact-checking.
    Originality/value: A better understanding of the discursive construction of fact-checking provides insights into social practices pertaining to it and the expectations of its role in contemporary society. The results are relevant for journalists and professionals who engage in fact-checking and for others who have a particular interest in fact-checking, e.g. librarians and educators engaged in media and information literacy projects.

Types

  • a 97
  • el 5
  • m 4
  • p 2