Search (769 results, page 1 of 39)

  • year_i:[2010 TO 2020}
  1. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.13
    0.1261398 = product of:
      0.2522796 = sum of:
        0.2522796 = sum of:
          0.15465952 = weight(_text_:news in 3582) [ClassicSimilarity], result of:
            0.15465952 = score(doc=3582,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.57913023 = fieldWeight in 3582, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.078125 = fieldNorm(doc=3582)
          0.09762009 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
            0.09762009 = score(doc=3582,freq=4.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.54716086 = fieldWeight in 3582, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3582)
      0.5 = coord(1/2)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
    Source
    http://www.spektrum.de/news/mathematischer-beweis-ueber-mehrdimensionale-normalverteilungen-gefunden/1450623
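The explain trees repeated throughout this listing all follow Lucene's ClassicSimilarity (TF-IDF) formula: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and a clause score is queryWeight x fieldWeight; the document score is the sum of matching clauses scaled by coord(matched/total). A minimal Python sketch reproducing the numbers for result 1 (doc 3582); the queryNorm value is copied verbatim from the output above, since its sumOfSquaredWeights input is not shown:

```python
import math

QUERY_NORM = 0.05094824  # copied from the explain output; 1/sqrt(sumOfSquaredWeights)

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int, field_norm: float) -> float:
    """queryWeight (idf * queryNorm) times fieldWeight (sqrt(freq) * idf * fieldNorm)."""
    i = idf(doc_freq, max_docs)
    query_weight = i * QUERY_NORM
    field_weight = math.sqrt(freq) * i * field_norm
    return query_weight * field_weight

# The two matching clauses of doc 3582 (result 1):
news_score = term_score(freq=2.0, doc_freq=635, max_docs=44218, field_norm=0.078125)
t22_score = term_score(freq=4.0, doc_freq=3622, max_docs=44218, field_norm=0.078125)

# coord(1/2): only one of the two top-level query clauses matched.
doc_score = (news_score + t22_score) * 0.5

# news_score is close to 0.15465952, t22_score to 0.09762009, and
# doc_score to 0.1261398 (the engine's float32 rounding differs in
# the last digits from Python's float64).
```

The same arithmetic reproduces every tree below; entries 5 and 9 additionally apply a coord(1/3) one level deeper before the outer coord(1/2).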
  2. Häring, N.; Hensinger, P.: "Digitale Bildung" : Der abschüssige Weg zur Konditionierungsanstalt (2019) 0.11
    0.111843675 = product of:
      0.22368735 = sum of:
        0.22368735 = sum of:
          0.15465952 = weight(_text_:news in 4999) [ClassicSimilarity], result of:
            0.15465952 = score(doc=4999,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.57913023 = fieldWeight in 4999, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.078125 = fieldNorm(doc=4999)
          0.06902783 = weight(_text_:22 in 4999) [ClassicSimilarity], result of:
            0.06902783 = score(doc=4999,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.38690117 = fieldWeight in 4999, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4999)
      0.5 = coord(1/2)
    
    Date
    22. 2.2019 11:45:19
    Source
    http://norberthaering.de/de/27-german/news/1100-digitale-bildung
  3. Andrade, T.C.; Dodebei, V.: Traces of digitized newspapers and born-digital news sites : a trail to the memory on the internet (2016) 0.09
    0.089474946 = product of:
      0.17894989 = sum of:
        0.17894989 = sum of:
          0.12372762 = weight(_text_:news in 4901) [ClassicSimilarity], result of:
            0.12372762 = score(doc=4901,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.4633042 = fieldWeight in 4901, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0625 = fieldNorm(doc=4901)
          0.055222265 = weight(_text_:22 in 4901) [ClassicSimilarity], result of:
            0.055222265 = score(doc=4901,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.30952093 = fieldWeight in 4901, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4901)
      0.5 = coord(1/2)
    
    Date
    19. 1.2019 17:42:22
  4. epd: Kaiserslauterer Forscher untersuchen Google-Suche (2017) 0.08
    0.084226504 = product of:
      0.16845301 = sum of:
        0.16845301 = sum of:
          0.13393909 = weight(_text_:news in 3815) [ClassicSimilarity], result of:
            0.13393909 = score(doc=3815,freq=6.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.50154155 = fieldWeight in 3815, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3815)
          0.034513917 = weight(_text_:22 in 3815) [ClassicSimilarity], result of:
            0.034513917 = score(doc=3815,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 3815, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3815)
      0.5 = coord(1/2)
    
    Content
    "When searching for politicians and parties via search engines such as Google, personalization plays a smaller role than previously assumed, according to a research project. When politicians' names are entered, different users are largely shown the same results, according to an interim finding, published yesterday, of an analysis commissioned by the state media authorities. The findings come from the research project "#Datenspende: Google und die Bundestagswahl 2017" run by the initiative AlgorithmWatch and the Technische Universität Kaiserslautern. On average, when searching on Google for the parties' lead candidates in the federal election campaign, two different users receive seven to eight identical hits among nine search results. The results for searches on the parties themselves differ more strongly: out of nine results there are only five to six in common, the researchers found. Katharina Zweig, professor of computer science at TU Kaiserslautern, said she was surprised that the search results of different users differ so little. "That could look different again tomorrow," she warned. The study demonstrates for the first time that it is fundamentally possible to make the algorithms of intermediaries such as search engines traceable when suspicion arises. According to the results, there are recurring small groups of users with strongly divergent result lists. A final substantive assessment is still pending. For the project, according to the media authority, almost 4,000 volunteer users have so far installed a plug-in programmed by the researchers on their computers; some three million donated data records have been stored to date. The project is funded by the state media authorities of Bavaria, Berlin-Brandenburg, Hesse, Rhineland-Palatinate, Saarland, and Saxony." Cf. also: https://www.swr.de/swraktuell/rp/kaiserslautern/forschung-in-kaiserslautern-beeinflusst-google-die-bundestagswahl/-/id=1632/did=20110680/nid=1632/1mohmie/index.html. https://www.uni-kl.de/aktuelles/news/news/detail/News/aufruf-zur-datenspende-welche-nachrichten-zeigt-die-suchmaschine-google-zur-bundestagswahl-an/.
    Date
    22. 7.2004 9:42:33
  5. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    0.08091931 = product of:
      0.16183862 = sum of:
        0.16183862 = product of:
          0.48551586 = sum of:
            0.48551586 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.48551586 = score(doc=973,freq=2.0), product of:
                0.43193975 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05094824 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http%3A%2F%2Fcreativechoice.org%2Fdoc%2FHansJonas.pdf&usg=AOvVaw1TM3teaYKgABL5H9yoIifA&opi=89978449.
  6. Hills, T.; Segev, E.: ¬The news is American but our memories are - Chinese? (2014) 0.07
    0.07233536 = product of:
      0.14467072 = sum of:
        0.14467072 = product of:
          0.28934145 = sum of:
            0.28934145 = weight(_text_:news in 1342) [ClassicSimilarity], result of:
              0.28934145 = score(doc=1342,freq=28.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                1.0834534 = fieldWeight in 1342, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1342)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Are our memories of the world well described by the international news coverage in our country? If so, sources central to international news may also be central to international recall patterns; in particular, they may reflect an American-centric focus, given the previously proposed central U.S. position in the news marketplace. We asked people of four different nationalities (China, Israel, Switzerland, and the United States) to list all the countries they could name. We also constructed a network representation of the world for each nation based on the co-occurrence pattern of countries in the news. To compare news and memories, we developed a computational model that predicts the recall order of countries based on the news networks. Consistent with previous reports, the U.S. news was central to the news networks overall. However, although national recall patterns reflected their corresponding national news sources, the Chinese news was substantially better than other national news sources at predicting both individual and aggregate memories across nations. Our results suggest that news and memories are related but may also reflect biases in the way information is transferred to long-term memory, potentially biased against the transient coverage of more "free" presses. We discuss possible explanations for this "Chinese news effect" in relation to prominent cognitive and communications theories.
  7. Costas, R.; Zahedi, Z.; Wouters, P.: ¬The thematic orientation of publications mentioned on social media : large-scale disciplinary comparison of social media metrics with citations (2015) 0.07
    0.07193736 = product of:
      0.14387472 = sum of:
        0.14387472 = sum of:
          0.1093608 = weight(_text_:news in 2598) [ClassicSimilarity], result of:
            0.1093608 = score(doc=2598,freq=4.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.40950692 = fieldWeight in 2598, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2598)
          0.034513917 = weight(_text_:22 in 2598) [ClassicSimilarity], result of:
            0.034513917 = score(doc=2598,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 2598, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2598)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to analyze the disciplinary orientation of scientific publications that were mentioned on different social media platforms, focussing on their differences and similarities with citation counts. Design/methodology/approach - Social media metrics and readership counts, associated with 500,216 publications and their citation data from the Web of Science database, were collected from Altmetric.com and Mendeley. Results are presented through descriptive statistical analyses together with science maps generated with VOSviewer. Findings - The results confirm Mendeley as the most prevalent social media source with similar characteristics to citations in their distribution across fields and their density in average values per publication. The humanities, natural sciences, and engineering disciplines have a much lower presence of social media metrics. Twitter has a stronger focus on general medicine and social sciences. Other sources (blog, Facebook, Google+, and news media mentions) are more prominent in regards to multidisciplinary journals. Originality/value - This paper reinforces the relevance of Mendeley as a social media source for analytical purposes from a disciplinary perspective, being particularly relevant for the social sciences (together with Twitter). Key implications for the use of social media metrics on the evaluation of research performance (e.g. the concentration of some social media metrics, such as blogs, news items, etc., around multidisciplinary journals) are identified.
    Date
    20. 1.2015 18:30:22
  8. Arapakis, I.; Cambazoglu, B.B.; Lalmas, M.: On the feasibility of predicting popular news at cold start (2017) 0.07
    0.06970411 = product of:
      0.13940822 = sum of:
        0.13940822 = product of:
          0.27881643 = sum of:
            0.27881643 = weight(_text_:news in 3595) [ClassicSimilarity], result of:
              0.27881643 = score(doc=3595,freq=26.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                1.0440419 = fieldWeight in 3595, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3595)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Prominent news sites on the web provide hundreds of news articles daily. The abundance of news content competing to attract online attention, coupled with the manual effort involved in article selection, necessitates the timely prediction of future popularity of these news articles. The future popularity of a news article can be estimated using signals indicating the article's penetration in social media (e.g., number of tweets) in addition to traditional web analytics (e.g., number of page views). In practice, it is important to make such estimations as early as possible, preferably before the article is made available on the news site (i.e., at cold start). In this paper we perform a study on cold-start news popularity prediction using a collection of 13,319 news articles obtained from Yahoo News, a major news provider. We characterize the popularity of news articles through a set of online metrics and try to predict their values across time using machine learning techniques on a large collection of features obtained from various sources. Our findings indicate that predicting news popularity at cold start is a difficult task, contrary to the findings of a prior work on the same topic. Most articles' popularity may not be accurately anticipated solely on the basis of content features, without having the early-stage popularity values.
  9. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.06743276 = product of:
      0.13486552 = sum of:
        0.13486552 = product of:
          0.40459657 = sum of:
            0.40459657 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.40459657 = score(doc=1826,freq=2.0), product of:
                0.43193975 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05094824 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
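Several Source links in this listing (results 5 and 9) are Google redirect URLs whose actual target sits percent-encoded in the `url` query parameter. A minimal Python sketch, using only the standard library, that recovers the target from the link of result 9:

```python
from urllib.parse import urlparse, parse_qs

# The Source link of result 9, exactly as it appears above:
redirect = (
    "http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE"
    "&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107"
    "&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ"
    "&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja"
)

# parse_qs percent-decodes parameter values, so the target URL comes out readable.
target = parse_qs(urlparse(redirect).query)["url"][0]
# target is 'http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107'
```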
  10. Sela, M.; Lavie, T.; Inbar, O.; Oppenheim, I.; Meyer, J.: Personalizing news content : an experimental study (2015) 0.07
    0.06696954 = product of:
      0.13393909 = sum of:
        0.13393909 = product of:
          0.26787817 = sum of:
            0.26787817 = weight(_text_:news in 1604) [ClassicSimilarity], result of:
              0.26787817 = score(doc=1604,freq=24.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                1.0030831 = fieldWeight in 1604, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1604)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The delivery of personalized news content depends on the ability to predict user interests. We evaluated different methods for acquiring user profiles based on declared and actual interest in various news topics and items. In an experiment, 36 students rated their interest in six news topics and in specific news items and read on 6 days standard, nonpersonalized editions and personalized (basic or adaptive) news editions. We measured subjective satisfaction with the editions and expressed preferences, along with objective measures, to infer actual interest in items. Users' declared interest in news topics did not strongly predict their actual interest in specific news items. Satisfaction with all news editions was high, but participants preferred the personalized editions. User interest was weakly correlated with reading duration, article length, and reading order. Different measures predicted interest in different news topics. Explicit measures predicted interest in relatively clearly defined topics such as sports, but were less appropriate for broader topics such as science and technology. Our results indicate that explicit and implicit methods should be combined to generate user profiles. We suggest that a personalized newspaper should contain both general information and personalized items, selected based on specific combinations of measures for each of the different news topics. Based on the findings, we present a general model to decide on the personalization of news content to generate personalized editions for readers.
  11. Überarbeitete KAB als Wiki : Version 2017 - jetzt online (2017) 0.07
    0.06696954 = product of:
      0.13393909 = sum of:
        0.13393909 = product of:
          0.26787817 = sum of:
            0.26787817 = weight(_text_:news in 3578) [ClassicSimilarity], result of:
              0.26787817 = score(doc=3578,freq=6.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                1.0030831 = fieldWeight in 3578, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3578)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    ekz-Dozenten-News
    Source
    http://www.ekz.de/unternehmen/aktuelles/news/news-artikel/ueberarbeitete-kab-als-wiki-version-2017-jetzt-online/?tx_news_pi1[day]=11&tx_news_pi1[month]=1&tx_news_pi1[year]=2017&cHash=d6c2ad802fd0a42d1e2c1654f51f29d8
  12. Hajibayova, L.; Jacob, E.K.: Investigation of levels of abstraction in user-generated tagging vocabularies : a case of wild or tamed categorization? (2014) 0.06
    0.0630699 = product of:
      0.1261398 = sum of:
        0.1261398 = sum of:
          0.07732976 = weight(_text_:news in 1451) [ClassicSimilarity], result of:
            0.07732976 = score(doc=1451,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.28956512 = fieldWeight in 1451, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1451)
          0.048810046 = weight(_text_:22 in 1451) [ClassicSimilarity], result of:
            0.048810046 = score(doc=1451,freq=4.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.27358043 = fieldWeight in 1451, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1451)
      0.5 = coord(1/2)
    
    Abstract
    Previous studies of user-generated vocabularies (e.g., Golder & Huberman, 2006; Munk & Mork, 2007b; Yoon, 2009) have proposed that a primary source of tag agreement across users is due to widespread use of tags at the basic level of abstraction. However, an investigation of levels of abstraction in user-generated tagging vocabularies did not support this notion. This study analyzed approximately 8000 tags generated by 40 subjects. Analysis of 7617 tags assigned to 36 online resources representing four content categories (TOOL, FRUIT, CLOTHING, VEHICLE) and three resource genres (news article, blog, ecommerce) did not find statistically significant preferences in the assignment of tags at the superordinate, subordinate or basic levels of abstraction. Within the framework of Heidegger's (1953/1996) notion of handiness, observed variations in the preferred level of abstraction are both natural and phenomenological in that perception and understanding -- and thus the meaning of "things" -- arise out of the individual's contextualized experiences of engaging with objects. Operationalization of superordinate, subordinate and basic levels of abstraction using Heidegger's notion of handiness may be able to account for differences in the everyday experiences and activities of taggers, thereby leading to a better understanding of user-generated tagging vocabularies.
    Date
    5. 9.2014 16:22:27
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  13. Dupont, J.: Falsch! (2017) 0.06
    0.06186381 = product of:
      0.12372762 = sum of:
        0.12372762 = product of:
          0.24745524 = sum of:
            0.24745524 = weight(_text_:news in 3470) [ClassicSimilarity], result of:
              0.24745524 = score(doc=3470,freq=8.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.9266084 = fieldWeight in 3470, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3470)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Fabricated reports are an easy way to make money and stir up political sentiment on the internet. How can they be recognized? A contribution on the topic of fake news. Conclusion: "A recipe for recognizing fake news does not exist yet, but Bader and Rinsdorf are looking for one. They are currently analyzing thousands of fake news items for conspicuous textual features. The next step is to program algorithms that detect within seconds how likely it is that a given report is fabricated. Bader and Rinsdorf: "Those who make use of the diversity of news sources, inform themselves broadly, and approach information on the net with a certain basic skepticism can see through fake news more quickly.""
  14. Affelt, A.: All that's not fit to print : fake news and the call to action for librarians and information professionals (2019) 0.06
    0.059899382 = product of:
      0.119798765 = sum of:
        0.119798765 = product of:
          0.23959753 = sum of:
            0.23959753 = weight(_text_:news in 102) [ClassicSimilarity], result of:
              0.23959753 = score(doc=102,freq=30.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.8971848 = fieldWeight in 102, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.03125 = fieldNorm(doc=102)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    "Dewey Defeats Truman." "Hillary Clinton Adopts Alien Baby." Fake news may have reached new notoriety since the 2016 US election, but it has been around a long time. Whether it was an error in judgment in a rush to publish election results in November, 1948, or a tabloid cover designed to incite an eye roll and a chuckle in June, 1993, fake news has permeated and influenced culture since the inception of the printed press. But now, when almost every press conference at the White House contains a declaration of the evils of "fake news", evaluating information integrity and quality is more important than ever. In All That's Not Fit to Print, Amy Affelt offers tools and techniques for spotting fake news and discusses best practices for finding high quality sources, information, and data. Including an analysis of the relationship between fake news and social media, and potential remedies for viral fake news, Affelt explores the future of the press and the skills that librarians will need, not only to navigate these murky waters, but also to lead information consumers into that future. For any librarian or information professional, or anyone who has ever felt overwhelmed by the struggle of determining the true from the false, this book is a fundamental guide to facing the tides of fake news.
    Content
    1. Fake News: False Content in a Familiar Format; 2. How We Got Here; 3. When Sharing Is Not Caring: Fake News and Social Media; 4. How to Spot Fake News; 5. Fake News in the Field: Library Schools and Libraries; Ottawa Public Library; Vancouver Public Library; Surrey Public Library; Mississauga Public Library; Oshawa Public Library Librarian; 6. The Future of Fake News: The View from Here; The Eyes Have It; Put Your Money Where the Mouth Is; Hot Blooded? Check It and See; Go Slow-Mo; Remember the Old Standbys; Conclusion.
    LCSH
    Fake news
    Subject
    Fake news
  15. Arapakis, I.; Lalmas, M.; Cambazoglu, B.B.; Marcos, M.-C.; Jose, J.M.: User engagement in online news : under the scope of sentiment, interest, affect, and gaze (2014) 0.06
    0.057997324 = product of:
      0.11599465 = sum of:
        0.11599465 = product of:
          0.2319893 = sum of:
            0.2319893 = weight(_text_:news in 1497) [ClassicSimilarity], result of:
              0.2319893 = score(doc=1497,freq=18.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.8686954 = fieldWeight in 1497, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1497)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Online content providers, such as news portals and social media platforms, constantly seek new ways to attract large shares of online attention by keeping their users engaged. A common challenge is to identify which aspects of online interaction influence user engagement the most. In this article, through an analysis of a news article collection obtained from Yahoo News US, we demonstrate that news articles exhibit considerable variation in terms of the sentimentality and polarity of their content, depending on factors such as news provider and genre. Moreover, through a laboratory study, we observe the effect of sentimentality and polarity of news and comments on a set of subjective and objective measures of engagement. In particular, we show that attention, affect, and gaze differ across news of varying interestingness. As part of our study, we also explore methods that exploit the sentiments expressed in user comments to reorder the lists of comments displayed in news pages. Our results indicate that user engagement can be anticipated if we account for the sentimentality and polarity of the content as well as other factors that drive attention and inspire human curiosity.
  16. Aranyi, G.; Schaik, P. van: Testing a model of user-experience with news websites : how research questions evolve (2016) 0.06
    0.057997324 = product of:
      0.11599465 = sum of:
        0.11599465 = product of:
          0.2319893 = sum of:
            0.2319893 = weight(_text_:news in 3009) [ClassicSimilarity], result of:
              0.2319893 = score(doc=3009,freq=18.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.8686954 = fieldWeight in 3009, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3009)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Although the Internet has become a major source for accessing news, there is little research regarding users' experience with news sites. We conducted an experiment to test a comprehensive model of user experience with news sites that was developed previously by means of an online survey. Level of adoption (novel or adopted site) was controlled with a between-subjects manipulation. We collected participants' answers to psychometric scales at 2 times: after presentation of 5 screenshots of a news site and directly after 10 minutes of hands-on experience with the site. The model was extended with the prediction of users' satisfaction with news sites as a high-level design goal. A psychometric measure of trust in news providers was developed and added to the model to better predict people's intention to use particular news sites. The model presented in this article represents a theoretically founded, empirically tested basis for evaluating news websites, and it holds theoretical relevance to user-experience research in general. Finally, the findings and the model are applied to provide practical guidance in design prioritization.
  17. Bünte, O.: Bundesdatenschutzbeauftragte bezweifelt Facebooks Datenschutzversprechen (2018) 0.06
    0.055921838 = product of:
      0.111843675 = sum of:
        0.111843675 = sum of:
          0.07732976 = weight(_text_:news in 4180) [ClassicSimilarity], result of:
            0.07732976 = score(doc=4180,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.28956512 = fieldWeight in 4180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4180)
          0.034513917 = weight(_text_:22 in 4180) [ClassicSimilarity], result of:
            0.034513917 = score(doc=4180,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 4180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4180)
      0.5 = coord(1/2)
    
    Date
    23. 3.2018 13:41:22
    Footnote
    Vgl. zum Hintergrund auch: https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election; https://www.nytimes.com/2018/03/18/us/cambridge-analytica-facebook-privacy-data.html; http://www.latimes.com/business/la-fi-tn-facebook-cambridge-analytica-sued-20180321-story.html; https://www.tagesschau.de/wirtschaft/facebook-cambridge-analytica-103.html; http://www.spiegel.de/netzwelt/web/cambridge-analytica-der-eigentliche-skandal-liegt-im-system-facebook-kolumne-a-1199122.html; http://www.spiegel.de/netzwelt/netzpolitik/cambridge-analytica-facebook-sieht-sich-im-datenskandal-als-opfer-a-1199095.html; https://www.heise.de/newsticker/meldung/Datenskandal-um-Cambridge-Analytica-Facebook-sieht-sich-als-Opfer-3999922.html.
  18. Taglinger, H.: Ausgevogelt, jetzt wird es ernst (2018) 0.06
    0.055921838 = product of:
      0.111843675 = sum of:
        0.111843675 = sum of:
          0.07732976 = weight(_text_:news in 4281) [ClassicSimilarity], result of:
            0.07732976 = score(doc=4281,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.28956512 = fieldWeight in 4281, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4281)
          0.034513917 = weight(_text_:22 in 4281) [ClassicSimilarity], result of:
            0.034513917 = score(doc=4281,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 4281, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4281)
      0.5 = coord(1/2)
    
    Date
    22. 1.2018 11:38:55
    Source
    https://www.heise.de/tp/news/Ausgevogelt-jetzt-wird-es-ernst-3934458.html?view=print
  19. Wehling, E.: Framing-Manual : Unser gemeinsamer freier Rundfunk ARD (2019) 0.06
    0.055921838 = product of:
      0.111843675 = sum of:
        0.111843675 = sum of:
          0.07732976 = weight(_text_:news in 4997) [ClassicSimilarity], result of:
            0.07732976 = score(doc=4997,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.28956512 = fieldWeight in 4997, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4997)
          0.034513917 = weight(_text_:22 in 4997) [ClassicSimilarity], result of:
            0.034513917 = score(doc=4997,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 4997, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4997)
      0.5 = coord(1/2)
    
    Content
Cf. also the commentary in Open Password, no. 157 of 22.02.2019: "A few days ago the 'Framing Manual' by the linguist Elisabeth Wehling became public, which she produced for the ARD for 120,000 euros and which is continually used in workshops for staff. Since, in Ms Wehling's view, 'objective, fact-based and rational thinking' is not possible, the public contest of opinions can only be about pushing through one's own language rules and winning over recipients through emotionalization and implicit moral judgements. Citizens who do not want to pay the compulsory broadcasting fee, for instance, were to be branded as 'democracy-averse, word-breaking and disloyal fee evaders', and 'our shared free broadcaster ARD' was to be set against the 'media-capitalist locusts'. Meanwhile, Meedia reports a second Relotius case: a freelance journalist who wrote for the Süddeutsche Zeitung, Die Zeit and Der Spiegel and has just been exposed by the SZ over a completely invented story. The 'Framing Manual' strikes me as an even bigger scandal than the case of Claas Relotius, with his many invented stories in Der Spiegel and practically the entire rest of the 'quality press', because the leadership of the ARD, and with it the subordinate structures, welcomed Ms Wehling's manual and continue to use it to indoctrinate their staff. Fake news and the accompanying declaration of war on truth and science do not come from Donald Trump; they are generated in-house and spread unhindered, with the knowledge and consent of those responsible, in our 'quality television'." (W. Bredemeier)
    Date
    22. 2.2019 9:26:20
  20. Zhao, X.; Jin, P.; Yue, L.: Discovering topic time from web news (2015) 0.05
    0.0546804 = product of:
      0.1093608 = sum of:
        0.1093608 = product of:
          0.2187216 = sum of:
            0.2187216 = weight(_text_:news in 2673) [ClassicSimilarity], result of:
              0.2187216 = score(doc=2673,freq=16.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.81901383 = fieldWeight in 2673, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Topic time reflects the temporal feature of topics in Web news pages, which can be used to establish and analyze topic models for many time-sensitive text mining tasks. However, there are two critical challenges in discovering topic time from Web news pages. The first issue is how to normalize different kinds of temporal expressions within a Web news page, e.g., explicit and implicit temporal expressions, into a unified representation framework. The second issue is how to determine the right topic time for topics in Web news. Aiming at solving these two problems, we propose a systematic framework for discovering topic time from Web news. In particular, for the first issue, we propose a new approach that can effectively determine the appropriate referential time for implicit temporal expressions and further present an effective defuzzification algorithm to find the right explanation for a fuzzy temporal expression. For the second issue, we propose a relation model to describe the relationship between news topics and topic time. Based on this model, we design a new algorithm to extract topic time from Web news. We build a prototype system called Topic Time Parser (TTP) and conduct extensive experiments to measure the effectiveness of our proposal. The results suggest that our proposal is effective in both temporal expression normalization and topic time extraction.
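The abstract's first challenge, normalizing explicit and implicit temporal expressions against a referential time, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's TTP system: the function name, the relative-day table, and the ISO-date pattern are all illustrative assumptions.

```python
import re
from datetime import date, timedelta

# Illustrative table of implicit (relative) expressions and their day offsets.
RELATIVE_DAYS = {"today": 0, "yesterday": -1, "tomorrow": 1}

def normalize(expr: str, reference: date) -> date:
    """Resolve an explicit (ISO) or implicit (relative) temporal expression
    to a calendar date, using `reference` as the referential time
    (e.g. the news article's publication date)."""
    expr = expr.strip().lower()
    if expr in RELATIVE_DAYS:  # implicit expression: offset from reference
        return reference + timedelta(days=RELATIVE_DAYS[expr])
    m = re.fullmatch(r"(\d{4})-(\d{2})-(\d{2})", expr)  # explicit ISO date
    if m:
        return date(*map(int, m.groups()))
    raise ValueError(f"unrecognized temporal expression: {expr!r}")

pub = date(2015, 3, 10)  # assumed publication date of the page
print(normalize("yesterday", pub))   # 2015-03-09
print(normalize("2015-03-01", pub))  # 2015-03-01
```

A real system would additionally handle fuzzy expressions ("recently", "early March"), which is what the paper's defuzzification algorithm addresses; that step is omitted here.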

Languages

  • e 550
  • d 211
  • a 1
  • hu 1

Types

  • a 659
  • el 85
  • m 56
  • s 18
  • x 13
  • r 7
  • b 5
  • i 1
  • z 1
