Search (659 results, page 1 of 33)

  • Filter: year_i:[2010 TO 2020}
  • Filter: type_ss:"a"
  1. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.13
    0.1261398 = product of:
      0.2522796 = sum of:
        0.2522796 = sum of:
          0.15465952 = weight(_text_:news in 3582) [ClassicSimilarity], result of:
            0.15465952 = score(doc=3582,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.57913023 = fieldWeight in 3582, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.078125 = fieldNorm(doc=3582)
          0.09762009 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
            0.09762009 = score(doc=3582,freq=4.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.54716086 = fieldWeight in 3582, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=3582)
      0.5 = coord(1/2)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
    Source
    http://www.spektrum.de/news/mathematischer-beweis-ueber-mehrdimensionale-normalverteilungen-gefunden/1450623
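    Example
    The explain tree above is Lucene's ClassicSimilarity breakdown: each matching term contributes queryWeight × fieldWeight, where fieldWeight = tf · idf · fieldNorm and queryWeight = idf · queryNorm, and the sum of the term scores is scaled by the coord factor. The following is a minimal Python sketch that reproduces the numbers shown for result no. 1 (values copied from the tree; an illustration of the formula, not the indexer's code):

```python
import math

def term_score(freq, idf, field_norm, query_norm):
    """One per-term leaf of the explain tree (Lucene ClassicSimilarity)."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    field_weight = tf * idf * field_norm  # e.g. 0.57913023
    query_weight = idf * query_norm       # e.g. 0.26705483
    return query_weight * field_weight

# Factors copied from the breakdown for doc 3582 above
query_norm = 0.05094824
news_clause = term_score(2.0, 5.2416887, 0.078125, query_norm)  # ~0.15465952
num_clause  = term_score(4.0, 3.5018296, 0.078125, query_norm)  # ~0.09762009

score = (news_clause + num_clause) * 0.5  # coord(1/2), as shown above
print(round(score, 7))                    # ~0.1261398, the displayed document score
```

    The same arithmetic applies to every score breakdown in this result list; only freq, idf, and fieldNorm change per term and document.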
  2. Andrade, T.C.; Dodebei, V.: Traces of digitized newspapers and born-digital news sites : a trail to the memory on the internet (2016) 0.09
    0.089474946 = product of:
      0.17894989 = sum of:
        0.17894989 = sum of:
          0.12372762 = weight(_text_:news in 4901) [ClassicSimilarity], result of:
            0.12372762 = score(doc=4901,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.4633042 = fieldWeight in 4901, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0625 = fieldNorm(doc=4901)
          0.055222265 = weight(_text_:22 in 4901) [ClassicSimilarity], result of:
            0.055222265 = score(doc=4901,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.30952093 = fieldWeight in 4901, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4901)
      0.5 = coord(1/2)
    
    Date
    19. 1.2019 17:42:22
  3. epd: Kaiserslauterer Forscher untersuchen Google-Suche (2017) 0.08
    0.084226504 = product of:
      0.16845301 = sum of:
        0.16845301 = sum of:
          0.13393909 = weight(_text_:news in 3815) [ClassicSimilarity], result of:
            0.13393909 = score(doc=3815,freq=6.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.50154155 = fieldWeight in 3815, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3815)
          0.034513917 = weight(_text_:22 in 3815) [ClassicSimilarity], result of:
            0.034513917 = score(doc=3815,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 3815, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3815)
      0.5 = coord(1/2)
    
    Content
    "Bei der Suche nach Politikern und Parteien über Suchmaschinen wie Google spielt Personalisierung einem Forschungsprojekt zufolge eine geringere Rolle als bisher angenommen. Bei der Eingabe von Politikernamen erhalten verschiedene Nutzer größtenteils die gleichen Ergebnisse angezeigt, lautet ein gestern veröffentlichtes Zwischenergebnis einer Analyse im Auftrag der Landesmedienanstalten. Die Ergebnisse stammen aus dem Forschungsprojekt "#Datenspende: Google und die Bundestagswahl2017" der Initiative AIgorithmWatch und der Technischen Universität Kaiserslautern. Im Durchschnitt erhalten zwei unterschiedliche Nutzer demnach bei insgesamt neun Suchergebnissen sieben bis acht identische Treffer, wenn sie mit Google nach Spitzenkandidaten der Parteien im Bundestagswahlkampf suchen. Die Suchergebnisse zu Parteien unterscheiden sich allerdings stärker. Bei neun Suchanfragen gebe es hier nur fünf bis sechs gemeinsame Suchergebnisse, fanden die Wissenschaftler heraus. Die Informatikprofessorin Katharina Zweig von der TU Kaiserslautern zeigte sich überrascht, dass die Suchergebisse verschiedener Nutzer sich so wenig unterscheiden. "Das könnte allerdings morgen schon wieder anders aussehen", warnte sie, Die Studie beweise erstmals, dass es grundsätzlich möglich sei, Algorithmen von Intermediären wie Suchmaschinen im Verdachtsfall nachvollziehbar zu machen. Den Ergebnissen zufolge gibt es immer wieder kleine Nutzergruppen mit stark abweichenden Ergebnislisten. Eine abschließende, inhaltliche Bewertung stehe noch aus. Für das Projekt haben nach Angaben der Medienanstalt bisher fast 4000 freiwillige Nutzer ein von den Forschern programmiertes Plug-ln auf ihrem Computer- installiert. Bisher seien damitdrei Millionen gespendete Datensätze gespeichert worden. Das Projekt wird finanziert von den Landesmedienanstalten Bayern, Berlin-Brandenburg, Hessen, Rheinland-Pfalz, Saarland und Sachsen." Vgl. auch: https://www.swr.de/swraktuell/rp/kaiserslautern/forschung-in-kaiserslautern-beeinflusst-google-die-bundestagswahl/-/id=1632/did=20110680/nid=1632/1mohmie/index.html. https://www.uni-kl.de/aktuelles/news/news/detail/News/aufruf-zur-datenspende-welche-nachrichten-zeigt-die-suchmaschine-google-zur-bundestagswahl-an/.
    Date
    22. 7.2004 9:42:33
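    Example
    The overlap figure quoted above (seven to eight identical results out of nine) is easy to reproduce once result lists have been collected. A small sketch with hypothetical URLs, assuming order is ignored and only shared entries are counted:

```python
def shared_results(results_a, results_b):
    """Count URLs two users' result lists have in common (order ignored)."""
    return len(set(results_a) & set(results_b))

# Hypothetical top-9 result lists for the same candidate query from two users
user_a = [f"https://example.org/result/{i}" for i in range(9)]
user_b = [f"https://example.org/result/{i}" for i in range(2, 11)]

print(shared_results(user_a, user_b), "of 9 results shared")  # 7 of 9 results shared
```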
  4. Hills, T.; Segev, E.: ¬The news is American but our memories are - Chinese? (2014) 0.07
    0.07233536 = product of:
      0.14467072 = sum of:
        0.14467072 = product of:
          0.28934145 = sum of:
            0.28934145 = weight(_text_:news in 1342) [ClassicSimilarity], result of:
              0.28934145 = score(doc=1342,freq=28.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                1.0834534 = fieldWeight in 1342, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1342)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Are our memories of the world well described by the international news coverage in our country? If so, sources central to international news may also be central to international recall patterns; in particular, they may reflect an American-centric focus, given the previously proposed central U.S. position in the news marketplace. We asked people of four different nationalities (China, Israel, Switzerland, and the United States) to list all the countries they could name. We also constructed a network representation of the world for each nation based on the co-occurrence pattern of countries in the news. To compare news and memories, we developed a computational model that predicts the recall order of countries based on the news networks. Consistent with previous reports, the U.S. news was central to the news networks overall. However, although national recall patterns reflected their corresponding national news sources, the Chinese news was substantially better than other national news sources at predicting both individual and aggregate memories across nations. Our results suggest that news and memories are related but may also reflect biases in the way information is transferred to long-term memory, potentially biased against the transient coverage of more "free" presses. We discuss possible explanations for this "Chinese news effect" in relation to prominent cognitive and communications theories.
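    Example
    The first step the abstract describes, a network of countries built from their co-occurrence in news items, can be sketched as follows (toy data; the authors' corpus and recall model are not reproduced here):

```python
from collections import Counter
from itertools import combinations

# Toy news items, each reduced to the set of countries it mentions
articles = [
    {"USA", "China", "Israel"},
    {"USA", "Switzerland"},
    {"China", "Switzerland", "USA"},
]

# Edge weight = number of articles in which two countries co-occur
cooccurrence = Counter()
for countries in articles:
    for pair in combinations(sorted(countries), 2):
        cooccurrence[pair] += 1

# A crude centrality: total co-occurrence weight attached to each country
centrality = Counter()
for (a, b), weight in cooccurrence.items():
    centrality[a] += weight
    centrality[b] += weight

print(cooccurrence.most_common(3))
print(centrality.most_common(1))  # [('USA', 5)] -- the most central node in this toy network
```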
  5. Costas, R.; Zahedi, Z.; Wouters, P.: ¬The thematic orientation of publications mentioned on social media : large-scale disciplinary comparison of social media metrics with citations (2015) 0.07
    0.07193736 = product of:
      0.14387472 = sum of:
        0.14387472 = sum of:
          0.1093608 = weight(_text_:news in 2598) [ClassicSimilarity], result of:
            0.1093608 = score(doc=2598,freq=4.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.40950692 = fieldWeight in 2598, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2598)
          0.034513917 = weight(_text_:22 in 2598) [ClassicSimilarity], result of:
            0.034513917 = score(doc=2598,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 2598, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2598)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to analyze the disciplinary orientation of scientific publications that were mentioned on different social media platforms, focussing on their differences and similarities with citation counts. Design/methodology/approach - Social media metrics and readership counts, associated with 500,216 publications and their citation data from the Web of Science database, were collected from Altmetric.com and Mendeley. Results are presented through descriptive statistical analyses together with science maps generated with VOSviewer. Findings - The results confirm Mendeley as the most prevalent social media source with similar characteristics to citations in their distribution across fields and their density in average values per publication. The humanities, natural sciences, and engineering disciplines have a much lower presence of social media metrics. Twitter has a stronger focus on general medicine and social sciences. Other sources (blog, Facebook, Google+, and news media mentions) are more prominent in regards to multidisciplinary journals. Originality/value - This paper reinforces the relevance of Mendeley as a social media source for analytical purposes from a disciplinary perspective, being particularly relevant for the social sciences (together with Twitter). Key implications for the use of social media metrics on the evaluation of research performance (e.g. the concentration of some social media metrics, such as blogs, news items, etc., around multidisciplinary journals) are identified.
    Date
    20. 1.2015 18:30:22
  6. Arapakis, I.; Cambazoglu, B.B.; Lalmas, M.: On the feasibility of predicting popular news at cold start (2017) 0.07
    0.06970411 = product of:
      0.13940822 = sum of:
        0.13940822 = product of:
          0.27881643 = sum of:
            0.27881643 = weight(_text_:news in 3595) [ClassicSimilarity], result of:
              0.27881643 = score(doc=3595,freq=26.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                1.0440419 = fieldWeight in 3595, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3595)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Prominent news sites on the web provide hundreds of news articles daily. The abundance of news content competing to attract online attention, coupled with the manual effort involved in article selection, necessitates the timely prediction of future popularity of these news articles. The future popularity of a news article can be estimated using signals indicating the article's penetration in social media (e.g., number of tweets) in addition to traditional web analytics (e.g., number of page views). In practice, it is important to make such estimations as early as possible, preferably before the article is made available on the news site (i.e., at cold start). In this paper we perform a study on cold-start news popularity prediction using a collection of 13,319 news articles obtained from Yahoo News, a major news provider. We characterize the popularity of news articles through a set of online metrics and try to predict their values across time using machine learning techniques on a large collection of features obtained from various sources. Our findings indicate that predicting news popularity at cold start is a difficult task, contrary to the findings of a prior work on the same topic. Most articles' popularity may not be accurately anticipated solely on the basis of content features, without having the early-stage popularity values.
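    Example
    A hedged sketch of the cold-start setup described above: popularity is predicted from content-only features with a standard regressor (scikit-learn assumed; feature names and data are placeholders, not the authors' feature set):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder content features available before publication
# (e.g. headline length, number of named entities, topic indicator)
X = rng.random((1000, 3))
# Placeholder popularity target, e.g. page views in the first hour
y = 100 * X[:, 0] + rng.normal(0, 50, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# With noisy, content-only features the fit stays weak, which mirrors the
# paper's conclusion that cold-start popularity prediction is difficult
print(round(r2_score(y_test, model.predict(X_test)), 2))
```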
  7. Sela, M.; Lavie, T.; Inbar, O.; Oppenheim, I.; Meyer, J.: Personalizing news content : an experimental study (2015) 0.07
    0.06696954 = product of:
      0.13393909 = sum of:
        0.13393909 = product of:
          0.26787817 = sum of:
            0.26787817 = weight(_text_:news in 1604) [ClassicSimilarity], result of:
              0.26787817 = score(doc=1604,freq=24.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                1.0030831 = fieldWeight in 1604, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1604)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The delivery of personalized news content depends on the ability to predict user interests. We evaluated different methods for acquiring user profiles based on declared and actual interest in various news topics and items. In an experiment, 36 students rated their interest in six news topics and in specific news items and read on 6 days standard, nonpersonalized editions and personalized (basic or adaptive) news editions. We measured subjective satisfaction with the editions and expressed preferences, along with objective measures, to infer actual interest in items. Users' declared interest in news topics did not strongly predict their actual interest in specific news items. Satisfaction with all news editions was high, but participants preferred the personalized editions. User interest was weakly correlated with reading duration, article length, and reading order. Different measures predicted interest in different news topics. Explicit measures predicted interest in relatively clearly defined topics such as sports, but were less appropriate for broader topics such as science and technology. Our results indicate that explicit and implicit methods should be combined to generate user profiles. We suggest that a personalized newspaper should contain both general information and personalized items, selected based on specific combinations of measures for each of the different news topics. Based on the findings, we present a general model to decide on the personalization of news content to generate personalized editions for readers.
  8. Hajibayova, L.; Jacob, E.K.: Investigation of levels of abstraction in user-generated tagging vocabularies : a case of wild or tamed categorization? (2014) 0.06
    0.0630699 = product of:
      0.1261398 = sum of:
        0.1261398 = sum of:
          0.07732976 = weight(_text_:news in 1451) [ClassicSimilarity], result of:
            0.07732976 = score(doc=1451,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.28956512 = fieldWeight in 1451, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1451)
          0.048810046 = weight(_text_:22 in 1451) [ClassicSimilarity], result of:
            0.048810046 = score(doc=1451,freq=4.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.27358043 = fieldWeight in 1451, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1451)
      0.5 = coord(1/2)
    
    Abstract
    Previous studies of user-generated vocabularies (e.g., Golder & Huberman, 2006; Munk & Mork, 2007b; Yoon, 2009) have proposed that a primary source of tag agreement across users is due to widespread use of tags at the basic level of abstraction. However, an investigation of levels of abstraction in user-generated tagging vocabularies did not support this notion. This study analyzed approximately 8000 tags generated by 40 subjects. Analysis of 7617 tags assigned to 36 online resources representing four content categories (TOOL, FRUIT, CLOTHING, VEHICLE) and three resource genres (news article, blog, ecommerce) did not find statistically significant preferences in the assignment of tags at the superordinate, subordinate or basic levels of abstraction. Within the framework of Heidegger's (1953/1996) notion of handiness, observed variations in the preferred level of abstraction are both natural and phenomenological in that perception and understanding -- and thus the meaning of "things" -- arise out of the individual's contextualized experiences of engaging with objects. Operationalization of superordinate, subordinate and basic levels of abstraction using Heidegger's notion of handiness may be able to account for differences in the everyday experiences and activities of taggers, thereby leading to a better understanding of user-generated tagging vocabularies.
    Date
    5. 9.2014 16:22:27
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  9. Dupont, J.: Falsch! (2017) 0.06
    0.06186381 = product of:
      0.12372762 = sum of:
        0.12372762 = product of:
          0.24745524 = sum of:
            0.24745524 = weight(_text_:news in 3470) [ClassicSimilarity], result of:
              0.24745524 = score(doc=3470,freq=8.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.9266084 = fieldWeight in 3470, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3470)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Fabricated reports are a way to make a lot of money and to stir up political sentiment on the internet. How can they be recognized? A contribution on the subject of fake news. Conclusion: "There is no recipe yet for spotting fake news, but Bader and Rinsdorf are looking for one. At the moment they are analyzing thousands of fake news items for conspicuous textual features. The next step is to program algorithms that recognize within seconds how likely it is that a report is fabricated. Bader and Rinsdorf: 'Anyone who makes use of the diversity of news sources, stays broadly informed, and approaches information on the net with a certain basic skepticism can see through fake news more quickly.'"
  10. Arapakis, I.; Lalmas, M.; Cambazoglu, B.B.; Marcos, M.-C.; Jose, J.M.: User engagement in online news : under the scope of sentiment, interest, affect, and gaze (2014) 0.06
    0.057997324 = product of:
      0.11599465 = sum of:
        0.11599465 = product of:
          0.2319893 = sum of:
            0.2319893 = weight(_text_:news in 1497) [ClassicSimilarity], result of:
              0.2319893 = score(doc=1497,freq=18.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.8686954 = fieldWeight in 1497, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1497)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Online content providers, such as news portals and social media platforms, constantly seek new ways to attract large shares of online attention by keeping their users engaged. A common challenge is to identify which aspects of online interaction influence user engagement the most. In this article, through an analysis of a news article collection obtained from Yahoo News US, we demonstrate that news articles exhibit considerable variation in terms of the sentimentality and polarity of their content, depending on factors such as news provider and genre. Moreover, through a laboratory study, we observe the effect of sentimentality and polarity of news and comments on a set of subjective and objective measures of engagement. In particular, we show that attention, affect, and gaze differ across news of varying interestingness. As part of our study, we also explore methods that exploit the sentiments expressed in user comments to reorder the lists of comments displayed in news pages. Our results indicate that user engagement can be anticipated if we account for the sentimentality and polarity of the content as well as other factors that drive attention and inspire human curiosity.
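    Example
    One component mentioned in the abstract, reordering the comment list of an article by the sentiment expressed in each comment, might look like this in outline (hypothetical polarity scores; the authors' sentiment model is not specified here):

```python
# Hypothetical comments with a sentiment polarity in [-1, 1]
comments = [
    {"text": "Great reporting!", "polarity": 0.8},
    {"text": "This is misleading.", "polarity": -0.6},
    {"text": "Interesting, thanks for the link.", "polarity": 0.4},
]

# Most positive first; other orderings (most negative, most extreme) work the same way
by_positivity = sorted(comments, key=lambda c: c["polarity"], reverse=True)
by_intensity = sorted(comments, key=lambda c: abs(c["polarity"]), reverse=True)

for comment in by_positivity:
    print(f'{comment["polarity"]:+.1f}  {comment["text"]}')
```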
  11. Aranyi, G.; Schaik, P. van: Testing a model of user-experience with news websites : how research questions evolve (2016) 0.06
    0.057997324 = product of:
      0.11599465 = sum of:
        0.11599465 = product of:
          0.2319893 = sum of:
            0.2319893 = weight(_text_:news in 3009) [ClassicSimilarity], result of:
              0.2319893 = score(doc=3009,freq=18.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.8686954 = fieldWeight in 3009, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3009)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Although the Internet has become a major source for accessing news, there is little research regarding users' experience with news sites. We conducted an experiment to test a comprehensive model of user experience with news sites that was developed previously by means of an online survey. Level of adoption (novel or adopted site) was controlled with a between-subjects manipulation. We collected participants' answers to psychometric scales at 2 times: after presentation of 5 screenshots of a news site and directly after 10 minutes of hands-on experience with the site. The model was extended with the prediction of users' satisfaction with news sites as a high-level design goal. A psychometric measure of trust in news providers was developed and added to the model to better predict people's intention to use particular news sites. The model presented in this article represents a theoretically founded, empirically tested basis for evaluating news websites, and it holds theoretical relevance to user-experience research in general. Finally, the findings and the model are applied to provide practical guidance in design prioritization.
  12. Bünte, O.: Bundesdatenschutzbeauftragte bezweifelt Facebooks Datenschutzversprechen (2018) 0.06
    0.055921838 = product of:
      0.111843675 = sum of:
        0.111843675 = sum of:
          0.07732976 = weight(_text_:news in 4180) [ClassicSimilarity], result of:
            0.07732976 = score(doc=4180,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.28956512 = fieldWeight in 4180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4180)
          0.034513917 = weight(_text_:22 in 4180) [ClassicSimilarity], result of:
            0.034513917 = score(doc=4180,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 4180, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4180)
      0.5 = coord(1/2)
    
    Date
    23. 3.2018 13:41:22
    Footnote
    For background, see also: https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election; https://www.nytimes.com/2018/03/18/us/cambridge-analytica-facebook-privacy-data.html; http://www.latimes.com/business/la-fi-tn-facebook-cambridge-analytica-sued-20180321-story.html; https://www.tagesschau.de/wirtschaft/facebook-cambridge-analytica-103.html; http://www.spiegel.de/netzwelt/web/cambridge-analytica-der-eigentliche-skandal-liegt-im-system-facebook-kolumne-a-1199122.html; http://www.spiegel.de/netzwelt/netzpolitik/cambridge-analytica-facebook-sieht-sich-im-datenskandal-als-opfer-a-1199095.html; https://www.heise.de/newsticker/meldung/Datenskandal-um-Cambridge-Analytica-Facebook-sieht-sich-als-Opfer-3999922.html.
  13. Taglinger, H.: Ausgevogelt, jetzt wird es ernst (2018) 0.06
    0.055921838 = product of:
      0.111843675 = sum of:
        0.111843675 = sum of:
          0.07732976 = weight(_text_:news in 4281) [ClassicSimilarity], result of:
            0.07732976 = score(doc=4281,freq=2.0), product of:
              0.26705483 = queryWeight, product of:
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.05094824 = queryNorm
              0.28956512 = fieldWeight in 4281, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2416887 = idf(docFreq=635, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4281)
          0.034513917 = weight(_text_:22 in 4281) [ClassicSimilarity], result of:
            0.034513917 = score(doc=4281,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.19345059 = fieldWeight in 4281, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4281)
      0.5 = coord(1/2)
    
    Date
    22. 1.2018 11:38:55
    Source
    https://www.heise.de/tp/news/Ausgevogelt-jetzt-wird-es-ernst-3934458.html?view=print
  14. Zhao, X.; Jin, P.; Yue, L.: Discovering topic time from web news (2015) 0.05
    0.0546804 = product of:
      0.1093608 = sum of:
        0.1093608 = product of:
          0.2187216 = sum of:
            0.2187216 = weight(_text_:news in 2673) [ClassicSimilarity], result of:
              0.2187216 = score(doc=2673,freq=16.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.81901383 = fieldWeight in 2673, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Topic time reflects the temporal feature of topics in Web news pages, which can be used to establish and analyze topic models for many time-sensitive text mining tasks. However, there are two critical challenges in discovering topic time from Web news pages. The first issue is how to normalize different kinds of temporal expressions within a Web news page, e.g., explicit and implicit temporal expressions, into a unified representation framework. The second issue is how to determine the right topic time for topics in Web news. Aiming at solving these two problems, we propose a systematic framework for discovering topic time from Web news. In particular, for the first issue, we propose a new approach that can effectively determine the appropriate referential time for implicit temporal expressions and further present an effective defuzzification algorithm to find the right explanation for a fuzzy temporal expression. For the second issue, we propose a relation model to describe the relationship between news topics and topic time. Based on this model, we design a new algorithm to extract topic time from Web news. We build a prototype system called Topic Time Parser (TTP) and conduct extensive experiments to measure the effectiveness of our proposal. The results suggest that our proposal is effective in both temporal expression normalization and topic time extraction.
  15. Lehmann, J.; Castillo, C.; Lalmas, M.; Baeza-Yates, R.: Story-focused reading in online news and its potential for user engagement (2017) 0.05
    0.0546804 = product of:
      0.1093608 = sum of:
        0.1093608 = product of:
          0.2187216 = sum of:
            0.2187216 = weight(_text_:news in 3529) [ClassicSimilarity], result of:
              0.2187216 = score(doc=3529,freq=16.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.81901383 = fieldWeight in 3529, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3529)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We study the news reading behavior of several hundred thousand users on 65 highly visited news sites. We focus on a specific phenomenon: users reading several articles related to a particular news development, which we call story-focused reading. Our goal is to understand the effect of story-focused reading on user engagement and how news sites can support this phenomenon. We found that most users focus on stories that interest them and that even casual news readers engage in story-focused reading. During story-focused reading, users spend more time reading and a larger number of news sites are involved. In addition, readers employ different strategies to find articles related to a story. We also analyze how news sites promote story-focused reading by looking at how they link their articles to related content published by them, or by other sources. The results show that providing links to related content leads to a higher engagement of the users, and that this is the case even for links to external sites. We also show that the performance of links can be affected by their type, their position, and how many of them are present within an article.
  16. Graf, A.: Vorsicht Falle (2019) 0.05
    0.0546804 = product of:
      0.1093608 = sum of:
        0.1093608 = product of:
          0.2187216 = sum of:
            0.2187216 = weight(_text_:news in 5222) [ClassicSimilarity], result of:
              0.2187216 = score(doc=5222,freq=4.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.81901383 = fieldWeight in 5222, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5222)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Nothing but fake news everywhere? The mantra of dangerous manipulation on the net is itself dangerous, researchers say. Why disinformation is so hard to pin down.
    Series
    Thema: Fake news
  17. Montalvo, S.; Martínez, R.; Fresno, V.; Delgado, A.: Exploiting named entities for bilingual news clustering (2015) 0.05
    0.051874384 = product of:
      0.10374877 = sum of:
        0.10374877 = product of:
          0.20749754 = sum of:
            0.20749754 = weight(_text_:news in 1642) [ClassicSimilarity], result of:
              0.20749754 = score(doc=1642,freq=10.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.7769848 = fieldWeight in 1642, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1642)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In this article, we present a new algorithm for clustering a bilingual collection of comparable news items in groups of specific topics. Our hypothesis is that named entities (NEs) are more informative than other features in the news when clustering fine grained topics. The algorithm does not need as input any information related to the number of clusters, and carries out the clustering only based on information regarding the shared named entities of the news items. This proposal is evaluated using different data sets and outperforms other state-of-the-art algorithms, thereby proving the plausibility of the approach. In addition, because the applicability of our approach depends on the possibility of identifying equivalent named entities among the news, we propose a heuristic system to identify equivalent named entities in the same and different languages, thereby obtaining good performance.
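    Example
    A rough sketch of the core idea: cluster news items purely by the named entities they share, without fixing the number of clusters in advance. The greedy pass and the 0.3 threshold below are illustrative assumptions, not the authors' algorithm:

```python
# Toy news items represented only by their extracted named entities
items = {
    "es_1": {"Real Madrid", "Champions League"},
    "en_1": {"Real Madrid", "Champions League", "Bernabeu"},
    "es_2": {"Angela Merkel", "Bundestag"},
    "en_2": {"Angela Merkel", "CDU"},
}

def entity_overlap(a, b):
    """Jaccard similarity of two named-entity sets."""
    return len(a & b) / len(a | b)

# Greedy pass: join the first cluster that shares enough entities, else open a
# new one; the number of clusters is never given as input
clusters = []
for doc_id, entities in items.items():
    for cluster in clusters:
        if entity_overlap(entities, cluster["entities"]) >= 0.3:
            cluster["docs"].append(doc_id)
            cluster["entities"] |= entities
            break
    else:
        clusters.append({"docs": [doc_id], "entities": set(entities)})

print([c["docs"] for c in clusters])  # [['es_1', 'en_1'], ['es_2', 'en_2']]
```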
  18. Aranyi, G.; Schaik, P. van: Modeling user experience with news websites (2015) 0.05
    0.051874384 = product of:
      0.10374877 = sum of:
        0.10374877 = product of:
          0.20749754 = sum of:
            0.20749754 = weight(_text_:news in 2332) [ClassicSimilarity], result of:
              0.20749754 = score(doc=2332,freq=10.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.7769848 = fieldWeight in 2332, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2332)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Although news websites are used by a large and increasing number of people, there is a lack of research within human-computer interaction regarding users' experience with this type of interactive technology. In the current research, existing measures of user-experience factors were identified and, using an online survey, answers to psychometric scales to measure website characteristics, need fulfillment, affective reactions, and constructs of technology acceptance and user experience were collected from regular users of news sites. A comprehensive user-experience model was formulated to explain acceptance and quality judgments of news sites. The main contribution of the current study is the application of influential models of user experience and technology acceptance to the domain of online news. By integrating both types of variable in a comprehensive model, the relationships between the types of variable are clarified both theoretically and empirically. Implications of the model for theory, further research, and system design are discussed.
  19. Kanan, T.; Fox, E.A.: Automated arabic text classification with P-Stemmer, machine learning, and a tailored news article taxonomy (2016) 0.05
    0.05114883 = product of:
      0.10229766 = sum of:
        0.10229766 = product of:
          0.20459533 = sum of:
            0.20459533 = weight(_text_:news in 3151) [ClassicSimilarity], result of:
              0.20459533 = score(doc=3151,freq=14.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.76611733 = fieldWeight in 3151, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3151)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Arabic news articles in electronic collections are difficult to study. Browsing by category is rarely supported. Although helpful machine-learning methods have been applied successfully to similar situations for English news articles, limited research has been completed to yield suitable solutions for Arabic news. In connection with a Qatar National Research Fund (QNRF)-funded project to build digital library community and infrastructure in Qatar, we developed software for browsing a collection of about 237,000 Arabic news articles, which should be applicable to other Arabic news collections. We designed a simple taxonomy for Arabic news stories that is suitable for the needs of Qatar and other nations, is compatible with the subject codes of the International Press Telecommunications Council, and was enhanced with the aid of a librarian expert as well as five Arabic-speaking volunteers. We developed tailored stemming (i.e., a new Arabic light stemmer called P-Stemmer) and automatic classification methods (the best being binary Support Vector Machines classifiers) to work with the taxonomy. Using evaluation techniques commonly used in the information retrieval community, including 10-fold cross-validation and the Wilcoxon signed-rank test, we showed that our approach to stemming and classification is superior to state-of-the-art techniques.
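    Example
    A minimal sketch of the classification setup named in the abstract, binary SVM classifiers evaluated with 10-fold cross-validation (scikit-learn assumed; the P-Stemmer, the taxonomy, and the Arabic corpus are replaced by placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus: pre-stemmed article texts and a binary label
# for one taxonomy category (e.g. sports vs. not sports)
texts = ["stemmed sports article text", "stemmed politics article text"] * 50
labels = [1, 0] * 50

classifier = make_pipeline(TfidfVectorizer(), LinearSVC())

# 10-fold cross-validation, as in the evaluation described above
scores = cross_val_score(classifier, texts, labels, cv=10)
print(round(scores.mean(), 3))
```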
  20. Wilkinson, D.; Thelwall, M.: Trending Twitter topics in English : an international comparison (2012) 0.05
    0.047354616 = product of:
      0.09470923 = sum of:
        0.09470923 = product of:
          0.18941846 = sum of:
            0.18941846 = weight(_text_:news in 375) [ClassicSimilarity], result of:
              0.18941846 = score(doc=375,freq=12.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.7092868 = fieldWeight in 375, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=375)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The worldwide span of the microblogging service Twitter provides an opportunity to make international comparisons of trending topics of interest, such as news stories. Previous international comparisons of news interests have tended to use surveys and may bypass topics not well covered in the mainstream media. This study uses 9 months of English-language Tweets from the United Kingdom, United States, India, South Africa, New Zealand, and Australia. Based upon the top 50 trending keywords in each country from the 0.5 billion Tweets collected, festivals or religious events are the most common, followed by media events, politics, human interest, and sports. U.S. trending topics have the most interest in the other countries and Indian trending topics the least. Conversely, India is the most interested in other countries' trending topics and the United States the least. This gives evidence of an international hierarchy of perceived importance or relevance with some issues, such as the international interest in U.S. Thanksgiving celebrations, apparently not being directly driven by the media. This hierarchy echoes, and may be caused by, similar news coverage trends. Although the current imbalanced international news coverage does not seem to be out of step with public news interests, the political implication is that the Twitter-using public reflects, and hence seems to implicitly accept, international imbalances in news media agenda setting rather than combating them. This is an issue for those believing that these imbalances make the media too powerful.

Languages

  • e 501
  • d 156
  • a 1

Types

  • el 51
  • b 4