Search (13 results, page 1 of 1)

  • theme_ss:"Informationsmittel"
  • type_ss:"a"
  • year_i:[2010 TO 2020}
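  The facet chips above act as filter queries in what appears to be a Solr/Lucene-backed catalogue; the range year_i:[2010 TO 2020} uses Lucene syntax in which [ makes the lower bound inclusive and } makes the upper bound exclusive, i.e. publication years 2010-2019. The sketch below shows roughly how such a filtered request could be issued; the host, core name, and query term are assumptions for illustration, while the field names and range syntax are taken from the facets above.

```python
# Sketch only: an approximate request behind a result page like this one,
# assuming a Solr-style backend. Host, core name, and the query term
# "Wikipedia" are hypothetical; field names and range syntax come from the
# facet chips above.
import requests

params = {
    "q": "Wikipedia",                       # assumed user query
    "fq": [                                 # one filter query per active facet
        'theme_ss:"Informationsmittel"',
        'type_ss:"a"',
        "year_i:[2010 TO 2020}",            # [ inclusive 2010 ... } exclusive 2020
    ],
    "rows": 20,
    "wt": "json",
    "debugQuery": "true",                   # asks Solr to return score explanations
}
resp = requests.get("http://localhost:8983/solr/catalog/select", params=params)
print(resp.json()["response"]["numFound"])  # total number of hits
```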
  1. Luyt, B.: Wikipedia, collective memory, and the Vietnam war (2016) 0.02
    0.023077954 = product of:
      0.092311814 = sum of:
        0.03307401 = weight(_text_:libraries in 3054) [ClassicSimilarity], result of:
          0.03307401 = score(doc=3054,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.25406548 = fieldWeight in 3054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3054)
        0.059237804 = weight(_text_:case in 3054) [ClassicSimilarity], result of:
          0.059237804 = score(doc=3054,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.34001783 = fieldWeight in 3054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3054)
      0.25 = coord(2/8)
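    The indented block above is Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking: for each matching term, queryWeight = idf * queryNorm and fieldWeight = sqrt(tf) * idf * fieldNorm are multiplied, the per-term contributions are summed, and the sum is scaled by the coordination factor coord(matching clauses / total clauses). The same pattern recurs in every hit below. A minimal sketch that re-derives this hit's score from the numbers printed above (function and variable names are illustrative, not the engine's):

```python
# Minimal sketch (not the engine's code) re-deriving the score of hit 1 from
# the ClassicSimilarity explanation above.
import math

def term_score(freq, idf, query_norm, field_norm):
    """Contribution of one matching term under Lucene's ClassicSimilarity."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq = 2.0
    query_weight = idf * query_norm       # e.g. 3.2850544 * 0.03962768 = 0.13017908
    field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 3.2850544 * 0.0546875
    return query_weight * field_weight

QUERY_NORM = 0.03962768   # same for every term of the query
FIELD_NORM = 0.0546875    # length normalisation of doc 3054

libraries = term_score(2.0, 3.2850544, QUERY_NORM, FIELD_NORM)  # ~0.03307401
case      = term_score(2.0, 4.3964143, QUERY_NORM, FIELD_NORM)  # ~0.05923780

coord = 2 / 8   # only 2 of the 8 query clauses matched this document
print((libraries + case) * coord)        # ~0.023077954, displayed above as 0.02
```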
    
    Abstract
    Wikipedia is increasingly an important source of information for many. Hence, it is important to develop an understanding of how it is situated within society and the wider roles it is called on to perform. This article argues that one of these roles is as a depository of collective memory. Building on the work of Pentzold, I present a case study of the English Wikipedia article on the Vietnam War to demonstrate that the article, or more accurately, its talk pages, provide a forum for the contestation of collective memory. I further argue that this function is one that should be supported by libraries as they position themselves within a rapidly changing digital world.
  2. Okoli, C.; Mehdi, M.; Mesgari, M.; Nielsen, F.A.; Lanamäki, A.: Wikipedia in the eyes of its beholders : a systematic review of scholarly research on Wikipedia readers and readership (2014) 0.02
    0.018814957 = product of:
      0.07525983 = sum of:
        0.05915282 = weight(_text_:studies in 1540) [ClassicSimilarity], result of:
          0.05915282 = score(doc=1540,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.37408823 = fieldWeight in 1540, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=1540)
        0.01610701 = product of:
          0.03221402 = sum of:
            0.03221402 = weight(_text_:22 in 1540) [ClassicSimilarity], result of:
              0.03221402 = score(doc=1540,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.23214069 = fieldWeight in 1540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1540)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Hundreds of scholarly studies have investigated various aspects of Wikipedia. Although a number of literature reviews have provided overviews of this vast body of research, none has specifically focused on the readers of Wikipedia and issues concerning its readership. In this systematic literature review, we review 99 studies to synthesize current knowledge regarding the readership of Wikipedia and provide an analysis of research methods employed. The scholarly research has found that Wikipedia is popular not only for lighter topics such as entertainment but also for more serious topics such as health and legal information. Scholars, librarians, and students are common users, and Wikipedia provides a unique opportunity for educating students in digital literacy. We conclude with a summary of key findings, implications for researchers, and implications for the Wikipedia community.
    Date
    18.11.2014 13:22:03
  3. Kohn, R.S.: Of Descartes and of train schedules : Evaluating the Encyclopedia Judaica, Wikipedia, and other general and Jewish Studies encyclopedias (2010) 0.01
    0.0061617517 = product of:
      0.049294014 = sum of:
        0.049294014 = weight(_text_:studies in 3633) [ClassicSimilarity], result of:
          0.049294014 = score(doc=3633,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.3117402 = fieldWeight in 3633, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3633)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - The purpose of this paper is to discuss the second edition of the Encyclopaedia Judaica (2007) within the broader historical context of encyclopedia production in the twentieth and twenty-first centuries. The paper contrasts the 2007 edition of the Encyclopaedia Judaica with the Jewish Encyclopedia published between 1901 and 1905 and with the first edition of the Encyclopaedia Judaica published in 1972; it then contrasts the 2007 edition with Wikipedia and other online encyclopedia projects.
    Design/methodology/approach - The paper provides a personal reflective review of the sources in question.
    Findings - The Encyclopaedia Judaica in its latest edition does not adequately replace the original first edition in terms of depth of scholarly work. The model offered by Wikipedia could work well for the Encyclopaedia Judaica, allowing it to retain its core of expert knowledge while channeling the energy of the volunteer editors that has made Wikipedia such a success.
    Practical implications - The paper is of interest to those working in encyclopedia design or Jewish studies.
    Originality/value - The paper provides a unique reflection on the latest edition of the encyclopedia and considers future models for its publication based on traditional and non-traditional methods.
  4. Bhavnani, S.K.; Peck, F.A.: Scatter matters : regularities and implications for the scatter of healthcare information on the Web (2010) 0.01
    0.00522842 = product of:
      0.04182736 = sum of:
        0.04182736 = weight(_text_:studies in 3433) [ClassicSimilarity], result of:
          0.04182736 = score(doc=3433,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 3433, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=3433)
      0.125 = coord(1/8)
    
    Abstract
    Despite the development of huge healthcare Web sites and powerful search engines, many searchers end their searches prematurely with incomplete information. Recent studies suggest that users often retrieve incomplete information because of the complex scatter of relevant facts about a topic across Web pages. However, little is understood about regularities underlying such information scatter. To probe regularities within the scatter of facts across Web pages, this article presents the results of two analyses: (a) a cluster analysis of Web pages that reveals the existence of three page clusters that vary in information density and (b) a content analysis that suggests the role each of the above-mentioned page clusters play in providing comprehensive information. These results provide implications for the design of Web sites, search tools, and training to help users find comprehensive information about a topic and for a hypothesis describing the underlying mechanisms causing the scatter. We conclude by briefly discussing how the analysis of information scatter, at the granularity of facts, complements existing theories of information-seeking behavior.
  5. Mesgari, M.; Okoli, C.; Mehdi, M.; Nielsen, F.A.; Lanamäki, A.: "The sum of all human knowledge" : a systematic review of scholarly research on the content of Wikipedia (2015) 0.00
    0.0043570166 = product of:
      0.034856133 = sum of:
        0.034856133 = weight(_text_:studies in 1624) [ClassicSimilarity], result of:
          0.034856133 = score(doc=1624,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 1624, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1624)
      0.125 = coord(1/8)
    
    Abstract
    Wikipedia may be the best-developed attempt thus far to gather all human knowledge in one place. Its accomplishments in this regard have made it a point of inquiry for researchers from different fields of knowledge. A decade of research has thrown light on many aspects of the Wikipedia community, its processes, and its content. However, due to the variety of fields inquiring about Wikipedia and the limited synthesis of the extensive research, there is little consensus on many aspects of Wikipedia's content as an encyclopedic collection of human knowledge. This study addresses the issue by systematically reviewing 110 peer-reviewed publications on Wikipedia content, summarizing the current findings, and highlighting the major research trends. Two major streams of research are identified: the quality of Wikipedia content (including comprehensiveness, currency, readability, and reliability) and the size of Wikipedia. Moreover, we present the key research trends in terms of the domains of inquiry, research design, data source, and data gathering methods. This review synthesizes scholarly understanding of Wikipedia content and paves the way for future studies.
  6. Zielinski, K.; Nielek, R.; Wierzbicki, A.; Jatowt, A.: Computing controversy : formal model and algorithms for detecting controversy on Wikipedia and in search queries (2018) 0.00
    0.0043570166 = product of:
      0.034856133 = sum of:
        0.034856133 = weight(_text_:studies in 5093) [ClassicSimilarity], result of:
          0.034856133 = score(doc=5093,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 5093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5093)
      0.125 = coord(1/8)
    
    Abstract
    Controversy is a complex concept that has been attracting the attention of scholars from diverse fields. In the era of the Internet and social media, detecting controversy and controversial concepts by automatic methods is especially important. Web searchers could be alerted when the content they consume is controversial or when they attempt to acquire information on disputed topics. Presenting users with indications and explanations of the controversy should offer them a chance to see the "wider picture" rather than letting them obtain one-sided views. In this work we first introduce a formal model of controversy as the basis of computational approaches to detecting controversial concepts. Then we propose a classification-based method for the automatic detection of controversial articles and categories in Wikipedia. Next, we demonstrate how to use the obtained results to estimate the controversy level of search queries. The proposed method can be incorporated into search engines as a component responsible for detecting queries related to controversial topics. The method is independent of the search engine's retrieval and search-results recommendation algorithms and is therefore unaffected by a possible filter bubble. Our approach can also be applied in Wikipedia or other knowledge bases to support controversy detection and content maintenance. Finally, we believe that our results could be useful to social science researchers in understanding the complex nature of controversy and in fostering their studies.
  7. Ofek, N.; Rokach, L.: A classifier to determine which Wikipedia biographies will be accepted (2015) 0.00
    0.0039860546 = product of:
      0.031888437 = sum of:
        0.031888437 = product of:
          0.06377687 = sum of:
            0.06377687 = weight(_text_:area in 1610) [ClassicSimilarity], result of:
              0.06377687 = score(doc=1610,freq=2.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.32663327 = fieldWeight in 1610, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1610)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    Wikipedia, like other encyclopedias, includes biographies of notable people. However, because it is jointly written by many contributors, it is subject to constant manipulation by contributors attempting to add biographies of non-notable people. Over time, Wikipedia has developed inclusion criteria for notable people (e.g., receiving a significant award), based on which newly contributed biographies are evaluated. In this paper we present and analyze a set of simple indicators that can be used to predict which articles will eventually be accepted. These indicators do not refer to the content itself, but to meta-content features (such as the number of categories that the biography is associated with) and to author-based features (such as whether it is a first-time author). By training a classifier on these features, we achieved high predictive performance (area under the receiver operating characteristic [ROC] curve [AUC] of 0.97), even though we overlooked the actual biography text.
  8. Schumann, L.; Stock, W.G.: Ein umfassendes ganzheitliches Modell für Evaluation und Akzeptanzanalysen von Informationsdiensten : Das Information Service Evaluation (ISE) Modell (2014) 0.00
    0.002348939 = product of:
      0.018791512 = sum of:
        0.018791512 = product of:
          0.037583023 = sum of:
            0.037583023 = weight(_text_:22 in 1492) [ClassicSimilarity], result of:
              0.037583023 = score(doc=1492,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.2708308 = fieldWeight in 1492, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1492)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 9.2014 18:56:46
  9. Hartmann, B.: Ab ins MoMA : zum virtuellen Museumsgang (2011) 0.00
    0.0020133762 = product of:
      0.01610701 = sum of:
        0.01610701 = product of:
          0.03221402 = sum of:
            0.03221402 = weight(_text_:22 in 1821) [ClassicSimilarity], result of:
              0.03221402 = score(doc=1821,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.23214069 = fieldWeight in 1821, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1821)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    3. 5.1997 8:44:22
  10. Cho, H.; Chen, M.-H.; Chung, S.: Testing an integrative theoretical model of knowledge-sharing behavior in the context of Wikipedia (2010) 0.00
    0.0020133762 = product of:
      0.01610701 = sum of:
        0.01610701 = product of:
          0.03221402 = sum of:
            0.03221402 = weight(_text_:22 in 3460) [ClassicSimilarity], result of:
              0.03221402 = score(doc=3460,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.23214069 = fieldWeight in 3460, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3460)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    1. 6.2010 10:13:22
  11. Martínez-Ávila, D.; Chaves Guimarães, J.A.; Pinho, F.A.; Fox, M.J.: The representation of ethics and knowledge organization in the WoS and LISTA databases (2015) 0.00
    0.0020133762 = product of:
      0.01610701 = sum of:
        0.01610701 = product of:
          0.03221402 = sum of:
            0.03221402 = weight(_text_:22 in 2358) [ClassicSimilarity], result of:
              0.03221402 = score(doc=2358,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.23214069 = fieldWeight in 2358, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2358)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    17. 2.2018 16:50:22
  12. Höhn, S.: Stalins Badezimmer in Wikipedia : Die Macher der Internet-Enzyklopädie diskutieren über Verantwortung und Transparenz. Der Brockhaus kehrt dagegen zur gedruckten Ausgabe zurück. (2012) 0.00
    0.0011863933 = product of:
      0.009491147 = sum of:
        0.009491147 = product of:
          0.018982293 = sum of:
            0.018982293 = weight(_text_:22 in 2171) [ClassicSimilarity], result of:
              0.018982293 = score(doc=2171,freq=4.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.13679022 = fieldWeight in 2171, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2171)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Content
    Meanwhile, the new publisher of the Brockhaus, a Bertelsmann subsidiary, has announced a return to the printed encyclopedia. The 22nd edition is scheduled to appear around the beginning of 2015. In times of virtual information overkill there is a need for orientation, for guidance on relevance, says managing director Christoph Hünermann. Of all companies, it was Bertelsmann that in 2008 printed a Wikipedia lexicon of almost 1,000 pages containing the 50,000 most frequently looked-up terms. As a precaution, an expert editorial team checked the entries beforehand - but is said to have found hardly any errors."
    Source
    Frankfurter Rundschau. Nr.76 vom 29.3.2012, S.22-23
  13. Haubner, S.: "Als einfacher Benutzer ist man rechtlos" : Unter den freiwilligen Wikipedia-Mitarbeitern regt sich Unmut über die Administratoren (2011) 0.00
    8.3890674E-4 = product of:
      0.006711254 = sum of:
        0.006711254 = product of:
          0.013422508 = sum of:
            0.013422508 = weight(_text_:22 in 4567) [ClassicSimilarity], result of:
              0.013422508 = score(doc=4567,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.09672529 = fieldWeight in 4567, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4567)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    3. 5.1997 8:44:22