Search (8 results, page 1 of 1)

  • theme_ss:"Informationsmittel"
  • year_i:[2020 TO 2030}
  1. Chi, Y.; He, D.; Jeng, W.: Laypeople's source selection in online health information-seeking process (2020) 0.02
    0.017417828 = product of:
      0.034835655 = sum of:
        0.034835655 = sum of:
          0.0067973635 = weight(_text_:a in 34) [ClassicSimilarity], result of:
            0.0067973635 = score(doc=34,freq=10.0), product of:
              0.04772363 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.041389145 = queryNorm
              0.14243183 = fieldWeight in 34, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.0390625 = fieldNorm(doc=34)
          0.028038291 = weight(_text_:22 in 34) [ClassicSimilarity], result of:
            0.028038291 = score(doc=34,freq=2.0), product of:
              0.14493774 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041389145 = queryNorm
              0.19345059 = fieldWeight in 34, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=34)
      0.5 = coord(1/2)
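
    The tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output: each term's weight is queryWeight * fieldWeight, where queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm with tf = sqrt(freq), and the document score is the sum of the matching term weights scaled by the coordination factor coord(1/2). Below is a minimal sketch that reproduces the numbers for result 1; the statistics are copied verbatim from the tree, and the helper function is purely illustrative, not part of any Lucene API.

      import math

      def classic_term_weight(freq, idf, query_norm, field_norm):
          # Reproduce one weight(...) node of the explanation above.
          tf = math.sqrt(freq)                  # 3.1622777 for freq=10.0, 1.4142135 for freq=2.0
          query_weight = idf * query_norm       # e.g. 1.153047 * 0.041389145 ~ 0.04772363
          field_weight = tf * idf * field_norm  # e.g. 3.1622777 * 1.153047 * 0.0390625 ~ 0.14243183
          return query_weight * field_weight    # this term's score contribution for this document

      # Statistics copied from the explanation for result 1 (doc=34).
      query_norm, field_norm = 0.041389145, 0.0390625
      w_a  = classic_term_weight(freq=10.0, idf=1.153047,  query_norm=query_norm, field_norm=field_norm)
      w_22 = classic_term_weight(freq=2.0,  idf=3.5018296, query_norm=query_norm, field_norm=field_norm)

      score = 0.5 * (w_a + w_22)                # coord(1/2): only half of the query clauses matched
      print(round(score, 9))                    # ~0.017417828, the value shown for result 1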
    
    Abstract
    For laypeople, searching online health information resources can be challenging due to topic complexity and the large number of online sources of differing quality. The goal of this article is to examine which of the available online sources laypeople select to address their health-related information needs, and whether and how much the severity of a health condition influences their selection. Twenty-four participants were recruited individually, and each was asked (using a retrieval system called HIS) to search for information regarding a severe health condition and a mild health condition, respectively. The selected online health information sources were automatically captured by the HIS system and classified at both the website and webpage levels. Participants' selection behavior patterns were then plotted across the whole information-seeking process. Our results demonstrate that laypeople's source selection fluctuates during the health information-seeking process and also varies with the severity of the health condition. This study reveals laypeople's actual usage of different types of online health information sources and has implications for the design of search engines as well as for the development of health literacy programs.
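
    One detail of this setup that lends itself to a concrete illustration is the classification of captured sources at both the website and webpage levels. Below is a minimal sketch of such a two-level grouping; the domain-to-category mapping and the urlparse-based approach are illustrative assumptions, not the HIS system's actual implementation.

      from urllib.parse import urlparse

      # Hypothetical mapping of domains to source types; the study used its own taxonomy.
      WEBSITE_TYPES = {
          "www.mayoclinic.org": "health portal",
          "en.wikipedia.org": "encyclopedia",
          "www.reddit.com": "social Q&A",
      }

      def classify(url):
          # Group a captured URL at the website level (domain) while keeping the webpage level (full URL).
          domain = urlparse(url).netloc
          return {"webpage": url, "website": domain, "website_type": WEBSITE_TYPES.get(domain, "other")}

      print(classify("https://en.wikipedia.org/wiki/Influenza"))
      # {'webpage': 'https://en.wikipedia.org/wiki/Influenza', 'website': 'en.wikipedia.org', 'website_type': 'encyclopedia'}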
    Date
    12.11.2020 13:22:09
    Type
    a
  2. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.02
    0.016434235 = product of:
      0.03286847 = sum of:
        0.03286847 = product of:
          0.16434234 = sum of:
            0.16434234 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.16434234 = score(doc=5669,freq=2.0), product of:
                0.35089764 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041389145 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia comes second with 2.3 million articles, and the French-language Wikipedia is third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog; see also: In light of the publication of the 6-millionth article in the English-language Wikipedia last week, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not meant as a reproach to the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive, undeclared paid editing are clearly not working. *"Since the volunteer editors are currently being overwhelmed by advertising in the form of Wikipedia articles, and since the WMF does not appear able to counter this in any way, the only viable path for the editors would be to prohibit, for the time being, the creation of new articles about companies"*, writes user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  3. Wang, P.; Li, X.: Assessing the quality of information on Wikipedia : a deep-learning approach (2020) 0.00
    0.0016993409 = product of:
      0.0033986818 = sum of:
        0.0033986818 = product of:
          0.0067973635 = sum of:
            0.0067973635 = weight(_text_:a in 5505) [ClassicSimilarity], result of:
              0.0067973635 = score(doc=5505,freq=10.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.14243183 = fieldWeight in 5505, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5505)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Currently, many web document repositories are collaboratively created and edited. One of these repositories, Wikipedia, faces an important problem: assessing the quality of its articles. Existing approaches exploit techniques such as statistical models or machine learning algorithms to assess Wikipedia article quality. However, existing models do not provide satisfactory results. Furthermore, these models fail to adopt a comprehensive feature framework. In this article, we conduct an extensive survey of previous studies and summarize a comprehensive feature framework, including text statistics, writing style, readability, article structure, network, and editing history. Selected state-of-the-art deep-learning models, including the convolutional neural network (CNN), deep neural network (DNN), long short-term memory (LSTM) network, CNN-LSTM, bidirectional LSTM, and stacked LSTM, are applied to assess the quality of Wikipedia articles. A detailed comparison of the deep-learning models is conducted with regard to different aspects: classification performance and training performance. We include an importance analysis of different features and feature sets to determine which features or feature sets are most effective in distinguishing Wikipedia article quality. This extensive experiment validates the effectiveness of the proposed model.
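
    As an illustration of the model family listed above, here is a minimal sketch of a CNN-LSTM quality classifier that combines a token sequence with handcrafted statistics. It assumes Keras/TensorFlow, toy input shapes, and the common six-class Wikipedia assessment scheme; it does not reproduce the paper's actual features, dataset, or hyperparameters.

      import numpy as np
      from tensorflow.keras import layers, Model

      VOCAB, SEQ_LEN, N_STATS, N_CLASSES = 20000, 400, 6, 6  # six quality classes (FA, GA, B, C, Start, Stub) assumed

      # Branch 1: article text as a token sequence -> CNN over local patterns -> LSTM over their order.
      tokens = layers.Input(shape=(SEQ_LEN,), name="tokens")
      x = layers.Embedding(VOCAB, 64)(tokens)
      x = layers.Conv1D(64, 5, activation="relu")(x)
      x = layers.MaxPooling1D(4)(x)
      x = layers.LSTM(64)(x)

      # Branch 2: handcrafted statistics (text statistics, readability, structure, network, edit history).
      stats = layers.Input(shape=(N_STATS,), name="stats")
      out = layers.Dense(N_CLASSES, activation="softmax")(layers.concatenate([x, stats]))

      model = Model(inputs=[tokens, stats], outputs=out)
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

      # Toy data only to show the expected shapes; real inputs would come from parsed Wikipedia dumps.
      X_tok = np.random.randint(0, VOCAB, size=(32, SEQ_LEN))
      X_sta = np.random.rand(32, N_STATS).astype("float32")
      y = np.random.randint(0, N_CLASSES, size=(32,))
      model.fit([X_tok, X_sta], y, epochs=1, verbose=0)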
    Type
    a
  4. Zhao, D.; Strotmann, A.: Intellectual structure of information science 2011-2020 : an author co-citation analysis (2022) 0.00
    0.0016085497 = product of:
      0.0032170995 = sum of:
        0.0032170995 = product of:
          0.006434199 = sum of:
            0.006434199 = weight(_text_:a in 610) [ClassicSimilarity], result of:
              0.006434199 = score(doc=610,freq=14.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.13482209 = fieldWeight in 610, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=610)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose This study continues a long history of author co-citation analysis of the intellectual structure of information science into the time period of 2011-2020. It also examines changes in this structure from 2006-2010 through 2011-2015 to 2016-2020. The results will contribute to a better understanding of the information science research field. Design/methodology/approach The well-established procedures and techniques for author co-citation analysis were followed. Full records of research articles in core information science journals published during 2011-2020 were retrieved and downloaded from the Web of Science database. The roughly 150 most highly cited authors in each of the two five-year time periods were selected from this dataset to represent the field, and their co-citation counts were calculated. Each co-citation matrix was input into SPSS for factor analysis, and the results were visualized in Pajek. Factors were interpreted as specialties and labeled upon an examination of articles written by the authors who load primarily on each factor. Findings The two-camp structure of information science continued to be clearly present. Bibliometric indicators for research evaluation dominated the Knowledge Domain Analysis camp during both five-year time periods, whereas interactive information retrieval (IR) dominated the IR camp during 2011-2015 but shared dominance with information behavior during 2016-2020. Bridging between the two camps grew increasingly weak and was provided only by the scholarly communication specialty during 2016-2020. The IR systems specialty drifted further away from the IR camp. The information behavior specialty experienced a deep slump in its evolution during 2011-2020. Altmetrics grew to dominate the Webometrics specialty and drove its sharp rise during 2016-2020. Originality/value Author co-citation analysis (ACA) is effective in revealing the intellectual structures of research fields. Most related studies used term-based methods to identify individual research topics but did not examine the interrelationships between these topics or the overall structure of the field. The few studies that did discuss the overall structure paid little attention to the effect of changes to the source journals on the results. The present study does not have these problems and continues the long history of benchmark contributions to a better understanding of the information science field using ACA.
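
    The co-citation counting step described under Design/methodology/approach can be sketched as follows. This minimal illustration assumes that cited first-author names have already been parsed from the Web of Science records; the SPSS factor analysis and Pajek visualization steps are not reproduced.

      from itertools import combinations
      from collections import Counter

      def author_cocitation_counts(reference_lists, selected_authors):
          # Count how often each pair of selected authors is cited together
          # in the reference list of the same citing article (author co-citation).
          counts = Counter()
          for refs in reference_lists:
              cited = sorted(set(refs) & selected_authors)
              for a, b in combinations(cited, 2):
                  counts[(a, b)] += 1
          return counts

      # Toy reference lists standing in for the parsed 2011-2020 core-journal records.
      refs = [
          {"Saracevic T", "Ingwersen P", "Belkin N"},
          {"Saracevic T", "Belkin N", "Cronin B"},
          {"Cronin B", "Ingwersen P"},
      ]
      authors = {"Saracevic T", "Ingwersen P", "Belkin N", "Cronin B"}
      print(author_cocitation_counts(refs, authors))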
    Type
    a
  5. Dobusch, L.: NRW zahlt 2,6 Millionen für drei Jahre Online-Brockhaus an Schulen : Statt Wikipedia und Klexikon (2021) 0.00
    0.0012159493 = product of:
      0.0024318986 = sum of:
        0.0024318986 = product of:
          0.004863797 = sum of:
            0.004863797 = weight(_text_:a in 136) [ClassicSimilarity], result of:
              0.004863797 = score(doc=136,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.10191591 = fieldWeight in 136, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=136)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  6. Humborg, C.: Wie Wikimedia den Zugang zu Wissen stärkt (2022) 0.00
    0.0012159493 = product of:
      0.0024318986 = sum of:
        0.0024318986 = product of:
          0.004863797 = sum of:
            0.004863797 = weight(_text_:a in 1211) [ClassicSimilarity], result of:
              0.004863797 = score(doc=1211,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.10191591 = fieldWeight in 1211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1211)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  7. Zhao, D.; Strotmann, A.: Mapping knowledge domains on Wikipedia : an author bibliographic coupling analysis of traditional Chinese medicine (2022) 0.00
    0.0010530431 = product of:
      0.0021060861 = sum of:
        0.0021060861 = product of:
          0.0042121722 = sum of:
            0.0042121722 = weight(_text_:a in 608) [ClassicSimilarity], result of:
              0.0042121722 = score(doc=608,freq=6.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.088261776 = fieldWeight in 608, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=608)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose Wikipedia has the lofty goal of compiling all human knowledge. The purpose of the present study is to map the structure of the Traditional Chinese Medicine (TCM) knowledge domain on Wikipedia, to identify patterns of knowledge representation on Wikipedia, and to test the applicability of author bibliographic coupling analysis, an effective method for mapping knowledge domains represented in published scholarly documents, to Wikipedia data. Design/methodology/approach We adapted and followed the well-established procedures and techniques for author bibliographic coupling analysis (ABCA). Instead of bibliographic data from a citation database, we used all articles on TCM downloaded from the English version of Wikipedia as our dataset. An author bibliographic coupling network was calculated and then factor analyzed using SPSS. The factor analysis results were visualized. Factors were labeled after manually examining the articles to which the authors loading primarily on each factor had contributed significant numbers of references. Clear factors were interpreted as topics. Findings Seven TCM topic areas are represented on Wikipedia, among which Acupuncture-related practices, Falun Gong, and Herbal Medicine attracted the most significant contributors to TCM. Acupuncture and Qi Gong have the most connections to the TCM knowledge domain and also serve as bridges for other topics to connect to the domain. Herbal medicine is only weakly linked to the rest of the TCM knowledge domain, and non-herbal medicine is isolated from it. It appears that specific topics are represented well on Wikipedia, but their conceptual connections are not. ABCA is effective for mapping knowledge domains on Wikipedia, but document-based bibliographic coupling analysis is not. Originality/value Given the prominent position of Wikipedia both for information users and for researchers on knowledge organization and information retrieval, it is important to study how well knowledge is represented and structured on Wikipedia. Such studies appear to be largely missing, although studies from different perspectives, both about Wikipedia and using Wikipedia as data, are abundant. Author bibliographic coupling analysis is effective for mapping knowledge domains represented in published scholarly documents but has never been applied to mapping knowledge domains represented on Wikipedia.
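
    Author bibliographic coupling links two contributors through the references they themselves supply, rather than through being cited together as in co-citation. Below is a minimal sketch, assuming each contributor's added references have already been extracted from the Wikipedia TCM articles; the contributor names and reference identifiers are invented.

      from itertools import combinations

      def author_bibliographic_coupling(refs_by_author):
          # Couple two contributors by the number of references both of them contributed.
          coupling = {}
          for a, b in combinations(sorted(refs_by_author), 2):
              shared = len(refs_by_author[a] & refs_by_author[b])
              if shared:
                  coupling[(a, b)] = shared
          return coupling

      # Toy data: references (e.g. cited PMIDs or URLs) added by three hypothetical contributors.
      refs_by_author = {
          "EditorA": {"pmid:111", "pmid:222", "pmid:333"},
          "EditorB": {"pmid:222", "pmid:333", "pmid:444"},
          "EditorC": {"pmid:555"},
      }
      print(author_bibliographic_coupling(refs_by_author))   # {('EditorA', 'EditorB'): 2}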
    Type
    a
  8. Gierke, B.: ¬Der Fachinformationsdienst Buch-, Bibliotheks- und Informationswissenschaft : eine Kurzvorstellung (2020) 0.00
    9.11962E-4 = product of:
      0.001823924 = sum of:
        0.001823924 = product of:
          0.003647848 = sum of:
            0.003647848 = weight(_text_:a in 5709) [ClassicSimilarity], result of:
              0.003647848 = score(doc=5709,freq=2.0), product of:
                0.04772363 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.041389145 = queryNorm
                0.07643694 = fieldWeight in 5709, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5709)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
