Search (9 results, page 1 of 1)

  • theme_ss:"Informationsmittel"
  • year_i:[2020 TO 2030}
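
The mixed brackets in the year facet are Solr range syntax: "[" makes the lower bound inclusive and "}" makes the upper bound exclusive, so year_i:[2020 TO 2030} matches 2020 <= year < 2030. As a minimal sketch, the two active filters could be passed to a Solr endpoint as repeated fq parameters; the host, core name, and parameter layout below are illustrative assumptions, not this site's actual API:

    import urllib.parse

    # The two active facet filters, expressed as Solr filter queries (fq).
    # "[" = inclusive bound, "}" = exclusive bound.
    params = [
        ("q", "*:*"),
        ("fq", 'theme_ss:"Informationsmittel"'),
        ("fq", "year_i:[2020 TO 2030}"),
    ]

    # Hypothetical endpoint, for illustration only.
    url = "http://localhost:8983/solr/litie/select?" + urllib.parse.urlencode(params)
    print(url)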
  1. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.04
    0.042254727 = product of:
      0.084509455 = sum of:
        0.057778623 = product of:
          0.17333587 = sum of:
            0.17333587 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.17333587 = score(doc=5669,freq=2.0), product of:
                0.37010026 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.043654136 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
        0.009977593 = weight(_text_:in in 5669) [ClassicSimilarity], result of:
          0.009977593 = score(doc=5669,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.16802745 = fieldWeight in 5669, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5669)
        0.01675324 = weight(_text_:und in 5669) [ClassicSimilarity], result of:
          0.01675324 = score(doc=5669,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.17315367 = fieldWeight in 5669, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5669)
      0.5 = coord(3/6)
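
    The block above is Lucene's ClassicSimilarity explain output for this hit: tf is the square root of the term frequency, queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and coord() down-weights matches that cover only part of the query. A minimal Python sketch recomputing the "_text_:3a" leg of this score from the figures shown (the variable names are mine, not Lucene's):

        import math

        # Figures reported in the explain output for doc 5669, term "3a".
        freq = 2.0
        idf = 8.478011            # idf(docFreq=24, maxDocs=44218)
        query_norm = 0.043654136
        field_norm = 0.0390625

        tf = math.sqrt(freq)                  # 1.4142135
        query_weight = idf * query_norm       # 0.37010026
        field_weight = tf * idf * field_norm  # 0.46834838
        term_score = query_weight * field_weight  # 0.17333587

        # coord(1/3) scales this leg to 0.057778623; the three term scores
        # are then summed and scaled by coord(3/6) to the final 0.042254727.
        print(term_score, term_score / 3)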
    
    Content
    "The English-language Wikipedia now has more than 6 million articles. The German-language Wikipedia ranks second with 2.3 million articles, and the French-language Wikipedia third with 2.1 million articles (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> and Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog, see also: In view of the publication of the six-millionth article in the English-language Wikipedia last week, the community newspaper "Wikipedia Signpost" has called for a moratorium on the publication of articles about companies. This is not meant as an accusation against the Wikimedia Foundation, but the current measures to protect the encyclopedia against abusive undeclared paid editing are clearly not working. *"As the volunteer editors are currently being overwhelmed by advertising in the form of Wikipedia articles, and as the WMF seems unable to counter this in any way, the only viable course for the editors would be to prohibit the creation of new articles about companies for the time being"*, writes the user Smallbones in his editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> for today's issue."
  2. Richter, P.: ¬Die Wikipedia-Story : Biografie eines Weltwunders (2020) 0.02
    0.016106669 = product of:
      0.048320007 = sum of:
        0.010709076 = weight(_text_:in in 197) [ClassicSimilarity], result of:
          0.010709076 = score(doc=197,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 197, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=197)
        0.03761093 = weight(_text_:und in 197) [ClassicSimilarity], result of:
          0.03761093 = score(doc=197,freq=14.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.38872904 = fieldWeight in 197, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=197)
      0.33333334 = coord(2/6)
    
    Abstract
    Wikipedia is one of the most popular websites, and not without reason: it gathers an enormous body of expert knowledge that is freely accessible. Nobody became a billionaire, there is no advertising, and yet Wikipedia ranks among the top 10 of all websites. Worldwide, the encyclopedia is a synonym for knowledge, and it prevailed against competitors from Brockhaus to Google. Its development in Germany gained momentum in a very particular way. Wikipedia is a social experiment, an important instrument of freedom and, at the same time, a closed society. Pavel Richter, a Wikipedian of the first hour who ran the business behind the knowledge giant in Berlin for five years, is its biographer. He tells a story full of fascinating episodes, and also of some scandals, mistakes, fakes and legendary edit wars. Wikipedia is one of the most exciting cultural phenomena of our time. This is the book about it.
    Footnote
    Review in: Spektrum der Wissenschaft. 2021, no.4, p.90 (K. Hochberg)
  3. Chi, Y.; He, D.; Jeng, W.: Laypeople's source selection in online health information-seeking process (2020) 0.01
    0.0064161494 = product of:
      0.019248448 = sum of:
        0.0044621155 = weight(_text_:in in 34) [ClassicSimilarity], result of:
          0.0044621155 = score(doc=34,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.07514416 = fieldWeight in 34, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=34)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 34) [ClassicSimilarity], result of:
              0.029572664 = score(doc=34,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 34, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=34)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    12.11.2020 13:22:09
  4. Gierke, B.: ¬Der Fachinformationsdienst Buch-, Bibliotheks- und Informationswissenschaft : eine Kurzvorstellung (2020) 0.01
    0.005803493 = product of:
      0.034820955 = sum of:
        0.034820955 = weight(_text_:und in 5709) [ClassicSimilarity], result of:
          0.034820955 = score(doc=5709,freq=12.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.35989314 = fieldWeight in 5709, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=5709)
      0.16666667 = coord(1/6)
    
    Abstract
    Within the DFG funding programme for Specialised Information Services (Fachinformationsdienste, FID), the FID for Book Studies, Library and Information Science (FID BBI), a cooperation between the Herzog August Bibliothek Wolfenbüttel and the Universitätsbibliothek Leipzig, started its work in October 2017. Its goal is to ensure a first-class supply of literature for researchers in these and adjacent disciplines. To this end, the FID BBI has developed a discovery tool based on the open-source software VuFind. One challenge for the FID BBI is the processing of highly heterogeneous data sources, because the FID BBI's subject areas are very broad. The portal offers a quick entry point for research, but it is also possible to formulate more complex search queries. Contact with the scholarly community served by the FID BBI has high priority in order to fulfil the goals set by the Deutsche Forschungsgemeinschaft. A first point of contact is the discovery portal: https://katalog.fid-bbi.de.
    Source
    Information - Wissenschaft und Praxis. 71(2020) no.1, pp.43-48
  5. Dobusch, L.: NRW zahlt 2,6 Millionen für drei Jahre Online-Brockhaus an Schulen : Statt Wikipedia und Klexikon (2021) 0.01
    0.0054715853 = product of:
      0.032829512 = sum of:
        0.032829512 = weight(_text_:und in 136) [ClassicSimilarity], result of:
          0.032829512 = score(doc=136,freq=6.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.33931053 = fieldWeight in 136, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=136)
      0.16666667 = coord(1/6)
    
    Abstract
    The German state of North Rhine-Westphalia (NRW) is acquiring licence rights to online encyclopedias for use in schools for 2.6 million euros. This raises the question of whether, given free alternatives such as Wikipedia or Klexikon, this decision is economically and didactically sound.
    Source
    https://netzpolitik.org/2021/statt-wikipedia-und-klexikon-nrw-zahlt-26-millionen-fuer-drei-jahre-online-brockhaus-an-schulen/
  6. Zhao, D.; Strotmann, A.: Intellectual structure of information science 2011-2020 : an author co-citation analysis (2022) 0.00
    0.0014573209 = product of:
      0.008743925 = sum of:
        0.008743925 = weight(_text_:in in 610) [ClassicSimilarity], result of:
          0.008743925 = score(doc=610,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14725187 = fieldWeight in 610, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=610)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose This study continues a long history of author co-citation analysis of the intellectual structure of information science into the time period of 2011-2020. It also examines changes in this structure from 2006-2010 through 2011-2015 to 2016-2020. Results will contribute to a better understanding of the information science research field. Design/methodology/approach The well-established procedures and techniques for author co-citation analysis were followed. Full records of research articles in core information science journals published during 2011-2020 were retrieved and downloaded from the Web of Science database. About 150 of the most highly cited authors in each of the two five-year time periods were selected from this dataset to represent the field, and their co-citation counts were calculated. Each co-citation matrix was input into SPSS for factor analysis, and results were visualized in Pajek. Factors were interpreted as specialties and labeled upon an examination of articles written by authors who load primarily on each factor. Findings The two-camp structure of information science continued to be clearly present. Bibliometric indicators for research evaluation dominated the Knowledge Domain Analysis camp during both five-year time periods, whereas interactive information retrieval (IR) dominated the IR camp during 2011-2015 but shared dominance with information behavior during 2016-2020. Bridging between the two camps became increasingly weaker and was only provided by the scholarly communication specialty during 2016-2020. The IR systems specialty drifted further away from the IR camp. The information behavior specialty experienced a deep slump in its evolution during 2011-2020. Altmetrics grew to dominate the Webometrics specialty and drove its sharp rise during 2016-2020. Originality/value Author co-citation analysis (ACA) is effective in revealing the intellectual structures of research fields. Most related studies used term-based methods to identify individual research topics but did not examine the interrelationships between these topics or the overall structure of the field. The few studies that did discuss the overall structure paid little attention to the effect of changes to the source journals on the results. The present study does not have these problems and continues the long history of benchmark contributions to a better understanding of the information science field using ACA.
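
    The pipeline the abstract describes starts from raw co-citation counts: two authors are co-cited whenever they appear together in one article's reference list, and the resulting matrix is what goes into factor analysis. A toy Python sketch of that counting step, with invented reference lists standing in for the Web of Science records used in the study:

        from collections import Counter
        from itertools import combinations

        # Invented reference lists; each inner list is the set of cited
        # authors extracted from one article's bibliography.
        reference_lists = [
            ["Saracevic", "Ingwersen", "Belkin"],
            ["Saracevic", "Belkin"],
            ["Cronin", "Ingwersen"],
        ]

        cocitations = Counter()
        for refs in reference_lists:
            for pair in combinations(sorted(set(refs)), 2):
                cocitations[pair] += 1

        for pair, count in cocitations.most_common():
            print(pair, count)  # ('Belkin', 'Saracevic') 2, ...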
  7. Humborg, C.: Wie Wikimedia den Zugang zu Wissen stärkt (2022) 0.00
    0.0011898974 = product of:
      0.0071393843 = sum of:
        0.0071393843 = weight(_text_:in in 1211) [ClassicSimilarity], result of:
          0.0071393843 = score(doc=1211,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.120230645 = fieldWeight in 1211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1211)
      0.16666667 = coord(1/6)
    
    Abstract
    Wikimedia Deutschland has around 150 full-time employees, but nobody is buying a yacht with the proceeds. A guest article. Online platforms dominate our lives in many areas: how we shop, how we communicate with each other, how we gather information, all of this is shaped by a small number of commercial platforms. The impression has long taken hold that the web is commercialized through and through. And yet they still exist: a few projects on the net that are not geared towards profit but serve the common good.
  8. Wang, P.; Li, X.: Assessing the quality of information on Wikipedia : a deep-learning approach (2020) 0.00
    0.0010517307 = product of:
      0.006310384 = sum of:
        0.006310384 = weight(_text_:in in 5505) [ClassicSimilarity], result of:
          0.006310384 = score(doc=5505,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 5505, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5505)
      0.16666667 = coord(1/6)
    
    Abstract
    Currently, web document repositories are collaboratively created and edited. One of these repositories, Wikipedia, is facing an important problem: assessing the quality of its articles. Existing approaches exploit techniques such as statistical models or machine learning algorithms to assess Wikipedia article quality. However, existing models do not provide satisfactory results. Furthermore, these models fail to adopt a comprehensive feature framework. In this article, we conduct an extensive survey of previous studies and summarize a comprehensive feature framework, including text statistics, writing style, readability, article structure, network, and editing history. Selected state-of-the-art deep-learning models, including convolutional neural networks (CNN), deep neural networks (DNN), long short-term memory (LSTM) networks, CNN-LSTMs, bidirectional LSTMs, and stacked LSTMs, are applied to assess the quality of Wikipedia articles. A detailed comparison of the deep-learning models is conducted with regard to different aspects: classification performance and training performance. We include an importance analysis of different features and feature sets to determine which features or feature sets are most effective in distinguishing Wikipedia article quality. This extensive experiment validates the effectiveness of the proposed model.
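
    As an illustration of the simplest feature family named above, the sketch below computes a few text-statistics features for one article; the feature choices are illustrative, not the paper's actual feature set:

        import re

        def simple_text_features(text: str) -> dict:
            # Crude tokenization into words and sentences.
            words = re.findall(r"[A-Za-z']+", text)
            sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
            n_words = len(words)
            n_sentences = max(len(sentences), 1)
            return {
                "n_words": n_words,
                "avg_sentence_length": n_words / n_sentences,
                "avg_word_length": sum(map(len, words)) / max(n_words, 1),
            }

        print(simple_text_features("Wikipedia is free. Anyone can edit it."))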
  9. Zhao, D.; Strotmann, A.: Mapping knowledge domains on Wikipedia : an author bibliographic coupling analysis of traditional Chinese medicine (2022) 0.00
    0.0010304814 = product of:
      0.0061828885 = sum of:
        0.0061828885 = weight(_text_:in in 608) [ClassicSimilarity], result of:
          0.0061828885 = score(doc=608,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1041228 = fieldWeight in 608, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=608)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose Wikipedia has the lofty goal of compiling all human knowledge. The purpose of the present study is to map the structure of the Traditional Chinese Medicine (TCM) knowledge domain on Wikipedia, to identify patterns of knowledge representation on Wikipedia and to test the applicability of author bibliographic coupling analysis, an effective method for mapping knowledge domains represented in published scholarly documents, to Wikipedia data. Design/methodology/approach We adapted and followed the well-established procedures and techniques for author bibliographic coupling analysis (ABCA). Instead of bibliographic data from a citation database, we used all articles on TCM downloaded from the English version of Wikipedia as our dataset. An author bibliographic coupling network was calculated and then factor analyzed using SPSS. Factor analysis results were visualized. Factors were labeled upon manual examination of the articles to which the authors who load primarily on each factor contributed significant numbers of references. Clear factors were interpreted as topics. Findings Seven TCM topic areas are represented on Wikipedia, among which Acupuncture-related practices, Falun Gong and Herbal Medicine attracted the most significant contributors to TCM. Acupuncture and Qi Gong have the most connections to the TCM knowledge domain and also serve as bridges for other topics to connect to the domain. Herbal medicine is weakly linked to, and non-herbal medicine is isolated from, the rest of the TCM knowledge domain. It appears that specific topics are represented well on Wikipedia but their conceptual connections are not. ABCA is effective for mapping knowledge domains on Wikipedia but document-based bibliographic coupling analysis is not. Originality/value Given the prominent position of Wikipedia both for information users and for researchers on knowledge organization and information retrieval, it is important to study how well knowledge is represented and structured on Wikipedia. Such studies appear to be largely missing, although studies from different perspectives, both about Wikipedia and using Wikipedia as data, are abundant. Author bibliographic coupling analysis is effective for mapping knowledge domains represented in published scholarly documents but has never been applied to mapping knowledge domains represented on Wikipedia.
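
    Author bibliographic coupling, the method this study adapts to Wikipedia, links two authors by the number of references their contributions share (the mirror image of co-citation, which links authors who are cited together). A toy Python sketch with invented data; mapping each contributor to the set of references they added is my simplification of the paper's procedure:

        from itertools import combinations

        # Invented data: each Wikipedia contributor and the references
        # they added to TCM articles.
        author_refs = {
            "ContributorA": {"ref1", "ref2", "ref3"},
            "ContributorB": {"ref2", "ref3"},
            "ContributorC": {"ref4"},
        }

        coupling = {
            (a, b): len(author_refs[a] & author_refs[b])
            for a, b in combinations(sorted(author_refs), 2)
            if author_refs[a] & author_refs[b]
        }
        print(coupling)  # {('ContributorA', 'ContributorB'): 2}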
