Search (428 results, page 1 of 22)

  • year_i:[2020 TO 2030}
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.49
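    The relevance value after each hit is a Lucene ClassicSimilarity score. As a minimal sketch, one tf-idf term weight behind such a score can be reproduced from the index statistics (formulas assumed from Lucene's classic similarity model; the statistics used below are those the engine reported for a rare term in this top hit):

```python
import math

def classic_term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term's contribution to a Lucene ClassicSimilarity (tf-idf) score."""
    tf = math.sqrt(freq)                              # tf(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                   # query-side normalization
    field_weight = tf * idf * field_norm              # document-side weight
    return query_weight * field_weight

# Index statistics reported for a rare term in the top hit
w = classic_term_weight(freq=2.0, doc_freq=24, max_docs=44218,
                        query_norm=0.031640913, field_norm=0.0390625)
print(w)  # ≈ 0.1256354
```

    The final document score is then a coordination-weighted sum of such per-term weights.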
    
    Abstract
    This thesis presents the construction of a thematically ordered thesaurus based on the subject headings of the Integrated Authority File (GND), using the DDC notations they contain. The DDC subject groups of the German National Library (DNB) form the top level of the thesaurus. The thesaurus is constructed in a rule-based manner, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scholarly publications by means of a computational-linguistic extractor that processes digital full texts. The extractor identifies keywords by comparing character strings against the terms in the thesaurus, orders the hits by their relevance in the text, and returns the assigned subject groups in ranked order. The underlying assumption is that the sought subject group appears among the top ranks. The performance of the method is validated in a three-stage procedure. First, based on metadata and the findings of a brief inspection, a gold standard is compiled from documents retrievable in the DNB's online catalogue. The documents are distributed over 14 of the subject groups, with a lot size of 50 documents each. All documents are indexed with the extractor and the categorization results documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
    Content
    Master's thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. See: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. See also the presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
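    The abstract describes an extractor that matches character strings from the full text against thesaurus terms, orders the hits by relevance, and returns the assigned DDC subject groups in ranked order. A minimal sketch of that core matching-and-ranking step (the data structure, names, and the frequency-based relevance measure are illustrative assumptions, not the thesis's actual implementation):

```python
import re
from collections import Counter

def rank_subject_groups(text, thesaurus):
    """Rank DDC subject groups by how often their thesaurus terms occur in a text.

    thesaurus: mapping term -> DDC subject group (illustrative structure).
    """
    tokens = re.findall(r"\w+", text.lower())
    relevance = Counter()
    for term, group in thesaurus.items():
        hits = tokens.count(term.lower())   # plain string comparison, as in the abstract
        relevance[group] += hits
    # Best-matching subject group first; groups without hits are dropped.
    return [group for group, hits in relevance.most_common() if hits > 0]

# Toy thesaurus and document, for illustration only
thesaurus = {"Thesaurus": "020 Bibliotheks- und Informationswissenschaft",
             "Suchmaschine": "004 Informatik"}
doc = "Ein Thesaurus ordnet Schlagwörter; der Thesaurus hilft der Suchmaschine."
print(rank_subject_groups(doc, thesaurus))
# → ['020 Bibliotheks- und Informationswissenschaft', '004 Informatik']
```

    The gold-standard evaluation described in the abstract then checks whether the correct subject group appears among the top ranks of such a list.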
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.45
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Lewandowski, D.: Suchmaschinen verstehen : 3. vollständig überarbeitete und erweiterte Aufl. (2021) 0.04
    
    Abstract
    Today, search engines are taken for granted as tools for researching information. But how exactly do they work? The book examines search engines from four perspectives: technology, usage, search, and societal significance. It offers a clearly structured and accessible introduction to the topic, with numerous figures that allow the material to be grasped quickly. Ranking methods and user behavior are presented, along with fundamental discussions of the search engine market, search engine optimization, and the role of search engines as technical information intermediaries. The book is aimed at anyone seeking a comprehensive understanding of these search tools, including search engine optimizers, developers, information scientists, librarians, and online marketing professionals. For the third edition, the text was completely revised and all statistics and sources brought up to date.
    RSWK
    Suchmaschine
    World Wide Web Recherche
    Subject
    Suchmaschine
    World Wide Web Recherche
  4. Zhang, Y.; Liu, J.; Song, S.: The design and evaluation of a nudge-based interface to facilitate consumers' evaluation of online health information credibility (2023) 0.04
    
    Abstract
    Evaluating the quality of online health information (OHI) is a major challenge facing consumers. We designed PageGraph, an interface that displays quality indicators and associated values for a webpage, based on credibility evaluation models, the nudge theory, and existing empirical research concerning professionals' and consumers' evaluation of OHI quality. A qualitative evaluation of the interface with 16 participants revealed that PageGraph rendered the information and presentation nudges as intended. It provided the participants with easier access to quality indicators, encouraged fresh angles to assess information credibility, provided an evaluation framework, and encouraged validation of initial judgments. We then conducted a quantitative evaluation of the interface involving 60 participants using a between-subject experimental design. The control group used a regular web browser and evaluated the credibility of 12 preselected webpages, whereas the experimental group evaluated the same webpages with the assistance of PageGraph. PageGraph did not significantly influence participants' evaluation results. The results may be attributed to the insufficiency of the saliency and structure of the nudges implemented and the webpage stimuli's lack of sensitivity to the intervention. Future directions for applying nudges to support OHI evaluation were discussed.
    Date
    22. 6.2023 18:18:34
  5. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.03
    
    Abstract
    Involving users in early phases of software development has become a common strategy as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is a common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommendation systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project which aims to create an evaluation infrastructure allowing experimental systems to run along production web-based academic search systems with real users. STELLA combines user interactions and log files analyses to enable large-scale A/B experiments for academic search.
  6. Lewandowski, D.: Suchmaschinen (2023) 0.02
    
    Abstract
    A search engine (also: web search engine, universal search engine) is a computer system that captures content from the World Wide Web (WWW) by crawling and makes it searchable through a user interface, with the results presented in an order based on the relevance assumed by the system. This means that, unlike other information systems, search engines are not built on a clearly delimited data set but compile one from the documents scattered across the WWW. This data set is made accessible through a user interface designed so that laypersons can use the search engine without difficulty. The hits returned for a query are sorted so that the documents most relevant from the system's point of view are shown to users first. This involves complex ranking procedures that rest on numerous assumptions about the relevance of documents with respect to queries.
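    The definition above names the pipeline that distinguishes a search engine: gathering scattered web documents by crawling, building a searchable index, and sorting hits by system-assumed relevance. A minimal sketch of these steps (toy data and a bare term-frequency ranking; as the abstract notes, real engines use far more elaborate relevance models):

```python
import re
from collections import defaultdict

def build_index(pages):
    """pages: mapping url -> document text (stands in for crawled content)."""
    index = defaultdict(dict)  # term -> {url: term frequency}
    for url, text in pages.items():
        for term in re.findall(r"\w+", text.lower()):
            index[term][url] = index[term].get(url, 0) + 1
    return index

def search(index, query):
    """Score each page by summed query-term frequency, best-first."""
    scores = defaultdict(int)
    for term in re.findall(r"\w+", query.lower()):
        for url, tf in index.get(term, {}).items():
            scores[url] += tf
    return sorted(scores, key=scores.get, reverse=True)

# Toy "crawled" pages, for illustration only
pages = {"a.example": "web search engines crawl the web",
         "b.example": "crawling builds the index"}
idx = build_index(pages)
print(search(idx, "web crawl"))
```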
  7. Dorsch, I.; Haustein, S.: Bibliometrie (2023) 0.02
    
    Abstract
    Bibliometrics is a social-science discipline that historically rests on three developments: the positivist-functionalist philosophy that social facts can be studied objectively; the development of citation indexes and citation analysis to measure research performance; and the discovery of mathematical regularities that enabled the use of indicators in research evaluation.
  8. Ortega, J.L.: Classification and analysis of PubPeer comments : how a web journal club is used (2022) 0.02
    
    Abstract
    This study explores the use of PubPeer by the scholarly community, to understand the issues discussed in an online journal club, the disciplines most commented on, and the characteristics of the most prolific users. A sample of 39,985 posts about 24,779 publications were extracted from PubPeer in 2019 and 2020. These comments were divided into seven categories according to their degree of seriousness (Positive review, Critical review, Lack of information, Honest errors, Methodological flaws, Publishing fraud, and Manipulation). The results show that more than two-thirds of comments are posted to report some type of misconduct, mainly about image manipulation. These comments generate most discussion and take longer to be posted. By discipline, Health Sciences and Life Sciences are the most discussed research areas. The results also reveal "super commenters," users who access the platform to systematically review publications. The study ends by discussing how various disciplines use the site for different purposes.
  9. Bärnreuther, K.: Informationskompetenz-Vermittlung für Schulklassen mit Wikipedia und dem Framework Informationskompetenz in der Hochschulbildung (2021) 0.02
    
    Abstract
    The Framework for Information Literacy for Higher Education also lends itself as a library-didactic frame for courses aimed at school classes: although it was designed for offerings at colleges and universities, the upper grades of German Gymnasien prepare students for academic careers, so library offerings for prospective students can and should already follow the Framework. Teaching information literacy to pupils in a practical, real-life way can succeed with the Framework as a didactic frame and, in practice, with the example of Wikipedia, an online encyclopedia as popular with learners and teachers as it is frequently criticized. Not least because of the numerous Corona-related library closures, prospective Abitur candidates should be trained to become reflective, critical users of online search. Within the Framework, information literacy can be taught hands-on using Wikipedia as an example, and course participants can transfer it from Wikipedia to online search in general and to all other areas of academic work.
    Source
    o-bib: Das offene Bibliotheksjournal. 8(2021) Nr.2, S.1-22
  10. Kerst, V.; Ruhose, F.: Schleichender Blackout : wie wir das digitale Desaster verhindern (2023) 0.02
    Abstract
     An end to digitalization without rhyme or reason! A critical stocktaking and ways out of the crisis. Electronic patient records, digital classrooms, or simply trying to register online with the tax office: the topic of digital infrastructure is a pressing concern, both privately and professionally. We often get the impression that digitalization in Germany amounts to little more than an "electrification of administration". Why does the digital transformation of public agencies, schools, and companies stall? How do things stand with data security, and how dangerous are cyberattacks for public life? And more importantly: what might solutions look like that turn the many challenges into an opportunity? - The book on digital transformation: why does Germany struggle so much with it? - Digitalization strategy: the right balance between blockade and the watering-can principle - Cybersecurity: how critical infrastructure can be protected against hacker attacks - Digital administration: the path to a (once again) capable welfare state - Democracy in danger: a platform strategy for a resilient state, digital as well as analog. The challenge of digitalization: strategies for administration, business, and society. The author duo Valentina Kerst, former state secretary in Thuringia and now a digital strategy consultant for organizations and companies, and Fedor Ruhose, state secretary in Rhineland-Palatinate responsible for digitalization strategy, presents a readable, highly informative, and forward-looking non-fiction book. After a thorough analysis of the factors slowing digitalization down, it shows both citizens and state institutions what they can contribute to a positive development. As rewarding a read for decision-makers as for anyone who wants to join this socio-political debate and help shape the digital transformation!
    RSWK
    Computerkriminalität, Hacking / Digital- und Informationstechnologien: soziale und ethische Aspekte / Digitale- oder Internetökonomie / Informatik und Informationstechnologie / Informationstechnik (IT), allgemeine Themen / Internet, allgemein / Kommunal- und Regionalverwaltung / Maker und Hacker-Kultur / Medienwissenschaften: Internet, digitale Medien und Gesellschaft / Politik der Kommunal-, Regional- Landes- und Lokalpolitik / Öffentliche Verwaltung / Öffentlicher Dienst und öffentlicher Sektor / Deutschland / Abhängigkeit / Angriff / Blackout / Cyberraum / Cybersicherheit / Demokratie / Demokratiesicherung
  11. Meineck, S.: Gesichter-Suchmaschine PimEyes bricht das Schweigen : Neuer Chef (2022) 0.01
    Abstract
     PimEyes undermines the anonymity of anyone whose face can be found on the internet. After broad criticism, the Polish search engine relocated to the Seychelles. Now PimEyes has a new CEO - and is going public.
    Source
    https://netzpolitik.org/2022/neuer-chef-gesichter-suchmaschine-pimeyes-bricht-das-schweigen/?utm_source=pocket-newtab-global-de-DE
  12. Asubiaro, T.V.; Onaolapo, S.: ¬A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.01
    Abstract
    This is the first study that evaluated the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17.% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
    Object
    Web of Science
  13. Wang, H.; Song, Y.-Q.; Wang, L.-T.: Memory model for web ad effect based on multimodal features (2020) 0.01
    Abstract
    Web ad effect evaluation is a challenging problem in web marketing research. Although the analysis of web ad effectiveness has achieved excellent results, there are still some deficiencies. First, there is a lack of an in-depth study of the relevance between advertisements and web content. Second, there is not a thorough analysis of the impacts of users and advertising features on user browsing behaviors. And last, the evaluation index of the web advertisement effect is not adequate. Given the above problems, we conducted our work by studying the observer's behavioral pattern based on multimodal features. First, we analyze the correlation between ads and links with different searching results and further assess the influence of relevance on the observer's attention to web ads using eye-movement features. Then we investigate the user's behavioral sequence and propose the directional frequent-browsing pattern algorithm for mining the user's most commonly used browsing patterns. Finally, we offer the novel use of "memory" as a new measure of advertising effectiveness and further build an advertising memory model with integrated multimodal features for predicting the efficacy of web ads. A large number of experiments have proved the superiority of our method.
  14. Fielitz, M.; Marcks, H.: Digitaler Faschismus : die sozialen Medien als Motor des Rechtsextremismus (2020) 0.01
    Abstract
     Digital revolution: opportunity or threat for democracy? The internet and social media were once hailed as a chance for unlimited access to knowledge - and thus as the basis for a new flowering of democratic debate culture. Instead, we are now confronted with online hate, fake news, and conspiracy theories. Right-wing parties and organizations such as the AfD, Pegida, and the Identitarian Movement can spread their ideologies almost unhindered. Yet this is not mere "online radicalization", as the wave of right-wing violence such as the attacks in Halle and Hanau and a growing acceptance of far-right positions among the population demonstrate. Maik Fielitz and Holger Marcks analyze this development and get to the bottom of its causes: the role of social media in the rise of ultranationalism and right-wing motivated crimes; the far right's manipulation techniques: sowing confusion, stoking fears, and distorting majorities; far-right communication on the internet: conspiracy theories, threat myths, lies, and messages of hate; social media as a digital fire accelerant: facts, background, and analyses; self-regulation or political intervention? Ways out of the digital culture of hate. The authoritarian revolt poses a major challenge for democracies and open societies. How can we counter far-right tendencies? Politics as well as internet corporations are called upon to act. How can "digital fascism" be tamed without restricting our right to free expression? Maik Fielitz and Holger Marcks discuss these questions in depth. They examine the manipulative strategies and psychological tricks of far-right actors and show possible ways out of the predicament. Their book is an important contribution to the political debate!
     Social media have developed into a space of hate and untruth. Without these digital fire accelerants, neither the far right's electoral successes nor the latest wave of right-wing violence can be understood. Maik Fielitz and Holger Marcks get to the bottom of this development and its causes. They show the manipulative techniques far-right actors use on social media to amplify fears, sow confusion, and distort majorities. That their activity can unfold such dynamics has, in turn, to do with how social media themselves work: they favor the emergence and spread of threat myths that give direction to the leaderless mass of angry citizens. But how could this "digital fascism" be tamed without damaging the values of the open society? "The result of their research is so alarming that the researchers call the far right 'the new gatekeepers'. In the weeks after the New Year's Eve events in Cologne, for instance, five times as many anti-Muslim posts circulated as usual. More seriously still, far-right language and imagery have meanwhile become part of public discourse. Suddenly, for example, there was much talk of 'Nafris' - a police term for North Africans" (deutschlandfunkkultur.de)
    Footnote
     Reviewed under the title: Witte-Petit, K.: Radikale Rattenfänger : zwei Forscher beschreiben den 'digitalen Faschismus' in: Rheinpfalz, 19.08.2021 [Fielitz-Marcks_Rez_RP_20210819.pdf]: "There are by now many books about how social media ease the spread of extremist ideas. The book 'Digitaler Faschismus' presented by right-wing extremism researcher Maik Fielitz and radicalization expert Holger Marcks is among the genuinely worthwhile ones. Unfortunately, it also offers little hope."
  15. Michel, A.: Informationsdidaktik für verschiedene Wissenskulturen (2020) 0.01
    Abstract
     In recent months, a whole series of articles dealing with information literacy has appeared in Password Online. Each had a different focus, but they were united by a rather critical perspective on a universal set of competencies from which "information literacy" supposedly emerges. In the context of the current lively discourse on fake news, it is especially interesting that some authors explicitly emphasize social and emotional factors as relevant criteria for handling information. (With this text and the following contribution by Inka Tappenbeck, we want to take a closer look at "epistemic-cultural practice" as a further factor shaping what is to be understood as information literacy in different contexts.)
  16. Weiß, E.-M.: ChatGPT soll es richten : Microsoft baut KI in Suchmaschine Bing ein (2023) 0.01
    Abstract
     ChatGPT, the artificial intelligence of the moment, was developed by OpenAI. And OpenAI has received substantial support from Microsoft in the past. Now it is time to profit: the AI is to be built into the Bing search engine, which means direct competition for Google's search algorithms and intelligences. Bing has not been particularly successful there so far. As "The Information" reports, citing two insiders, Microsoft plans to build ChatGPT into its search engine Bing. The new, intelligent search could be available as early as March. At its in-house Ignite conference, Microsoft had previously announced the integration of the image generator DALL·E 2 into its search engine, though without a concrete launch date. Asked directly, ChatGPT itself does not yet confirm its future task - but it does know about the potential advantages.
    Source
    https://www.heise.de/news/ChatGPT-soll-es-richten-Microsoft-baut-KI-in-Suchmaschine-Bing-ein-7447837.html
  17. Positionspapier der DMV zur Verwendung bibliometrischer Daten (2020) 0.01
    Abstract
     Bibliometric data are increasingly used today in the evaluation of research output. These applications range from (indirect) use in the peer evaluation of grant proposals, through the assessment of applications in hiring committees or of requests for research bonuses, to the systematic collection of research-oriented performance indicators for institutions. With this document, the DMV wants to provide its members with a basis for discussion on the use of bibliometric data in connection with the evaluation of individuals and institutions in the field of mathematics, in particular also in comparison with other disciplines. A glossary at the end of the text briefly explains the most important terms.
    Issue
    Online: 21.02.2020.
  18. Baroncini, S.; Sartini, B.; Erp, M. Van; Tomasi, F.; Gangemi, A.: Is dc:subject enough? : A landscape on iconography and iconology statements of knowledge graphs in the semantic web (2023) 0.01
    Abstract
     In the last few years, the size of Linked Open Data (LOD) describing artworks, in general or domain-specific Knowledge Graphs (KGs), has been gradually increasing. This provides (art) historians and Cultural Heritage professionals with a wealth of information to explore. Specifically, structured data about iconographical and iconological (icon) aspects, i.e. information about the subjects, concepts and meanings of artworks, are extremely valuable for state-of-the-art computational tools, e.g. content recognition through computer vision. Nevertheless, a data quality evaluation for art domains, fundamental for data reuse, is still missing. The purpose of this study is to fill this gap with an overview of art-historical data quality in current KGs with a focus on icon aspects.
     Design/methodology/approach: This study's analyses are based on established KG evaluation methodologies, adapted to the domain by addressing requirements from art historians' theories. The authors first select several KGs according to Semantic Web principles. Then, the authors evaluate (1) their structures' suitability to describe icon information, through quantitative and qualitative assessment, and (2) their content, qualitatively assessed in terms of correctness and completeness.
     Findings: This study's results reveal several issues in the current expression of icon information in KGs. The content evaluation shows that these domain-specific statements are generally correct but often not complete. The incompleteness is confirmed by the structure evaluation, which highlights the unsuitability of the KG schemas to describe icon information with the required granularity.
     Originality/value: The main contribution of this work is an overview of the actual landscape of the icon information expressed in LOD, valuable to cultural institutions as a first domain-specific data quality evaluation. Since this study's results suggest that the selected domain information is underrepresented in Semantic Web datasets, the authors highlight the need for the creation and fostering of such information to provide a more thorough art-historical dimension to LOD.
    Theme
    Semantic Web
  19. Harlan, E.; Köppen, U.; Schnuck, O.; Wreschniok, L.: Fragwürdige Personalauswahl mit Algorithmen : KI zur Persönlichkeitsanalyse (2021) 0.01
    Abstract
     Providers of new software promise job interviews that are supposed to proceed with less bias: an "artificial intelligence" creates personality profiles of applicants from short videos. But the technology has weaknesses, as an analysis by BR (Bayerischer Rundfunk) shows.
  20. Pee, L.G.; Pan, S.L.: Social informatics of information value cocreation : a case study of xiaomi's online user community (2020) 0.01
    Abstract
    The perennial issue of information value creation needs to be understood in the contemporary era of a more networked user environment enabled by information technology (IT). This mixed-methods study investigates information value cocreation from the social informatics perspective to surface sociotechnical implications for IT design and use, since cocreation is inherently social and technology-mediated. Specifically, the cocreation of software as an information-intensive product is examined. Data on the cocreation of Xiaomi's MIUI firmware were collected from two sources: 49 interviews of staff and user participants and web crawling of the cocreation platform. They were analyzed with interpretive analysis, topic modeling, and social network analysis for triangulation. Findings indicate three sociotechnical information practices co-constituted by information, IT, people, and their activities. Each practice is instrumental in rapidly and continuously converting external information into cocreated information value. The adsorption information practice attracts new and diverse external information; the absorption practice integrates external and internal information rapidly by involving users; the desorption practice allows rapid adoption of the cocreated product so that information value can be realized and demonstrated for further cocreation. Critically analyzing these practices reveals unanticipated or paradoxical issues affecting the design and use of common cocreation technology such as discussion forums.

Languages

  • e 294
  • d 130
  • pt 3

Types

  • a 373
  • el 89
  • m 20
  • p 6
  • s 4
  • x 2
  • A 1
  • EL 1
  • r 1
