Search (887 results, page 1 of 45)

  • Filter: year_i:[2020 TO 2030}
  1. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.39
    0.39318 = 0.5625 coord(9/16) × 0.6989867, the sum of [ClassicSimilarity, doc=1000, fieldNorm=0.0390625]:
      0.04156021 _text_:3a (0.12468062 × coord 1/3)
      0.018474855 _text_:web (freq=2)
      0.6234031 _text_:2f (five identical clauses of 0.12468062 each, freq=2)
      0.00798859 _text_:online (0.01597718 × coord 1/2)
      0.007559912 _text_:information (freq=4)
    each clause score = tf(freq) · idf² · queryNorm(0.031400457) · fieldNorm
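The explain output above follows Lucene's ClassicSimilarity (TF-IDF): each matching clause contributes tf(freq) · idf² · queryNorm · fieldNorm, and the document score multiplies the summed clauses by a coordination factor. A minimal sketch reproducing the `_text_:2f` clause and the final score from the values shown:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """Lucene ClassicSimilarity clause score:
    tf = sqrt(freq); queryWeight = idf * queryNorm;
    fieldWeight = tf * idf * fieldNorm; score = queryWeight * fieldWeight."""
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

# values copied from the explain output for doc=1000, term _text_:2f
score_2f = classic_term_score(freq=2.0, idf=8.478011,
                              query_norm=0.031400457, field_norm=0.0390625)
# score_2f is ~0.1246806, matching 0.12468062 up to float rounding

# final document score: the summed clause scores times coord(9/16)
total = 0.6989867 * (9 / 16)  # ~0.39318
```

The coord factor penalizes documents that match only a fraction of the query clauses (here 9 of 16).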
    
    Abstract
    Presented is the construction of a thematically ordered thesaurus based on the subject headings of the Integrated Authority File (Gemeinsame Normdatei, GND), using the DDC notations it contains. The top ordering level of this thesaurus is formed by the DDC subject groups of the German National Library (DNB). The thesaurus is constructed rule-based, applying Linked Data principles in a SPARQL processor. It serves the automated extraction of metadata from scholarly publications by means of a computational-linguistic extractor that processes digital full texts. The extractor identifies keywords by comparing character strings against the labels in the thesaurus, ranks the hits by relevance within the text, and returns the assigned subject groups in rank order. The underlying assumption is that the sought subject group appears among the top ranks. The performance of the approach is validated in a three-stage procedure. First, based on metadata and the findings of a brief inspection, a gold standard is built from documents retrievable in the DNB online catalogue. The documents are distributed over 14 of the subject groups, with a batch size of 50 documents each. All documents are processed with the extractor and the categorization results are documented. Finally, the resulting retrieval performance is assessed both for a hard (binary) categorization and for a ranked return of the subject groups.
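The extraction step the abstract describes (matching thesaurus labels against a full text by string comparison, then returning the associated DDC subject groups in rank order) can be sketched roughly as follows. The mini-thesaurus and the sample sentence are invented placeholders, not actual GND data:

```python
from collections import Counter

# hypothetical mini-thesaurus: preferred label -> DDC subject group (Sachgruppe)
THESAURUS = {
    "thesaurus": "020 Library and information sciences",
    "klassifikation": "020 Library and information sciences",
    "algorithmus": "004 Computer science",
    "datenbank": "004 Computer science",
}

def rank_subject_groups(fulltext: str):
    """Match thesaurus labels by string comparison, then return the
    associated subject groups ranked by aggregated hit frequency."""
    text = fulltext.lower()
    groups = Counter()
    for label, group in THESAURUS.items():
        hits = text.count(label)          # naive character-string comparison
        if hits:
            groups[group] += hits
    return [g for g, _ in groups.most_common()]

ranked = rank_subject_groups(
    "Ein Thesaurus ordnet Begriffe; die Klassifikation nutzt den Thesaurus.")
# the library-science Sachgruppe accumulates the most hits and ranks first
```

The validation in the thesis then asks whether the correct Sachgruppe appears among the top ranks of such a list.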
    Content
    Master thesis Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Vgl.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Vgl. dazu die Präsentation unter: https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=web&cd=&ved=0CAIQw7AJahcKEwjwoZzzytz_AhUAAAAAHQAAAAAQAg&url=https%3A%2F%2Fwiki.dnb.de%2Fdownload%2Fattachments%2F252121510%2FDA3%2520Workshop-Gabler.pdf%3Fversion%3D1%26modificationDate%3D1671093170000%26api%3Dv2&psig=AOvVaw0szwENK1or3HevgvIDOfjx&ust=1687719410889597&opi=89978449.
    Imprint
    Universität Wien / Library and Information Studies
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.38
    0.37973848 = 0.4375 coord(7/16) × 0.8679737, the sum of [ClassicSimilarity, doc=862, fieldNorm=0.046875]:
      0.04987225 _text_:3a (0.14961675 × coord 1/3)
      0.07001776 _text_:2.0 (freq=2)
      0.74808375 _text_:2f (five identical clauses of 0.14961675 each, freq=2)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
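The readability scoring the abstract mentions can be illustrated with a standard surface metric; the Flesch Reading Ease formula below is a common stand-in for this kind of measurement, not necessarily the metric the authors define:

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Higher scores mean easier prose."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word: str) -> int:
        # crude vowel-group count as a syllable estimate
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (n_words / sentences)
            - 84.6 * (n_syllables / n_words))

simple = flesch_reading_ease("The cat sat. The dog ran.")
dense = flesch_reading_ease(
    "Epistemological considerations notwithstanding, computational "
    "linguistics necessitates interdisciplinary methodological rigor.")
# short words and short sentences score markedly higher than dense prose
```

Comparing such scores between human-written and machine-generated passages is one way to quantify "writing mechanics" differences of the kind the paper reports.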
    Source
    https://arxiv.org/abs/2212.06721
  3. Breuer, T.; Tavakolpoursaleh, N.; Schaer, P.; Hienert, D.; Schaible, J.; Castro, L.J.: Online Information Retrieval Evaluation using the STELLA Framework (2022) 0.04
    0.036828764 = 0.3125 coord(5/16) × 0.11785204, the sum of [ClassicSimilarity, doc=640, fieldNorm=0.046875]:
      0.022169823 _text_:web (freq=2)
      0.009586309 _text_:online (0.019172618 × coord 1/2)
      0.012829596 _text_:information (freq=8)
      0.02693603 _text_:retrieval (freq=4)
      0.04633028 _text_:software (freq=4)
    
    Abstract
    Involving users in early phases of software development has become a common strategy as it enables developers to consider user needs from the beginning. Once a system is in production, new opportunities to observe, evaluate and learn from users emerge as more information becomes available. Gathering information from users to continuously evaluate their behavior is a common practice for commercial software, while the Cranfield paradigm remains the preferred option for Information Retrieval (IR) and recommendation systems in the academic world. Here we introduce the Infrastructures for Living Labs STELLA project which aims to create an evaluation infrastructure allowing experimental systems to run along production web-based academic search systems with real users. STELLA combines user interactions and log files analyses to enable large-scale A/B experiments for academic search.
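The large-scale A/B experimentation described for STELLA can be sketched as a deterministic traffic split between a production ranker and an experimental ranker, with clicks logged per arm. The assignment rule, names, and sample data below are illustrative, not STELLA's actual implementation:

```python
import hashlib
from collections import defaultdict

def assign_arm(user_id: str, experiment: str = "exp-001") -> str:
    """Deterministic 50/50 split: hash user+experiment so repeat visits
    by the same user always see the same system (A = production,
    B = experimental)."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# log simulated impressions/clicks per arm and compare click-through rates
stats = defaultdict(lambda: {"impressions": 0, "clicks": 0})
for uid, clicked in [("u1", True), ("u2", False), ("u3", True), ("u4", True)]:
    arm = assign_arm(uid)
    stats[arm]["impressions"] += 1
    stats[arm]["clicks"] += int(clicked)

ctr = {arm: s["clicks"] / s["impressions"] for arm, s in stats.items()}
```

A living lab replaces the simulated interactions above with real user sessions on the production system, which is what makes the comparison ecologically valid relative to Cranfield-style offline evaluation.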
  4. Habermas, J.: Überlegungen und Hypothesen zu einem erneuten Strukturwandel der politischen Öffentlichkeit : ¬Ein neuer Strukturwandel der Öffentlichkeit? Hrsg.: M. Seeliger u. S. Sevignani (2021) 0.03
    0.03077462 = 0.125 coord(2/16) × 0.24619696, the sum of [ClassicSimilarity, doc=402, fieldNorm=0.078125]:
      0.116696276 _text_:2.0 (freq=2)
      0.12950069 _text_:soziale (freq=2)
    
    Footnote
    Cf. El Ouassil, S.: Habermas und die Demokratie 2.0: Philosoph über Soziale Medien [Habermas and democracy 2.0: a philosopher on social media]. At: https://www.spiegel.de/kultur/juergen-habermas-strukturwandel-der-oeffentlichkeit-in-der-2-0-version-a-2e683f52-3ccd-4985-a750-5e1a1823ad08.
  5. Grundlagen der Informationswissenschaft (2023) 0.03
    0.027839445 = 0.3125 coord(5/16) × 0.08908622, the sum of [ClassicSimilarity, doc=1043, fieldNorm=0.0234375]:
      0.011084911 _text_:web (freq=2)
      0.03384992 _text_:benutzer (freq=2)
      0.0047931545 _text_:online (0.009586309 × coord 1/2)
      0.012422203 _text_:information (freq=30)
      0.02693603 _text_:retrieval (freq=16)
    
    Abstract
    The 7th edition of the "Grundlagen der praktischen Information und Dokumentation" (first edition 1972) is now titled "Grundlagen der Informationswissenschaft". The connection to practice and to professional education is retained, but the new title reflects the fact that sound theoretical grounding is becoming ever more important for all areas of knowledge and information, not only in specialized information services but also in the information services of the Internet. The volume comprises 73 articles in 6 main chapters. Many topics are treated for the first time, e.g. information and emotion, informational self-determination, information pathologies. All contributions have been newly written.
    Content
    Enthält die Kapitel: Grußwort Hochschulverband Informationswissenschaft / Vorwort der Herausgeber / Rainer Kuhlen & Wolfgang Semar: A 1 Information - ein Konstrukt mit Folgen - 3 / Marlies Ockenfeld: A 2 Institutionalisierung der Informationswissenschaft und der IuD-Infrastruktur in Deutschland - 27 / Hans-Christoph Hobohm: A 3 Theorien in der Informationswissenschaft - 45 / Julia Maria Struß & Dirk Lewandowski: A 4 Methoden in der Informationswissenschaft - 57 / Ursula Georgy, Frauke Schade & Stefan Schmunk A 5 Ausbildung, Studium und Weiterbildung in der Informationswissenschaft - 71 / Robert Strötgen & René Schneider: A 6 Bibliotheken - 83 / Karin Schwarz: A 7 Archive - 93 / Hartwig Lüdtke: A 8 Museen - 103 / Barbara Müller-Heiden: A 9 Mediatheken - 111 / Ragna Seidler-de Alwis: A 10 Information Professionals - 117 / Axel Ermert: A 11 Normen und Standardisierung im Informationsbereich - 123 / Thomas Bähr: A 12 Langzeitarchivierung - 135 / Ulrich Reimer: B 1 Einführung in die Wissensorganisation - 147 / Gerd Knorz: B 2 Intellektuelles Indexieren - 159 / Klaus Lepsky: B 3 Automatisches Indexieren - 171 / Andreas Oskar Kempf: B 4 Thesauri - 183 / Michael Kleineberg: B 5 Klassifikation - 195 / Heidrun Wiesenmüller: B 6 Formale Erschließung - 207 / Jochen Fassbender: B 7 Register/Indexe - 219 / Udo Hahn: B 8 Abstracting - Textzusammenfassung - 233 / Rolf Assfalg: B 9 Metadaten - 245 / Heiko Rölke & Albert Weichselbraun: B 10 Ontologien und Linked Open Data - 257 / Isabelle Dorsch & Stefanie Haustein: B 11 Bibliometrie - 271 / Udo Hahn: B 12 Automatische Sprachverarbeitung - 281 /
    Hans-Christian Jetter: B 13 Informationsvisualisierung und Visual Analytics - 295 / Melanie Siegel: B 14 Maschinelle Übersetzung - 307 / Ulrich Herb: B 15 Verfahren der wissenschaftlichen Qualitäts-/ Relevanzsicherung / Evaluierung - 317 / Thomas Mandl: B 16 Text Mining und Data Mining - 327 / Heike Neuroth: B 17 Forschungsdaten - 339 / Isabella Peters: B 18 Folksonomies & Social Tagging - 351 / Christa Womser-Hacker: C 1 Informationswissenschaftliche Perspektiven des Information Retrieval - 365 / Norbert Fuhr: C 2 Modelle im Information Retrieval - 379 / Dirk Lewandowski: C 3 Suchmaschinen - 391 / David Elsweiler & Udo Kruschwitz: C 4 Interaktives Information Retrieval - 403 / Thomas Mandl & Sebastian Diem: C 5 Bild- und Video-Retrieval - 413 / Maximilian Eibl, Josef Haupt, Stefan Kahl, Stefan Taubert & Thomas Wilhelm-Stein: C 6 Audio- und Musik-Retrieval - 423 / Christa Womser-Hacker: C 7 Cross-Language Information Retrieval (CLIR) - 433 / Vivien Petras & Christa Womser-Hacker: C 8 Evaluation im Information Retrieval - 443 / Philipp Schaer: C 9 Sprachmodelle und neuronale Netze im Information Retrieval - 455 / Stefanie Elbeshausen: C 10 Modellierung von Benutzer*innen, Kontextualisierung, Personalisierung - 467 / Ragna Seidler-de Alwis: C 11 Informationsrecherche - 477 / Ulrich Reimer: C 12 Empfehlungssysteme - 485 / Elke Greifeneder & Kirsten Schlebbe: D 1 Information Behaviour - 499 / Nicola Döring: D 2 Computervermittelte Kommunikation - 511 / Hans-Christian Jetter: D 3 Mensch-Computer-Interaktion, Usability und User Experience - 525 / Gabriele Irle: D 4 Emotionen im Information Seeking - 535 /
    Kirsten Schlebbe & Elke Greifeneder: D 5 Information Need, Informationsbedarf und -bedürfnis - 543 / Dirk Lewandowski & Christa Womser-Hacker: D 6 Information Seeking Behaviour - 553 / Wolfgang Semar: D 7 Informations- und Wissensmanagement - 567 / Joachim Griesbaum: D 8 Informationskompetenz - 581 / Antje Michel, Maria Gäde, Anke Wittich & Inka Tappenbeck: D 9 Informationsdidaktik - 595 / Rainer Kuhlen: E 1 Informationsmarkt - 605 / Wolfgang Semar: E 2 Plattformökonomie - 621 / Tassilo Pellegrini & Jan Krone: E 3 Medienökonomie - 633 / Christoph Bläsi: E 4 Verlage in Wissenschaft und Bildung - 643 / Irina Sens, Alexander Pöche, Dana Vosberg, Judith Ludwig & Nicola Bieg: E 5 Lizenzierungsformen - 655 / Joachim Griesbaum: E 6 Online-Marketing - 667 / Frauke Schade & Ursula Georgy: E 7 Marketing für Informationseinrichtungen - 679 / Isabella Peters: E 8 Social Media & Social Web - 691 / Klaus Tochtermann & Anna Maria Höfler: E 9 Open Science - 703 / Ulrich Herb & Heinz Pampel: E 10 Open Access - 715 / Tobias Siebenlist: E 11 Open Data - 727 / Sigrid Fahrer & Tamara Heck: E 12 Open Educational Resources - 735 / Tobias Siebenlist: E 13 Open Government - 745 / Herrmann Rösch: F 1 Informationsethik - 755 / Bernard Bekavac: F 2 Informations-, Kommunikationstechnologien- und Webtechnologien - 773 / Peter Brettschneider: F 3 Urheberrecht - 789 / Johannes Caspar: F 4 Datenschutz und Informationsfreiheit - 803 / Norman Meuschke, Nicole Walger & Bela Gipp: F 5 Plagiat - 817 / Rainer Kuhlen: F 6 Informationspathologien - Desinformation - 829 / Glossar
  6. Bergman, O.; Israeli, T.; Whittaker, S.: Factors hindering shared files retrieval (2020) 0.03
    0.02502045 = 0.25 coord(4/16) × 0.1000818, the sum of [ClassicSimilarity, doc=5843, fieldNorm=0.0390625]:
      0.01195327 _text_:information (freq=10)
      0.050192326 _text_:retrieval (freq=20)
      0.027300376 _text_:software (freq=2)
      0.010635821 _text_:22 (0.021271642 × coord 1/2)
    
    Abstract
    Purpose: Personal information management (PIM) is an activity in which people store information items in order to retrieve them later. The purpose of this paper is to test and quantify the effect of factors related to collection size, file properties and workload on file retrieval success and efficiency.
    Design/methodology/approach: In the study, 289 participants retrieved 1,557 of their shared files in a naturalistic setting. The study used specially developed software designed to collect shared files' names and present them as targets for the retrieval task. The dependent variables were retrieval success, retrieval time and missteps.
    Findings: Various factors compromise shared files retrieval, including collection size (large number of files), file properties (multiple versions, size of team sharing the file, time since most recent retrieval and folder depth) and workload (daily e-mails sent and received). The authors discuss theoretical reasons for these negative effects and suggest possible ways to overcome them.
    Originality/value: Retrieval is the main reason people manage personal information. It is essential for retrieval to be successful and efficient, as information cannot be used unless it can be re-accessed. Prior PIM research has assumed that factors related to collection size, file properties and workload affect file retrieval. However, this is the first study to systematically quantify the negative effects of these factors. As each of these factors is expected to be exacerbated in the future, this study is a necessary first step toward addressing these problems.
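Two of the factors the study quantifies, folder depth and collection size, are easy to compute from file paths; a rough sketch with made-up paths (the study's own instrumentation software is not reproduced here):

```python
from pathlib import PurePosixPath

def folder_depth(path: str) -> int:
    """Number of folders between the share root and the file itself."""
    return len(PurePosixPath(path).parts) - 1   # exclude the file name

# hypothetical shared collection; the _v2/_v3 pair illustrates the
# "multiple versions" factor the study found to hinder retrieval
shared_files = [
    "reports/q1/final/budget_v3.xlsx",
    "reports/q1/final/budget_v2.xlsx",
    "notes.txt",
]

collection_size = len(shared_files)
depths = {f: folder_depth(f) for f in shared_files}
# per the study, deeper files and larger collections go with slower,
# less successful retrieval
```

Features like these are what a predictive model of retrieval success or retrieval time would take as input.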
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 72(2020) no.1, S.130-147
  7. Hong, H.; Ye, Q.: Crowd characteristics and crowd wisdom : evidence from an online investment community (2020) 0.02
    0.02453646 = 0.25 coord(4/16) × 0.09814584, the sum of [ClassicSimilarity, doc=5763, fieldNorm=0.0390625]:
      0.018474855 _text_:web (freq=2)
      0.058348138 _text_:2.0 (freq=2)
      0.01597718 _text_:online (0.03195436 × coord 1/2, freq=8)
      0.005345665 _text_:information (freq=2)
    
    Abstract
    Fueled by the explosive growth of Web 2.0 and social media, online investment communities have become a popular venue for individual investors to interact with each other. Investor opinions extracted from online investment communities capture "crowd wisdom" and have begun to play an important role in financial markets. Existing research confirms the importance of crowd wisdom in stock predictions, but fails to investigate factors influencing crowd performance (that is, crowd prediction accuracy). In order to help improve crowd performance, our research strives to investigate the impact of crowd characteristics on crowd performance. We conduct an empirical study using a large data set collected from a popular online investment community, StockTwits. Our findings show that experience diversity, participant independence, and network decentralization are all positively related to crowd performance. Furthermore, crowd size moderates the influence of crowd characteristics on crowd performance. From a theoretical perspective, our work enriches extant literature by empirically testing the relationship between crowd characteristics and crowd performance. From a practical perspective, our findings help investors better evaluate social sensors embedded in user-generated stock predictions, based upon which they can make better investment decisions.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.4, S.423-435
  8. Alipour, O.; Soheili, F.; Khasseh, A.A.: A co-word analysis of global research on knowledge organization: 1900-2019 (2022) 0.02
    Abstract
    The study's objective is to analyze the structure of knowledge organization studies conducted worldwide. This applied research was conducted with a scientometric approach using co-word analysis. The research records consisted of all articles published in the journals Knowledge Organization and Cataloging & Classification Quarterly, together with keywords related to the field of knowledge organization indexed in Web of Science from 1900 to 2019; in total, 17,950 records were analyzed in plain-text format. The total number of keywords was 25,480, which was reduced to 12,478 keywords after modifications and removal of duplicates. Then, 115 keywords with a frequency of at least 18 were included in the final analysis, and the co-word network was drawn. BibExcel, UCINET, VOSviewer, and SPSS software were used to build matrices, analyze co-word networks, and draw dendrograms; strategic diagrams were drawn using Excel. The keywords "information retrieval," "classification," and "ontology" are among the most frequently used keywords in knowledge organization articles. Findings revealed that "Ontology*Semantic Web", "Digital Library*Information Retrieval" and "Indexing*Information Retrieval" are the most frequent co-word pairs, respectively. The results of hierarchical clustering indicated that global research on knowledge organization consists of eight main thematic clusters; the largest covers "classification, indexing, and information retrieval," while the smallest deal with "data processing" and "theoretical concepts of information and knowledge organization." Cluster 1 (cataloging standards and knowledge organization) has the highest density, while Cluster 5 (classification, indexing, and information retrieval) has the highest centrality.
    According to the findings of this research, the keyword "information retrieval" has played a significant role in knowledge organization studies, both as a single keyword and within co-word pairs, whose members are typically linked by related or broader-topic relationships. The results indicate that information retrieval is one of the main topics in knowledge organization, while the theoretical concepts of knowledge organization have been neglected. Overall, the co-word structure of knowledge organization research reflects the multiplicity of concepts and topics studied in this field globally.
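    The core of the co-word method described above is counting how often two keywords appear together in the same record. A minimal sketch in Python (the keyword lists are invented placeholders, not data from the study):

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists for a handful of records, standing in
# for the 17,950 records analyzed in the study.
records = [
    ["information retrieval", "classification", "indexing"],
    ["ontology", "semantic web", "information retrieval"],
    ["classification", "indexing", "information retrieval"],
    ["digital library", "information retrieval"],
]

# Count how often each keyword pair co-occurs within a record.
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# The most frequent pairs form the edges of the co-word network,
# which tools like VOSviewer then visualize and cluster.
for pair, freq in cooccurrence.most_common(3):
    print(pair, freq)
```

    In practice the resulting pair counts are thresholded (here, the study kept keywords with frequency >= 18) before clustering.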
  9. Scherschel, F.A.: Corona-Tracking : SAP und Deutsche Telekom veröffentlichen erste Details zur Tracing- und Warn-App (2020) 0.02
    Abstract
    On behalf of the German federal government, SAP and Deutsche Telekom are currently developing a contact-tracing app as part of Apple's and Google's Exposure Notification framework. The so-called Corona-Warn-App and all server components it uses are to be published on GitHub as open-source software under the Apache 2.0 license ahead of the app's release, planned for next month. The project leads have now issued the first documents on how the app is supposed to work: https://github.com/corona-warn-app/cwa-documentation.
    Content
    Cf.: https://www.heise.de/-4721652. See also the article "Corona-Warnung per App: Fragen und Antworten zur geplanten Tracing-App" at: https://www.verbraucherzentrale.de/wissen/digitale-welt/apps-und-software/coronawarnung-per-app-fragen-und-antworten-zur-geplanten-tracingapp-47466
    Series
    Heise Online
  10. Soshnikov, D.: ROMEO: an ontology-based multi-agent architecture for online information retrieval (2021) 0.02
    Abstract
    This paper describes an approach to path-finding in intelligent graphs whose vertices are intelligent agents, and a possible implementation of this approach based on logical inference in a distributed frame hierarchy. The presented approach can be used to implement distributed intelligent information systems featuring automatic navigation and path generation in hypertext, which can be used, for example, in distance education, as well as to organize intelligent web catalogues with flexible ontology-based information retrieval.
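    Path-finding over a graph of agent nodes, as the abstract describes, can be illustrated with a plain breadth-first search; this is a generic sketch, not the paper's inference-based method, and the node names are hypothetical:

```python
from collections import deque

# Hypothetical graph of agent nodes; edges are the links each
# agent exposes to its neighbours.
graph = {
    "catalogue": ["ontology", "search"],
    "ontology": ["concepts"],
    "search": ["concepts", "results"],
    "concepts": ["results"],
    "results": [],
}

def find_path(start, goal):
    """Breadth-first search returning a shortest path of agent nodes."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # goal unreachable from start

print(find_path("catalogue", "results"))
```

    In the ROMEO setting, the edge set would not be a static dictionary but would be derived by logical inference over the agents' frame hierarchy.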
  11. Womser-Hacker, C.: Informationswissenschaftliche Perspektiven des Information Retrieval (2023) 0.02
    Abstract
    Several disciplines are engaged, to varying extents and from different perspectives, in research and development on information retrieval (IR). These different orientations are important, since only in combination do they convey an overall picture of IR. Computer science pursues a more system-driven, technological approach to IR and foregrounds algorithms and implementations, whereas for information science the focus is on users in their multilayered contexts. Users' characteristics (subject background, domain, expertise, etc.) and the goals they pursue through IR play a central role in the interaction process between human and system. The questions of how users behave in these processes and why they turn to particular systems are also investigated intensively. Since a large share of today's knowledge is still represented in texts, a further discipline, computational linguistics/language technology, is important for IR. In addition, visual and auditory knowledge objects are increasingly coming into play and, given their growing volume, are becoming ever more important for IR. A newer field is data science, which builds on long-established concepts from statistics and probability theory, operates on data, and also draws on traditional IR knowledge to combine structured facts with unstructured texts. Here, the information science perspective takes center stage.
  12. Lee, H.S.; Arnott Smith, C.: A comparative mixed methods study on health information seeking among US-born/US-dwelling, Korean-born/US-dwelling, and Korean-born/Korean-dwelling mothers (2022) 0.02
    Abstract
    More knowledge and a better understanding of health information seeking are necessary, especially in these unprecedented times due to the COVID-19 pandemic. Using Sonnenwald's theoretical concept of information horizons, this study aimed to uncover patterns in mothers' source preferences related to their children's health. Online surveys were completed by 851 mothers (255 US-born/US-dwelling, 300 Korean-born/US-dwelling, and 296 Korean-born/Korean-dwelling), and supplementary in-depth interviews with 24 mothers were conducted and analyzed. Results indicate that there were remarkable differences between the mothers' information source preference and their actual source use. Moreover, there were many similarities between the two Korean-born groups concerning health information-seeking behavior. For instance, those two groups sought health information more frequently than US-born/US-dwelling mothers. Their sources frequently included blogs or online forums as well as friends with children, whereas US-born/US-dwelling mothers frequently used doctors or nurses as information sources. Mothers in the two Korean-born samples preferred the World Wide Web most as their health information source, while the US-born/US-dwelling mothers preferred doctors the most. Based on these findings, information professionals should guide mothers of specific ethnicities and nationalities to trustworthy sources considering both their usage and preferences.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.7, S.929-943
  13. Smith, A.: Simple Knowledge Organization System (SKOS) (2022) 0.02
    Abstract
    SKOS (Simple Knowledge Organization System) is a recommendation from the World Wide Web Consortium (W3C) for representing controlled vocabularies, taxonomies, thesauri, classifications, and similar systems for organizing and indexing information as linked data elements in the Semantic Web, using the Resource Description Framework (RDF). The SKOS data model is centered on "concepts", which can have preferred and alternate labels in any language as well as other metadata, and which are identified by addresses on the World Wide Web (URIs). Concepts are grouped into hierarchies through "broader" and "narrower" relations, with "top concepts" at the broadest conceptual level. Concepts are also organized into "concept schemes", also identified by URIs. Other relations, mappings, and groupings are also supported. This article discusses the history of the development of SKOS and provides notes on adoption, uses, and limitations.
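    The SKOS data model described above (URI-identified concepts with language-tagged preferred/alternate labels, broader/narrower relations, and concept schemes) can be sketched in plain Python; all URIs and labels below are invented placeholders, and a real deployment would serialize these structures as RDF:

```python
# Minimal sketch of the SKOS data model: a concept scheme grouping
# URI-identified concepts, each with language-tagged labels and
# broader/narrower links.
scheme = {
    "uri": "http://example.org/scheme/animals",
    "top_concepts": ["http://example.org/concept/animal"],
}

concepts = {
    "http://example.org/concept/animal": {
        "prefLabel": {"en": "Animal", "de": "Tier"},
        "altLabel": {"en": ["Creature"]},
        "broader": [],
        "narrower": ["http://example.org/concept/mammal"],
    },
    "http://example.org/concept/mammal": {
        "prefLabel": {"en": "Mammal", "de": "Säugetier"},
        "altLabel": {},
        "broader": ["http://example.org/concept/animal"],
        "narrower": [],
    },
}

def ancestors(uri):
    """Walk broader links from a concept up toward the top concepts."""
    out = []
    stack = list(concepts[uri]["broader"])
    while stack:
        parent = stack.pop()
        out.append(parent)
        stack.extend(concepts[parent]["broader"])
    return out

print(ancestors("http://example.org/concept/mammal"))
```

    Libraries such as rdflib provide the corresponding RDF machinery for production use, where skos:prefLabel, skos:broader, etc. are proper RDF properties rather than dictionary keys.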
  14. Ortega, J.L.: Classification and analysis of PubPeer comments : how a web journal club is used (2022) 0.02
    Abstract
    This study explores the use of PubPeer by the scholarly community, to understand the issues discussed in an online journal club, the disciplines most commented on, and the characteristics of the most prolific users. A sample of 39,985 posts about 24,779 publications were extracted from PubPeer in 2019 and 2020. These comments were divided into seven categories according to their degree of seriousness (Positive review, Critical review, Lack of information, Honest errors, Methodological flaws, Publishing fraud, and Manipulation). The results show that more than two-thirds of comments are posted to report some type of misconduct, mainly about image manipulation. These comments generate most discussion and take longer to be posted. By discipline, Health Sciences and Life Sciences are the most discussed research areas. The results also reveal "super commenters," users who access the platform to systematically review publications. The study ends by discussing how various disciplines use the site for different purposes.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.5, S.655-670
    Theme
    Elektronisches Publizieren
  15. Peters, I.: Folksonomies & Social Tagging (2023) 0.02
    Abstract
    Research on, and use of, folksonomies and social tagging as user-centered forms of subject indexing and knowledge representation peaked in the roughly ten years from about 2005. This was driven by the development and spread of the Social Web and the growing use of social media platforms (see chapter E 8, Social Media and Social Web). Both led to a rapid increase in the amount of potential information findable on or via the World Wide Web and generated strong demand for scalable methods of subject indexing.
  16. Pee, L.G.; Pan, S.L.: Social informatics of information value cocreation : a case study of xiaomi's online user community (2020) 0.02
    Abstract
    The perennial issue of information value creation needs to be understood in the contemporary era of a more networked user environment enabled by information technology (IT). This mixed-methods study investigates information value cocreation from the social informatics perspective to surface sociotechnical implications for IT design and use, since cocreation is inherently social and technology-mediated. Specifically, the cocreation of software as an information-intensive product is examined. Data on the cocreation of Xiaomi's MIUI firmware were collected from two sources: 49 interviews of staff and user participants and web crawling of the cocreation platform. They were analyzed with interpretive analysis, topic modeling, and social network analysis for triangulation. Findings indicate three sociotechnical information practices co-constituted by information, IT, people, and their activities. Each practice is instrumental in rapidly and continuously converting external information into cocreated information value. The adsorption information practice attracts new and diverse external information; the absorption practice integrates external and internal information rapidly by involving users; the desorption practice allows rapid adoption of the cocreated product so that information value can be realized and demonstrated for further cocreation. Critically analyzing these practices reveals unanticipated or paradoxical issues affecting the design and use of common cocreation technology such as discussion forums.
    Source
    Journal of the Association for Information Science and Technology. 71(2020) no.4, S.409-422
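Each result in this listing carries a Lucene relevance score (the trailing number on the title line). Under ClassicSimilarity, a term's contribution is score = queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm. A minimal sketch reproducing the statistics reported for the term "web" in result no. 16 (docFreq = 4597, maxDocs = 44218, freq = 2, fieldNorm = 0.0390625, queryNorm = 0.031400457):

```python
import math

def classic_idf(doc_freq: int, num_docs: int) -> float:
    """Lucene ClassicSimilarity idf: 1 + ln(numDocs / (docFreq + 1))."""
    return 1.0 + math.log(num_docs / (doc_freq + 1))

def classic_tf(freq: float) -> float:
    """Lucene ClassicSimilarity tf: square root of the term frequency."""
    return math.sqrt(freq)

# Statistics for the term "web" in result no. 16 (doc 5766).
query_norm = 0.031400457
idf = classic_idf(4597, 44218)                    # ~3.2635105
query_weight = idf * query_norm                   # ~0.10247572
field_weight = classic_tf(2.0) * idf * 0.0390625  # tf * idf * fieldNorm ~0.18028519
term_score = query_weight * field_weight          # ~0.018474855

print(f"idf={idf:.7f} fieldWeight={field_weight:.8f} score={term_score:.9f}")
```

The per-term scores are then summed and scaled by a coordination factor (coord(4/16) = 0.25 for this result) to yield the final document score.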
  17. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.02
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has attracted growing attention. However, these statistically important phrases contribute increasingly little to the related tasks, because end-to-end learning enables models to learn the important semantic information of a text directly. Similarly, keyphrases are of little help to readers trying to quickly grasp a paper's main idea, because the relationship between a keyphrase and the paper is not explicit. We therefore propose to generate keyphrases with specific functions for readers, bridging the semantic gap between readers and information producers, and we verify the effectiveness of the keyphrase function for assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented on top of Transformer, BART, and T5, respectively. For the Computer Science domain, the Macro-avg scores on the Paper with Code dataset reach up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
    Date
    22. 6.2023 14:55:20
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.759-774
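The CKPG framework in result no. 17 conditions a seq2seq model (BART/T5) on the desired keyphrase function via a control code. The abstract does not give the actual token format or category vocabulary; the following is a hypothetical sketch of control-code prefixing, with the `<method>`-style tokens and category names being assumptions for illustration only:

```python
# Hypothetical control codes for keyphrase functions; the paper's actual
# categories and token format are not specified in the abstract.
CONTROL_CODES = {"method": "<method>", "task": "<task>", "dataset": "<dataset>"}

def build_controlled_input(text: str, function: str) -> str:
    """Prepend a control-code token so a fine-tuned seq2seq model can be
    steered to emit only keyphrases of the requested category."""
    code = CONTROL_CODES[function]
    return f"{code} {text}"

# The same source text with a different code requests a different
# keyphrase category from the (fine-tuned) model.
src = build_controlled_input("We propose a Transformer-based framework ...", "method")
```
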
  18. Huber, W.: Menschen, Götter und Maschinen : eine Ethik der Digitalisierung (2022) 0.02
    Abstract
    Digitalization has hollowed out our privacy, split the public sphere into antagonistic partial publics, lowered inhibition thresholds, and blurred the boundary between truth and lie. Wolfgang Huber describes this technical and social development clearly and pointedly. He shows how consensus-capable ethical principles for dealing with digital intelligence can be found and put into practice by legislators, by digital providers, and by all users. Attitudes toward digitalization swing between euphoria and apocalypse: some expect the creation of a new human being who makes himself a god; others fear the loss of freedom and human dignity. Huber, by contrast, takes a realistic look at the technological upheaval. It begins with language: are the "social media" really social? Does a car equipped with digital intelligence drive "autonomously", or rather merely automated? Are algorithms that learn through pattern recognition therefore "intelligent"? Inflated language all too often lets us forget that even the most powerful computers are only machines, developed and operated by humans; if need be, their plug can be pulled. Written with wonderful clarity and fully abreast of current ethical debates, the book makes us aware that we need not surrender to digitalization but can shape it in a self-determined and responsible way. Wolfgang Huber's 80th birthday on 12.8.2022. A remedy against overly euphoric and apocalyptic expectations of digitalization. How we can change our attitude toward digitalization so as not to deliver ourselves up to technology.
    Content
    Preface -- 1. The digital age -- Turning point -- The dominance of print comes to an end -- When does the digital age begin? -- 2. Between euphoria and apocalypse -- Digitalization. Just. Do it -- Euphoria -- Apocalypse -- Ethics of responsibility -- The human being as the subject of ethics -- Responsibility as a principle -- 3. Digitalized everyday life in a globalized world -- From the World Wide Web to the Internet of Things -- Mobile internet and digital education -- Digital platforms and their strategies -- Big data and informational self-determination -- 4. Crossing boundaries -- The erosion of the private -- The deformation of the public -- The lowering of inhibition thresholds -- The disappearance of reality -- Truth in the infosphere -- 5. The future of work -- Industrial revolutions -- Work 4.0 -- Ethics 4.0 -- 6. Digital intelligence -- Can computers write poetry? -- Stronger than the human being? -- Machine learning -- A lasting difference -- Ethical principles for dealing with digital intelligence -- Medicine as an example -- 7. Human dignity in the digital age -- Affronts or revolutions -- Transhumanism and posthumanism -- Is there empathy without humans? -- Who is autonomous: human or machine? -- A humanism of responsibility -- 8. The future of Homo sapiens -- Deification of the human being -- Homo deus -- God and man in the digital age -- Transformation of humanity -- Bibliography -- Index of names.
  19. Michel, A.: Informationsdidaktik für verschiedene Wissenskulturen (2020) 0.02
    Abstract
    In recent months, Password Online has published a whole series of articles dealing with the topic of information literacy. Each had a different focus, but they were united by a rather critical perspective on the idea of a universal set of competencies that constitutes "information literacy". Particularly interesting, in the context of the current lively discourse on fake news, is that several authors explicitly emphasize social and emotional factors as relevant criteria for dealing with information. (With this text and the following contribution by Inka Tappenbeck, we want to take a closer look at "knowledge-cultural practice" as a further factor shaping what is to be understood as information literacy in different contexts.)
  20. Asubiaro, T.V.; Onaolapo, S.: ¬A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.02
    Abstract
    This is the first study that evaluated the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17.% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
    Object
    Web of Science
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.7, S.745-758
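The coverage percentages reported in result no. 20 follow directly from the stated counts (2,229 unique journals; per-source counts as given in the abstract). A small sketch checking them:

```python
# Counts reported in the abstract of result no. 20.
total_unique = 2229  # unique active African journals after deduplication
counts = {
    "Ulrich": 2117,
    "AJOL": 243,
    "Web of Science": 166,
    "Scopus": 174,
    "CrossRef": 1017,
}

def coverage(n: int, total: int = total_unique) -> float:
    """Share of the unique African journals covered, in percent (1 d.p.)."""
    return round(100 * n / total, 1)

shares = {src: coverage(n) for src, n in counts.items()}
# Reproduces the reported 95.0 / 10.9 / 7.4 / 7.8 / 45.6 percent figures.
```
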
