Search (14 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • type_ss:"el"
  1. Biselli, A.: Unter Generalverdacht durch Algorithmen (2014) 0.05
    0.046397857 = product of:
      0.092795715 = sum of:
        0.092795715 = product of:
          0.18559143 = sum of:
            0.18559143 = weight(_text_:news in 809) [ClassicSimilarity], result of:
              0.18559143 = score(doc=809,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.6949563 = fieldWeight in 809, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.09375 = fieldNorm(doc=809)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    http://www.golem.de/news/textanalyse-unter-generalverdacht-durch-algorithmen-1402-104637.html
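  The indented tree under each hit is Lucene's "explain" output for the relevance score, computed with ClassicSimilarity (classic TF-IDF). As a minimal sketch, assuming Lucene's documented ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), the following Python reproduces hit 1's score from the numbers in the tree above; the variable names are ours, only the inputs come from the explain output.

    import math

    # Inputs copied from the explain tree of hit 1 (term "news", doc 809).
    freq = 2.0               # termFreq of "news" in the matched field
    doc_freq = 635           # docFreq from the idf(...) line
    max_docs = 44218         # maxDocs from the idf(...) line
    query_norm = 0.05094824  # queryNorm
    field_norm = 0.09375     # fieldNorm(doc=809)

    tf = math.sqrt(freq)                           # 1.4142135 = tf(freq=2.0)
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # 5.2416887
    query_weight = idf * query_norm                # 0.26705483 = queryWeight
    field_weight = tf * idf * field_norm           # 0.6949563  = fieldWeight
    weight = query_weight * field_weight           # 0.18559143 = weight(_text_:news)
    score = weight * 0.5 * 0.5                     # two coord(1/2) factors
    print(f"{score:.9f}")                          # ~0.046397857, displayed rounded as 0.05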
  2. Janssen, J.-K.: ChatGPT-Klon läuft lokal auf jedem Rechner : Alpaca/LLaMA ausprobiert (2023) 0.04
    0.03866488 = product of:
      0.07732976 = sum of:
        0.07732976 = product of:
          0.15465952 = sum of:
            0.15465952 = weight(_text_:news in 927) [ClassicSimilarity], result of:
              0.15465952 = score(doc=927,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.57913023 = fieldWeight in 927, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.078125 = fieldNorm(doc=927)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.heise.de/news/c-t-3003-ChatGPT-Klon-laeuft-lokal-auf-jedem-Rechner-Alpaca-LLaMA-ausprobiert-8004159.html?view=print
  3. Hahn, S.: DarkBERT ist mit Daten aus dem Darknet trainiert : ChatGPTs dunkler Bruder? (2023) 0.04
    0.03866488 = product of:
      0.07732976 = sum of:
        0.07732976 = product of:
          0.15465952 = sum of:
            0.15465952 = weight(_text_:news in 979) [ClassicSimilarity], result of:
              0.15465952 = score(doc=979,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.57913023 = fieldWeight in 979, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.078125 = fieldNorm(doc=979)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.heise.de/news/DarkBERT-ist-mit-Daten-aus-dem-Darknet-trainiert-ChatGPTs-dunkler-Bruder-9060809.html?view=print
  4. Was ist GPT-3 und spricht das Modell Deutsch? (2022) 0.03
    0.030931905 = product of:
      0.06186381 = sum of:
        0.06186381 = product of:
          0.12372762 = sum of:
            0.12372762 = weight(_text_:news in 868) [ClassicSimilarity], result of:
              0.12372762 = score(doc=868,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.4633042 = fieldWeight in 868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    GPT-3 is a language-processing model from the American non-profit organization OpenAI. It uses deep learning to generate, summarize, simplify, or translate text. GPT-3 has made repeated headlines since the publication of a research paper: several newspapers and online publications tested its capabilities and published entire articles written by the AI model, among them The Guardian and Hacker News. Journalists around the globe have variously described it as a "language talent", "artificial general intelligence", or simply "eloquent". Reason enough to take a closer look at the capabilities of this artificial language prodigy.
  5. Bischoff, M.: Wie eine KI lernt, sich selbst zu erklären (2023) 0.03
    0.030931905 = product of:
      0.06186381 = sum of:
        0.06186381 = product of:
          0.12372762 = sum of:
            0.12372762 = weight(_text_:news in 956) [ClassicSimilarity], result of:
              0.12372762 = score(doc=956,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.4633042 = fieldWeight in 956, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=956)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.spektrum.de/news/sprachmodelle-auf-dem-weg-zu-einer-erklaerbaren-ki/2132727#Echobox=1682669561?utm_source=pocket-newtab-global-de-DE
  6. Holland, M.: Erstes wissenschaftliches Buch eines Algorithmus' veröffentlicht (2019) 0.03
    0.027065417 = product of:
      0.054130834 = sum of:
        0.054130834 = product of:
          0.10826167 = sum of:
            0.10826167 = weight(_text_:news in 5227) [ClassicSimilarity], result of:
              0.10826167 = score(doc=5227,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40539116 = fieldWeight in 5227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5227)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    Heise Online: News
  7. Weiß, E.-M.: ChatGPT soll es richten : Microsoft baut KI in Suchmaschine Bing ein (2023) 0.03
    0.027065417 = product of:
      0.054130834 = sum of:
        0.054130834 = product of:
          0.10826167 = sum of:
            0.10826167 = weight(_text_:news in 866) [ClassicSimilarity], result of:
              0.10826167 = score(doc=866,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40539116 = fieldWeight in 866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=866)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.heise.de/news/ChatGPT-soll-es-richten-Microsoft-baut-KI-in-Suchmaschine-Bing-ein-7447837.html
  8. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.020708349 = product of:
      0.041416697 = sum of:
        0.041416697 = product of:
          0.082833394 = sum of:
            0.082833394 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.082833394 = score(doc=4888,freq=2.0), product of:
                0.17841205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05094824 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
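  Hits 8 and 11-14 match the query term "22" (visible in their Date fields) instead of "news". The same sketch applies with doc_freq = 3622, which yields the lower idf of 3.5018296; with field_norm = 0.09375 it reproduces hit 8's weight of 0.082833394 and its final score of 0.020708349.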
  9. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.02
    0.015465952 = product of:
      0.030931905 = sum of:
        0.030931905 = product of:
          0.06186381 = sum of:
            0.06186381 = weight(_text_:news in 872) [ClassicSimilarity], result of:
              0.06186381 = score(doc=872,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.2316521 = fieldWeight in 872, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.03125 = fieldNorm(doc=872)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
  10. Donath, A.: Nutzungsverbote für ChatGPT (2023) 0.02
    0.015465952 = product of:
      0.030931905 = sum of:
        0.030931905 = product of:
          0.06186381 = sum of:
            0.06186381 = weight(_text_:news in 877) [ClassicSimilarity], result of:
              0.06186381 = score(doc=877,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.2316521 = fieldWeight in 877, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.03125 = fieldNorm(doc=877)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.golem.de/news/schule-und-wissenschaft-nutzungsverbote-gegen-chatgpt-ausgesprochen-2301-171004.html
  11. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.01
    0.013805566 = product of:
      0.027611133 = sum of:
        0.027611133 = product of:
          0.055222265 = sum of:
            0.055222265 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.055222265 = score(doc=1490,freq=2.0), product of:
                0.17841205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05094824 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:30:24
  12. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.01
    0.013805566 = product of:
      0.027611133 = sum of:
        0.027611133 = product of:
          0.055222265 = sum of:
            0.055222265 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
              0.055222265 = score(doc=835,freq=2.0), product of:
                0.17841205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05094824 = queryNorm
                0.30952093 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    29.12.2022 18:22:55
  13. Rieger, F.: Lügende Computer (2023) 0.01
    0.013805566 = product of:
      0.027611133 = sum of:
        0.027611133 = product of:
          0.055222265 = sum of:
            0.055222265 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
              0.055222265 = score(doc=912,freq=2.0), product of:
                0.17841205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05094824 = queryNorm
                0.30952093 = fieldWeight in 912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=912)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 3.2023 19:22:55
  14. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.01
    0.006902783 = product of:
      0.013805566 = sum of:
        0.013805566 = product of:
          0.027611133 = sum of:
            0.027611133 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.027611133 = score(doc=4217,freq=2.0), product of:
                0.17841205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05094824 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2018 11:32:44