Search (18 results, page 1 of 1)

  • Filter: year_i:[2020 TO 2030}
  • Filter: theme_ss:"Computerlinguistik"
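The two filters above are standard Solr facet constraints: `year_i:[2020 TO 2030}` is a range query with an inclusive lower and exclusive upper bound, and `theme_ss:"Computerlinguistik"` is an exact facet match. As a rough sketch of how such a filtered request could be assembled client-side (the host, core name `biblio`, and row count are illustrative assumptions, not taken from this page):

```python
from urllib.parse import urlencode

# Facet filters as shown on the result page: an open-ended year range
# ([ = inclusive lower bound, } = exclusive upper bound) and an exact
# match on the theme facet.
filters = [
    'year_i:[2020 TO 2030}',
    'theme_ss:"Computerlinguistik"',
]

def build_solr_query(base_url: str, q: str, fq: list[str]) -> str:
    """Assemble a Solr /select URL with one fq parameter per filter."""
    params = [("q", q), ("rows", 20)] + [("fq", f) for f in fq]
    return base_url + "?" + urlencode(params)

url = build_solr_query("http://localhost:8983/solr/biblio/select", "*:*", filters)
print(url)
```

Each filter travels as its own `fq` parameter, so Solr can cache the two constraints independently of the main query.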
  1. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.18
    Source
    https://arxiv.org/abs/2212.06721
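The relevance figure after each heading is a Lucene ClassicSimilarity (TF-IDF) score, built from term frequency, inverse document frequency, and normalization factors. As a sketch, the per-term contribution reported for entry 1 in the engine's debug output (freq = 2.0, idf = 8.478011, fieldNorm = 0.046875, queryNorm = 0.028611459) can be reproduced like this:

```python
import math

def classic_similarity_term_score(freq: float, idf: float,
                                  field_norm: float, query_norm: float) -> float:
    """Per-term score in Lucene's ClassicSimilarity (TF-IDF):
    tf(freq) * idf * fieldNorm gives the field weight, which is then
    scaled by the query weight (idf * queryNorm)."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Values reported for the first entry's matching term (freq = 2.0):
score = classic_similarity_term_score(2.0, 8.478011, 0.046875, 0.028611459)
print(score)  # close to the reported 0.13632774 (Lucene rounds to float32)
```

Entry 1's total of 0.18177032 then follows from summing its four matching clauses (0.04544258 + 3 × 0.13632774 = 0.4544258) and applying the coordination factor coord(4/10) = 0.4.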
  2. Räwel, J.: Automatisierte Kommunikation (2023) 0.01
    Content
    In the social sciences there are two fundamentally different conceptions of what communication is. "Action-theoretical" notions of communication, which follow everyday understanding and are therefore dominant in the social sciences as well, assume that communication is instrumental in character: it is human beings, in their physical-psychological compactness, who exchange information by means of communication, whether spoken or written. On this view, communicators are understood reciprocally as senders and receivers of information, and communication serves the more or less successful transfer of information from person to person. Paradigmatically distinct from this are "systems-theoretical" conceptions of communication, as proposed above all by the sociologist Niklas Luhmann, who died in 1998. According to this paradigm, communication is characterized by a "life of its own": it exhibits a recursive dynamic that limits the ability of those communicating to steer or influence it. On this conception, individual consciousness - each with its own dynamic of thought - lies in the environment of communication systems and can merely irritate them by means of language, but cannot determine or control them; not least because a communication system, for instance a conversation as an "interaction system", involves at least two conscious systems, each with its own distinct dynamic of thought.
    Source
    https://www.telepolis.de/features/Automatisierte-Kommunikation-7520683.html?seite=all
  3. Weßels, D.: ChatGPT - ein Meilenstein der KI-Entwicklung (2022) 0.00
    Content
    "Since November 30, 2022, my world - and that of many education experts - has felt like a different world, one leading us into a 'new era' that we do not yet know whether to love or fear. ChatGPT, the offshoot and prototype of OpenAI's GPT-3, currently the leading generative AI language model (at least in the Western world), was released on November 30 and has been freely accessible to everyone, at no cost, ever since. What initially seemed an unspectacular announcement by OpenAI - offering the GPT-3 language model, already available since 2020, in a slightly modified version (GPT-3.5) as a chat variant for real-time communication - has proven in practice, from the users' perspective, to be a milestone in AI development. The fact is that the range and power of ChatGPT's capabilities surprised even IT experts and prompted a flood of superlatives in their assessments, though always combined with caveats about the lack of factual accuracy and reliability of such generative AI models. With WebGPT, however, OpenAI already has a research prototype whose integrated web-search function could eliminate the 'hallucinations' of current GPT variants. For the education sector, the question is how teaching and learning at universities (and not only there) will change once such AI tools are omnipresent and can be used for far more than producing a term paper 'at the push of a button'. Equally impressive is ChatGPT's breadth across subject areas; see the tweet by @davidtsong, who put ChatGPT through the SAT college admission test."
  4. Barthel, J.; Ciesielski, R.: Regeln zu ChatGPT an Unis oft unklar : KI in der Bildung (2023) 0.00
    Date
    29. 3.2023 13:23:26
    29. 3.2023 13:29:19
  5. ¬Der Student aus dem Computer (2023) 0.00
    Date
    27. 1.2023 16:22:55
  6. Müller, P.: Text-Automat mit Tücken (2023) 0.00
    Source
    Pirmasenser Zeitung. Nr. 29 vom 03.02.2023, S.2
  7. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.00
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
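As the abstract notes, tasks and few-shot demonstrations are specified "purely via text interaction with the model", with no gradient updates. A minimal sketch of what assembling such a few-shot prompt looks like (the translation task, labels, and formatting are illustrative, not taken from the paper):

```python
def build_few_shot_prompt(instruction: str, examples: list[tuple[str, str]],
                          query: str) -> str:
    """Format a few-shot prompt: an instruction, K solved demonstrations,
    and a new input left open for the model to complete - the task is
    conveyed entirely through text, with no fine-tuning."""
    lines = [instruction, ""]
    for source, target in examples:
        lines.append(f"English: {source}")
        lines.append(f"German: {target}")
        lines.append("")
    lines.append(f"English: {query}")
    lines.append("German:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to German.",
    [("cheese", "Käse"), ("house", "Haus")],
    "language",
)
print(prompt)
```

The zero-shot, one-shot, and few-shot settings studied in the paper differ only in how many solved demonstrations appear before the final open-ended line.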
  8. Bischoff, M.: Was steckt hinter ChatGPT & Co? (2023) 0.00
    Date
    12. 4.2023 20:29:54
  9. Morris, V.: Automated language identification of bibliographic resources (2020) 0.00
    Date
    2. 3.2020 19:04:22
  10. Bager, J.: ¬Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.00
    Date
    29.12.2022 18:22:55
  11. Rieger, F.: Lügende Computer (2023) 0.00
    Date
    16. 3.2023 19:22:55
  12. Thomas, I.S.; Wang, J.; GPT-3: Was euch zu Menschen macht : Antworten einer künstlichen Intelligenz auf die großen Fragen des Lebens (2022) 0.00
    Date
    7. 1.2023 18:41:29
  13. Albrecht, I.: GPT-3: die Zukunft studentischer Hausarbeiten oder eine Bedrohung der wissenschaftlichen Integrität? (2023) 0.00
    Date
    28. 1.2022 11:05:29
  14. Lund, B.D.; Wang, T.; Mannuru, N.R.; Nie, B.; Shimray, S.; Wang, Z.: ChatGPT and a new academic reality : artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing (2023) 0.00
    Date
    19. 4.2023 19:29:44
  15. Pepper, S.; Arnaud, P.J.L.: Absolutely PHAB : toward a general model of associative relations (2020) 0.00
    Abstract
    There have been many attempts at classifying the semantic modification relations (R) of N + N compounds, but this work has not led to the acceptance of a definitive scheme, so that devising a reusable classification is a worthwhile aim. The scope of this undertaking is extended to other binominal lexemes, i.e. units that contain two thing-morphemes without explicitly stating R, like prepositional units, N + relational adjective units, etc. The 25-relation taxonomy of Bourque (2014) was tested against over 15,000 binominal lexemes from 106 languages and extended to a 29-relation scheme ("Bourque2") through the introduction of two new reversible relations. Bourque2 is then mapped onto Hatcher's (1960) four-relation scheme (extended by the addition of a fifth relation, similarity, as "Hatcher2"). This results in a two-tier system usable at different degrees of granularity. On account of its semantic proximity to compounding, metonymy is then taken into account, following Janda's (2011) suggestion that it plays a role in word formation; Peirsman and Geeraerts' (2006) inventory of 23 metonymic patterns is mapped onto Bourque2, confirming the identity of metonymic and binominal modification relations. Finally, Blank's (2003) and Koch's (2001) work on lexical semantics justifies the addition to the scheme of a third, superordinate level, which comprises the three Aristotelian principles of similarity, contiguity and contrast.
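The abstract describes a layered taxonomy: fine-grained relations (Bourque2) map onto coarse ones (Hatcher2), which in turn map onto three superordinate principles. A sketch of such a tiered lookup, where the fine- and coarse-level relation names are hypothetical placeholders (the published inventories of Bourque and Hatcher are not reproduced here); only the three principles named in the abstract are taken from the text:

```python
# Hypothetical placeholder names for the fine -> coarse mapping.
BOURQUE2_TO_HATCHER2 = {
    "PART":        "CONTAINMENT",
    "LOCATION":    "CONTAINMENT",
    "RESEMBLANCE": "SIMILARITY",
}
# Third, superordinate level: the three Aristotelian principles
# (similarity, contiguity, contrast) named in the abstract.
HATCHER2_TO_PRINCIPLE = {
    "CONTAINMENT": "contiguity",
    "SIMILARITY":  "similarity",
}

def classify(relation: str, tier: str) -> str:
    """Resolve a fine-grained relation at the requested granularity."""
    if tier == "fine":
        return relation
    coarse = BOURQUE2_TO_HATCHER2[relation]
    if tier == "coarse":
        return coarse
    return HATCHER2_TO_PRINCIPLE[coarse]  # superordinate tier

print(classify("PART", "superordinate"))  # contiguity
```

The point of the two-tier design is exactly this kind of lookup: the same annotated relation can be reported at whichever granularity a given analysis needs.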
  16. Laparra, E.; Binford-Walsh, A.; Emerson, K.; Miller, M.L.; López-Hoffman, L.; Currim, F.; Bethard, S.: Addressing structural hurdles for metadata extraction from environmental impact statements (2023) 0.00
    Date
    29. 8.2023 19:21:01
  17. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.00
    Date
    23.11.2023 19:07:22
  18. Donath, A.: Nutzungsverbote für ChatGPT (2023) 0.00
    Content
    Billion-dollar valuation for ChatGPT: OpenAI, which operates the ChatGPT chatbot, is in talks about a share sale, according to a report in the Wall Street Journal. The WSJ reported that the possible sale would raise OpenAI's valuation to 29 billion US dollars.
    Concerns in Brandenburg as well: Erik Stohn, an SPD member of the Brandenburg state parliament, used ChatGPT to submit a formal parliamentary question asking how the state government ensures that students are assessed and graded fairly in the case of machine-generated texts. He also asked what measures had been taken to ensure that machine-generated texts could not be used fraudulently by students in the assessment of coursework.