-
Der Student aus dem Computer (2023)
- Date
- 27.1.2023 16:22:55
-
Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023)
- Date
- 29.12.2022 18:22:55
-
Rieger, F.: Lügende Computer (2023)
- Date
- 16.3.2023 19:22:55
-
Was ist GPT-3 und spricht das Modell Deutsch? (2022)
- Abstract
- GPT-3 is a language-processing model from the American non-profit organization OpenAI. It uses deep learning to generate, summarize, simplify, or translate texts. GPT-3 has repeatedly made headlines since the publication of a research paper. Several newspapers and online publications tested its capabilities and published entire articles written by the AI model, among them The Guardian and Hacker News. Journalists around the globe have variously described it as a "language talent", "artificial general intelligence", or simply "eloquent". Reason enough to take a closer look at the capabilities of this artificial language prodigy.