Search (30 results, page 1 of 2)

  • × theme_ss:"Computerlinguistik"
  • × type_ss:"el"
  1. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.03
    0.027007826 = product of:
      0.05401565 = sum of:
        0.05401565 = product of:
          0.08102348 = sum of:
            0.007123646 = weight(_text_:s in 4888) [ClassicSimilarity], result of:
              0.007123646 = score(doc=4888,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.14414869 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
            0.07389983 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.07389983 = score(doc=4888,freq=2.0), product of:
                0.15917034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04545348 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
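The indented block above is Lucene's explain output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, assuming Lucene's documented formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), the weight of the `_text_:s` term in result 1 can be reproduced like this:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
    tf = math.sqrt(freq)
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm
    field_weight = tf * i * field_norm
    return query_weight * field_weight

# Reproduce weight(_text_:s in 4888) from result 1 above
w_s = term_score(freq=2.0, doc_freq=40523, max_docs=44218,
                 query_norm=0.04545348, field_norm=0.09375)
```

The same helper reproduces the `_text_:22` term (docFreq=3622), which dominates the summed score because rarer terms receive a much higher idf.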
  2. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.02
    
    Date
    29.12.2022 18:22:55
    Source
    c't. 2023, H.1, S.46- [https://www.heise.de/select/ct/2023/1/2233908274346530870]
  3. Roose, K.: The brilliance and weirdness of ChatGPT (2022) 0.01
    
  4. Janssen, J.-K.: ChatGPT-Klon läuft lokal auf jedem Rechner : Alpaca/LLaMA ausprobiert (2023) 0.01
    
  5. Baierer, K.; Zumstein, P.: Verbesserung der OCR in digitalen Sammlungen von Bibliotheken (2016) 0.01
    
  6. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.01
    
    Date
    22. 3.2015 9:30:24
  7. Rieger, F.: Lügende Computer (2023) 0.01
    
    Date
    16. 3.2023 19:22:55
  8. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.: Improving language understanding by Generative Pre-Training 0.01
    
  9. RWI/PH: Auf der Suche nach dem entscheidenden Wort : die Häufung bestimmter Wörter innerhalb eines Textes macht diese zu Schlüsselwörtern (2012) 0.00
    
    Content
    "The Dresden researchers investigated the semantic properties of texts mathematically by encoding ten different English texts in various forms, among them the English edition of Leo Tolstoy's "War and Peace". For example, the researchers translated the letters of a text into a binary sequence, replacing every vowel with a one and every consonant with a zero. With the help of further mathematical functions, the researchers then examined different levels of the text, i.e. individual vowels and letters as well as whole words, each encoded in different forms. Recurring patterns can thus be found across the entire text. This relationship within a text is called long-range correlation. It indicates whether two letters at arbitrarily distant positions in the text are connected - for example, if we find the letter "K" at one position, there is a measurably higher probability of finding the letter "K" again a few pages later. "It is to be expected that if a book deals with war at one point, the probability of finding the word 'war' a few pages later is high. What is surprising is that we also find this high probability at the level of letters," says Altmann.
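The vowel/consonant binary encoding and the long-range correlation described above can be sketched as follows; the text snippet and the lag value are illustrative placeholders, not from the study:

```python
def vowel_binary(text):
    # Encode each letter as 1 (vowel) or 0 (consonant), ignoring non-letters
    vowels = set("aeiou")
    return [1 if c in vowels else 0 for c in text.lower() if c.isalpha()]

def autocorrelation(seq, lag):
    # Correlation between the sequence and itself shifted by `lag`:
    # a value near 1 at large lags indicates long-range correlation
    n = len(seq) - lag
    mean = sum(seq) / len(seq)
    var = sum((x - mean) ** 2 for x in seq) / len(seq)
    cov = sum((seq[i] - mean) * (seq[i + lag] - mean) for i in range(n)) / n
    return cov / var

bits = vowel_binary("Well, Prince, so Genoa and Lucca are now just family estates")
r = autocorrelation(bits, lag=5)
```

A perfectly alternating vowel/consonant sequence would give an autocorrelation of 1.0 at even lags; natural text yields weaker but measurably non-zero values.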
  10. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    
    Date
    22. 1.2018 11:32:44
  11. Snajder, J.: Distributional semantics of multi-word expressions (2013) 0.00
    
    Content
    Slides from a presentation at the COST Action IC1207 PARSEME Meeting, Warsaw, September 16, 2013. See also: Snajder, J., P. Almic: Modeling semantic compositionality of Croatian multiword expressions. In: Informatica. 39(2015) H.3, S.301-309.
  12. Lutz-Westphal, B.: ChatGPT und der "Faktor Mensch" im schulischen Mathematikunterricht (2023) 0.00
    
    Source
    Mitteilungen der Deutschen Mathematiker-Vereinigung. 2023, H.1, S.19-21
  13. Hahn, S.: DarkBERT ist mit Daten aus dem Darknet trainiert : ChatGPTs dunkler Bruder? (2023) 0.00
    
  14. Weßels, D.: ChatGPT - ein Meilenstein der KI-Entwicklung (2023) 0.00
    
    Source
    Mitteilungen der Deutschen Mathematiker-Vereinigung. 2023, H.1, S.17-19
  15. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.00
    
    Abstract
    In this work, we present the results of research for evaluating a methodology for extracting mathematical terms for precalculus using the techniques for semantically-oriented statistical search. We use the corpus-based approach and the combination of different statistically-based techniques for extracting keywords, collocations and co-occurrences incorporated in the Sketch Engine software. We evaluate the collocations candidate terms for the basic concept function(s) and approve the related methodology by precalculus domain conceptual terms definitions. Finally, we offer a conceptual terms hierarchical representation and discuss the results with respect to their possible applications.
    Source
    Procedia Technology. 1(2012), S.464-468
  16. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.00
    
  17. Zadeh, B.Q.; Handschuh, S.: The ACL RD-TEC : a dataset for benchmarking terminology extraction and classification in computational linguistics (2014) 0.00
    
    Pages
    S.52-63
  18. Kiela, D.; Clark, S.: Detecting compositionality of multi-word expressions using nearest neighbours in vector space models (2013) 0.00
    
  19. Nielsen, R.D.; Ward, W.; Martin, J.H.; Palmer, M.: Extracting a representation from text for semantic analysis (2008) 0.00
    
    Pages
    S.241-244
  20. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.00
    
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.