Search (6 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • type_ss:"el"
  1. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.03
    0.026601069 = product of:
      0.106404275 = sum of:
        0.106404275 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
          0.106404275 = score(doc=4888,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.46428138 = fieldWeight in 4888, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=4888)
      0.25 = coord(1/4)
    
    Date
    1. 3.2013 14:56:22
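    The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: the score is queryWeight (idf × queryNorm) times fieldWeight (sqrt(tf) × idf × fieldNorm), scaled by the coord factor. A minimal sketch reproducing the numbers reported for this first hit (term "22" in doc 4888); all constants are copied from the explain output above:

    ```python
    import math

    # Values reported in the explain tree for result 1 (term "22", doc 4888)
    idf = 3.5018296          # idf(docFreq=3622, maxDocs=44218)
    query_norm = 0.06544595  # queryNorm
    freq = 2.0               # termFreq
    field_norm = 0.09375     # fieldNorm(doc=4888)
    coord = 0.25             # coord(1/4): 1 of 4 query terms matched

    tf = math.sqrt(freq)                  # 1.4142135 = sqrt(termFreq)
    query_weight = idf * query_norm       # 0.22918057
    field_weight = tf * idf * field_norm  # 0.46428138
    score = query_weight * field_weight * coord
    print(score)  # ≈ 0.0266, displayed above rounded to 0.03
    ```

    The same arithmetic, with different fieldNorm and idf values, accounts for every explain tree in this result list.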
  2. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.02
    0.01787368 = product of:
      0.07149472 = sum of:
        0.07149472 = weight(_text_:objects in 923) [ClassicSimilarity], result of:
          0.07149472 = score(doc=923,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.20553327 = fieldWeight in 923, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=923)
      0.25 = coord(1/4)
    
    Abstract
    They might appear intelligent, but LLMs are nothing of the sort. They don't understand the meanings of the words they are using, nor the concepts expressed within the sentences they create. When asked how to bring a cow back to life, earlier versions of ChatGPT, for example, which ran on a souped-up version of GPT-3, would confidently provide a list of instructions. So-called hallucinations like this happen because language models have no concept of what a "cow" is or that "death" is a non-reversible state of being. LLMs do not have minds that can think about objects in the world and how they relate to each other. All they "know" is how likely it is that some sets of words will follow other sets of words, having calculated those probabilities from their training data. To make sense of all this, I spoke with Gary Marcus, an emeritus professor of psychology and neural science at New York University, for "Babbage", our science and technology podcast. Last year, as the world was transfixed by the sudden appearance of ChatGPT, he made some fascinating predictions about GPT-4.
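    The abstract's point, that a language model only tracks how likely words are to follow other words, can be illustrated with a toy bigram counter. This is purely an illustrative sketch (the training sentence and function names are invented here), not how GPT-class models are implemented:

    ```python
    from collections import Counter, defaultdict

    # Tiny "training corpus": the model will know only co-occurrence
    # counts from this text, nothing about what a cow actually is.
    training = "the cow eats grass . the cow gives milk . the dog eats meat".split()

    # Count, for each word, which words follow it and how often.
    follow = defaultdict(Counter)
    for prev, nxt in zip(training, training[1:]):
        follow[prev][nxt] += 1

    def most_likely_next(word):
        """Return the most frequent continuation seen in training, or None."""
        counts = follow[word]
        return counts.most_common(1)[0][0] if counts else None

    print(most_likely_next("the"))  # "cow" (seen twice vs. "dog" once)
    ```

    The model happily continues any prompt from its counts; it has no representation of objects or of states like death, which is the mechanism behind the hallucinations the abstract describes.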
  3. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.02
    0.017734047 = product of:
      0.07093619 = sum of:
        0.07093619 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
          0.07093619 = score(doc=1490,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.30952093 = fieldWeight in 1490, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=1490)
      0.25 = coord(1/4)
    
    Date
    22. 3.2015 9:30:24
  4. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.02
    0.017734047 = product of:
      0.07093619 = sum of:
        0.07093619 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
          0.07093619 = score(doc=835,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.30952093 = fieldWeight in 835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=835)
      0.25 = coord(1/4)
    
    Date
    29.12.2022 18:22:55
  5. Rieger, F.: Lügende Computer (2023) 0.02
    0.017734047 = product of:
      0.07093619 = sum of:
        0.07093619 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
          0.07093619 = score(doc=912,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.30952093 = fieldWeight in 912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=912)
      0.25 = coord(1/4)
    
    Date
    16. 3.2023 19:22:55
  6. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.01
    0.0088670235 = product of:
      0.035468094 = sum of:
        0.035468094 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
          0.035468094 = score(doc=4217,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.15476047 = fieldWeight in 4217, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=4217)
      0.25 = coord(1/4)
    
    Date
    22. 1.2018 11:32:44
