Search (1 result, page 1 of 1)

  • author_ss:"Jha, A."
  • theme_ss:"Computerlinguistik"
  • type_ss:"el"
  1. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.00
    
    Abstract
    "I still don't know what to think about GPT-4, the new large language model (LLM) from OpenAI. On the one hand it is a remarkable product that easily passes the Turing test. If you ask it questions, via the ChatGPT interface, GPT-4 can easily produce fluid sentences largely indistinguishable from those a person might write. But on the other hand, amid the exceptional levels of hype and anticipation, it's hard to know where GPT-4 and other LLMs truly fit in the larger project of making machines intelligent.