Search (1 results, page 1 of 1)

  • author_ss:"Jha, A."
  • theme_ss:"Computerlinguistik"
  • year_i:[2020 TO 2030}
  1. Jha, A.: Why GPT-4 isn't all it's cracked up to be (2023) 0.01
    
    Abstract
    "I still don't know what to think about GPT-4, the new large language model (LLM) from OpenAI. On the one hand it is a remarkable product that easily passes the Turing test. If you ask it questions, via the ChatGPT interface, GPT-4 can easily produce fluid sentences largely indistinguishable from those a person might write. But on the other hand, amid the exceptional levels of hype and anticipation, it's hard to know where GPT-4 and other LLMs truly fit in the larger project of making machines intelligent.
    They might appear intelligent, but LLMs are nothing of the sort. They don't understand the meanings of the words they are using, nor the concepts expressed within the sentences they create. When asked how to bring a cow back to life, earlier versions of ChatGPT, for example, which ran on a souped-up version of GPT-3, would confidently provide a list of instructions. So-called hallucinations like this happen because language models have no concept of what a "cow" is or that "death" is a non-reversible state of being. LLMs do not have minds that can think about objects in the world and how they relate to each other. All they "know" is how likely it is that some sets of words will follow other sets of words, having calculated those probabilities from their training data. To make sense of all this, I spoke with Gary Marcus, an emeritus professor of psychology and neural science at New York University, for "Babbage", our science and technology podcast. Last year, as the world was transfixed by the sudden appearance of ChatGPT, he made some fascinating predictions about GPT-4.
     People use symbols to think about the world: if I say the words "cat", "house" or "aeroplane", you know instantly what I mean. Symbols can also be used to describe the way things are behaving (running, falling, flying) or they can represent how things should behave in relation to each other (a "+" means add the numbers before and after). Symbolic AI is a way to embed this human knowledge and reasoning into computer systems. Though the idea has been around for decades, it fell by the wayside a few years ago as deep learning, buoyed by the sudden easy availability of lots of training data and cheap computing power, became more fashionable. In the near future at least, there's no doubt people will find LLMs useful. But whether they represent a critical step on the path towards AGI, or rather just an intriguing detour, remains to be seen."
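
     The abstract's claim that an LLM only "knows" how likely some sets of words are to follow other sets of words can be illustrated with a minimal sketch (my own illustration, not code from the article): a toy bigram model that estimates next-word probabilities purely from co-occurrence counts in a tiny, made-up training text.

     # Toy next-word predictor: counts how often each word follows another in its
     # training text and turns the counts into probabilities. Like an LLM at vastly
     # larger scale, it has no notion of what the words mean.
     from collections import Counter, defaultdict

     training_text = (
         "the cow eats grass . the cow sleeps in the barn . "
         "the dead cow cannot be brought back to life ."
     )

     follow_counts = defaultdict(Counter)
     tokens = training_text.split()
     for prev, nxt in zip(tokens, tokens[1:]):
         follow_counts[prev][nxt] += 1

     def next_word_distribution(prev_word):
         """Return P(next word | prev_word) estimated from the counts alone."""
         counts = follow_counts[prev_word]
         total = sum(counts.values())
         return {word: count / total for word, count in counts.items()}

     print(next_word_distribution("cow"))
     # {'eats': 0.33..., 'sleeps': 0.33..., 'cannot': 0.33...} -- plausible continuations,
     # produced with no concept of what a cow is or that death is irreversible.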
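
     By contrast, the symbolic-AI approach the author describes can be sketched as hand-written rules in which a symbol such as "+" carries an explicit, encoded meaning (again a hypothetical illustration, not taken from the article):

     # Symbolic sketch: the "+" symbol is bound to an explicit rule (add the numbers
     # before and after it), so the answer follows from encoded knowledge rather than
     # from how often a phrase appeared in any training data.
     def evaluate(expression):
         """Evaluate a simple 'a <op> b' expression by applying the rule the symbol stands for."""
         left, symbol, right = expression.split()
         rules = {"+": lambda a, b: a + b, "-": lambda a, b: a - b}
         return rules[symbol](float(left), float(right))

     print(evaluate("2 + 3"))  # 5.0
     print(evaluate("7 - 4"))  # 3.0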