Search (1 result, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  • type_ss:"el"
  • year_i:[2020 TO 2030}
  1. ChatGPT : Optimizing language models for dialogue (2022) 0.02
    0.017223522 = product of:
      0.06889409 = sum of:
        0.06889409 = product of:
          0.13778818 = sum of:
            0.13778818 = weight(_text_:instruction in 836) [ClassicSimilarity], result of:
              0.13778818 = score(doc=836,freq=2.0), product of:
                0.26266864 = queryWeight, product of:
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.04425879 = queryNorm
                0.52457035 = fieldWeight in 836, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.934836 = idf(docFreq=317, maxDocs=44218)
                  0.0625 = fieldNorm(doc=836)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
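The explain tree above can be recomputed directly. A minimal sketch, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = ln(maxDocs/(docFreq+1)) + 1) and taking queryNorm, fieldNorm, and the coord factors as given from the explain output:

```python
import math

# Constants copied from the explain tree above.
doc_freq, max_docs = 317, 44218
freq = 2.0
field_norm = 0.0625      # fieldNorm(doc=836), as reported
query_norm = 0.04425879  # queryNorm, as reported

# ClassicSimilarity components (assumed formulas):
idf = math.log(max_docs / (doc_freq + 1)) + 1  # ~ 5.934836
tf = math.sqrt(freq)                           # ~ 1.4142135

query_weight = idf * query_norm                # ~ 0.26266864
field_weight = tf * idf * field_norm           # ~ 0.52457035 (fieldWeight)
term_score = query_weight * field_weight       # ~ 0.13778818

# coord penalties: 1 of 2 clauses matched, then 1 of 4.
score = term_score * 0.5 * 0.25                # ~ 0.017223522
print(score)
```

Each intermediate value matches the corresponding node in the explain output, so the final document score of roughly 0.0172 (displayed as 0.02) follows from the single matching term `instruction`.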
    
    Abstract
    We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.