Search (1 result, page 1 of 1)

  • author_ss:"Ciolino, M."
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.07
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges - summary and question answering - prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
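
    Note (not part of the record): the abstract's detection step - scoring text against the "OpenAI GPT-2 Output Detector from 2019" - can be illustrated with a minimal Python sketch. It assumes the Hugging Face transformers library and the publicly hosted roberta-base-openai-detector checkpoint, which is the RoBERTa-based detector commonly identified with that 2019 release; the paper's own thresholds, text segmentation, and scoring procedure are not given here.

        # Minimal sketch: classify a passage as human- or machine-written with the
        # RoBERTa-based GPT-2 output detector (checkpoint name is an assumption).
        # Requires: pip install torch transformers
        import torch
        from transformers import AutoModelForSequenceClassification, AutoTokenizer

        checkpoint = "roberta-base-openai-detector"
        tokenizer = AutoTokenizer.from_pretrained(checkpoint)
        model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

        text = "Example passage to score, e.g. a ChatGPT-generated summary."
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

        # Report each label's probability as defined in the model config
        # (for this checkpoint the labels are expected to be "Real" vs. "Fake").
        for idx, label in model.config.id2label.items():
            print(f"{label}: {probs[idx].item():.3f}")

    Read this way, generated text counts as "undetectable" in the abstract's sense when the detector nonetheless assigns it a high human-written ("Real") probability.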