Search (3 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  • type_ss:"el"
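  The facets above correspond to Solr filter queries (the _ss suffix is Solr's convention for multi-valued string fields). Below is a minimal sketch of the request a page like this typically issues; the host, core name, and query string are assumptions, while q, fq, rows, and debugQuery are standard Solr parameters, and debugQuery=true is what produces the per-result score explanations shown with each hit.

    import requests

    # Hypothetical reconstruction of the search request; the host, core name,
    # and query string are placeholders, not values shown on this page.
    params = {
        "q": "<original query, not shown here>",
        "fq": ['language_ss:"e"',              # one fq clause per active filter
               'theme_ss:"Computerlinguistik"',
               'type_ss:"el"'],
        "debugQuery": "true",                  # emits the explain trees below
        "rows": 10,
    }
    resp = requests.get("http://localhost:8983/solr/lit/select", params=params)
    print(resp.json()["debug"]["explain"])     # per-document score breakdowns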
  1. Zhai, X.: ChatGPT user experience : implications for education (2022) 0.03
    0.030214114 = product of:
      0.06042823 = sum of:
        0.06042823 = product of:
          0.12085646 = sum of:
            0.12085646 = weight(_text_:assessment in 849) [ClassicSimilarity], result of:
              0.12085646 = score(doc=849,freq=4.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.43132967 = fieldWeight in 849, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=849)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    ChatGPT, a general-purpose conversation chatbot released on November 30, 2022, by OpenAI, is expected to impact every aspect of society. However, the potential impacts of this NLP tool on education remain unknown. Such impact could be enormous, as ChatGPT's capabilities may drive changes to educational learning goals, learning activities, and assessment and evaluation practices. This study was conducted by piloting ChatGPT to write an academic paper, titled "Artificial Intelligence for Education" (see Appendix A). The piloting result suggests that ChatGPT is able to help researchers write a paper that is coherent, (partially) accurate, informative, and systematic. The writing is extremely efficient (2-3 hours) and involves very limited professional knowledge from the author. Drawing upon the user experience, I reflect on the potential impacts of ChatGPT, as well as similar AI tools, on education. The paper concludes by suggesting adjusted learning goals: students should be able to use AI tools to conduct subject-domain tasks, and education should focus on improving students' creativity and critical thinking rather than general skills. To accomplish these learning goals, researchers should design AI-involved learning tasks to engage students in solving real-world problems. ChatGPT also raises the concern that students may outsource assessment tasks; new formats of assessment are therefore needed, focused on the creativity and critical thinking that AI cannot substitute.
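    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal Python sketch that reproduces the 0.030214114 score from the quoted statistics; the formulas are Lucene's, the variable names are mine.

      import math

      # Reconstructing the score for doc 849 from the explain tree above.
      freq = 4.0                       # occurrences of "assessment" in the field
      doc_freq, max_docs = 480, 44218  # collection statistics
      query_norm = 0.050750602         # query-level normalization constant
      field_norm = 0.0390625           # stored length norm for the field

      tf = math.sqrt(freq)                               # 2.0
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 5.52102
      query_weight = idf * query_norm                    # 0.2801951
      field_weight = tf * idf * field_norm               # 0.43132967
      raw_score = query_weight * field_weight            # 0.12085646

      # coord(1/2) applies twice: only one of two query clauses matched.
      print(raw_score * 0.5 * 0.5)                       # 0.030214114

    The explain trees for results 2 and 3 below follow the same arithmetic; only freq (hence tf = sqrt(freq)), the idf statistics, and the fieldNorm differ.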
  2. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.: Improving language understanding by generative pre-training (2018) 0.03
    0.025637524 = product of:
      0.05127505 = sum of:
        0.05127505 = product of:
          0.1025501 = sum of:
            0.1025501 = weight(_text_:assessment in 870) [ClassicSimilarity], result of:
              0.1025501 = score(doc=870,freq=2.0), product of:
                0.2801951 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.050750602 = queryNorm
                0.36599535 = fieldWeight in 870, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=870)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
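    The "task-aware input transformations" mentioned in the abstract convert structured inputs (sentence pairs, multiple-choice questions) into single token sequences the pre-trained model can consume. A minimal sketch, assuming illustrative <start>/<delim>/<extract> marker strings in place of the learned special tokens the paper actually uses:

      # Sketch of GPT-1-style input transformations; the marker strings are
      # illustrative stand-ins for learned special-token embeddings.

      def entailment_input(premise: str, hypothesis: str) -> str:
          # Textual entailment (e.g. MultiNLI): join the sentence pair with a
          # delimiter so the language model reads it as one sequence.
          return f"<start> {premise} <delim> {hypothesis} <extract>"

      def similarity_inputs(a: str, b: str) -> list[str]:
          # Semantic similarity has no inherent order, so both orderings are
          # processed and their final representations combined downstream.
          return [f"<start> {a} <delim> {b} <extract>",
                  f"<start> {b} <delim> {a} <extract>"]

      def multiple_choice_inputs(context: str, answers: list[str]) -> list[str]:
          # QA / commonsense reasoning (e.g. RACE, Story Cloze): one sequence
          # per candidate; a linear layer over each <extract> state scores them.
          return [f"<start> {context} <delim> {ans} <extract>" for ans in answers]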
  3. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.020628018 = product of:
      0.041256037 = sum of:
        0.041256037 = product of:
          0.08251207 = sum of:
            0.08251207 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.08251207 = score(doc=4888,freq=2.0), product of:
                0.17771997 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050750602 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
