Search (7 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • year_i:[2020 TO 2030}
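  The two facets above correspond to Solr filter queries; in year_i:[2020 TO 2030} the square bracket makes the lower bound inclusive and the curly brace makes the upper bound exclusive. A minimal Python sketch of the kind of request that would reproduce this result page follows; only the two fq values are taken from the facets, while the host, core name, row count and the use of debugQuery (which yields the per-hit score explanations shown under each result) are assumptions.

      import requests

      # Sketch of the Solr request behind this result page (host and core are placeholders).
      params = {
          "q": "*:*",
          "fq": ['theme_ss:"Computerlinguistik"',   # subject facet
                 "year_i:[2020 TO 2030}"],          # [ = inclusive, } = exclusive bound
          "rows": 10,
          "debugQuery": "true",                     # adds the per-hit explain trees
      }
      resp = requests.get("http://localhost:8983/solr/catalogue/select", params=params)
      print(resp.json()["response"]["numFound"])    # expected: 7, per the header above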
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.05
    0.048941944 = product of:
      0.24470972 = sum of:
        0.24470972 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
          0.24470972 = score(doc=862,freq=2.0), product of:
            0.43541256 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051357865 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.2 = coord(1/5)
    
    Source
    https://arxiv.org/abs/2212.06721
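    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output, and the reported score can be re-derived from the listed constants. A minimal sketch, assuming the stock ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, score = queryWeight * fieldWeight * coord):

        import math

        # Constants copied from the explain output for hit 1 (doc 862, term "_text_:3a").
        freq, doc_freq, max_docs = 2.0, 24, 44218
        query_norm = 0.051357865                         # depends on the whole query; taken as given
        field_norm = 0.046875                            # encoded length norm; taken as given
        coord = 1 / 5                                    # 1 of 5 query clauses matched

        tf = math.sqrt(freq)                             # 1.4142135
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011
        query_weight = idf * query_norm                  # 0.43541256
        field_weight = tf * idf * field_norm             # 0.56201804
        print(query_weight * field_weight * coord)       # ~0.048941944

    The same arithmetic reproduces the scores of the remaining six hits; only the matched term, docFreq and fieldNorm change from entry to entry.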
  2. Der Student aus dem Computer (2023) 0.02
    0.01948319 = product of:
      0.09741595 = sum of:
        0.09741595 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
          0.09741595 = score(doc=1079,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.5416616 = fieldWeight in 1079, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=1079)
      0.2 = coord(1/5)
    
    Date
    27. 1.2023 16:22:55
  3. Xiang, R.; Chersoni, E.; Lu, Q.; Huang, C.-R.; Li, W.; Long, Y.: Lexical data augmentation for sentiment analysis (2021) 0.01
    0.012117098 = product of:
      0.06058549 = sum of:
        0.06058549 = weight(_text_:thesaurus in 392) [ClassicSimilarity], result of:
          0.06058549 = score(doc=392,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2552809 = fieldWeight in 392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.0390625 = fieldNorm(doc=392)
      0.2 = coord(1/5)
    
    Abstract
    Machine learning methods, especially deep learning models, have achieved impressive performance on various natural language processing tasks, including sentiment analysis. However, deep learning models are more demanding in terms of training data. Data augmentation techniques are therefore widely used to generate new instances, either by modifying existing data or by drawing on external knowledge bases, in order to address the scarcity of annotated data, which otherwise hinders the full potential of machine learning techniques. This paper presents our work on part-of-speech (POS) focused lexical substitution for data augmentation (PLSDA), which enhances the performance of machine learning algorithms in sentiment analysis. We exploit POS information to identify the words to be replaced and investigate different augmentation strategies for finding semantically related substitutions when generating new instances. The choice of POS tags and a variety of strategies, such as semantics-based substitution methods and sampling methods, are discussed in detail. The performance evaluation focuses on comparing PLSDA with two earlier lexical substitution-based data augmentation methods, one thesaurus-based and the other based on lexicon manipulation. Our approach is tested on five English sentiment analysis benchmarks: SST-2, MR, IMDB, Twitter, and AirRecord. Hyperparameters such as the candidate similarity threshold and the number of newly generated instances are optimized. Results show that six classifiers (SVM, LSTM, BiLSTM-AT, bidirectional encoder representations from transformers [BERT], XLNet, and RoBERTa) trained with PLSDA achieve an accuracy improvement of more than 0.6% over the two earlier lexical substitution methods, averaged across the five benchmarks. Introducing POS constraints and well-designed augmentation strategies improves the reliability of lexical data augmentation methods. Consequently, PLSDA significantly improves the performance of sentiment analysis algorithms.
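    The abstract describes PLSDA only at a high level. As a rough illustration of POS-constrained lexical substitution (not the authors' implementation), the following sketch replaces one adjective or adverb per generated copy with a WordNet synonym of the same part of speech; the use of NLTK/WordNet, the restriction to JJ/RB tags and the single-replacement strategy are assumptions made for the example.

        import random

        import nltk
        from nltk.corpus import wordnet as wn

        # Requires the NLTK data packages 'punkt', 'averaged_perceptron_tagger' and 'wordnet'
        # (install once via nltk.download(...)).

        # Penn Treebank tag prefixes mapped to the WordNet POS classes allowed to be replaced;
        # restricting substitution to these tags is the "POS-focused" constraint.
        POS_MAP = {"JJ": wn.ADJ, "RB": wn.ADV}

        def augment(sentence, n_new=2, seed=0):
            """Return up to n_new augmented copies of `sentence`, each with one
            adjective/adverb swapped for a WordNet synonym of the same POS."""
            random.seed(seed)
            tokens = nltk.word_tokenize(sentence)
            tagged = nltk.pos_tag(tokens)
            out = []
            for _ in range(n_new):
                new_tokens = list(tokens)
                candidates = [(i, POS_MAP[tag[:2]])
                              for i, (_, tag) in enumerate(tagged) if tag[:2] in POS_MAP]
                random.shuffle(candidates)
                for i, wn_pos in candidates:
                    lemmas = {l.replace("_", " ")
                              for s in wn.synsets(tokens[i].lower(), pos=wn_pos)
                              for l in s.lemma_names() if l.lower() != tokens[i].lower()}
                    if lemmas:
                        new_tokens[i] = random.choice(sorted(lemmas))
                        break
                out.append(" ".join(new_tokens))
            return out

        print(augment("The movie was surprisingly good and truly memorable."))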
  4. Morris, V.: Automated language identification of bibliographic resources (2020) 0.01
    0.011133251 = product of:
      0.055666253 = sum of:
        0.055666253 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
          0.055666253 = score(doc=5749,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.30952093 = fieldWeight in 5749, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
      0.2 = coord(1/5)
    
    Date
    2. 3.2020 19:04:22
  5. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.01
    0.011133251 = product of:
      0.055666253 = sum of:
        0.055666253 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
          0.055666253 = score(doc=835,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.30952093 = fieldWeight in 835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=835)
      0.2 = coord(1/5)
    
    Date
    29.12.2022 18:22:55
  6. Rieger, F.: Lügende Computer (2023) 0.01
    0.011133251 = product of:
      0.055666253 = sum of:
        0.055666253 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
          0.055666253 = score(doc=912,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.30952093 = fieldWeight in 912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=912)
      0.2 = coord(1/5)
    
    Date
    16. 3.2023 19:22:55
  7. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.01
    0.006958282 = product of:
      0.03479141 = sum of:
        0.03479141 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
          0.03479141 = score(doc=1171,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.19345059 = fieldWeight in 1171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1171)
      0.2 = coord(1/5)
    
    Date
    23.11.2023 19:07:22
