Search (80 results, page 1 of 4)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10162766 = sum of:
      0.08091931 = product of:
        0.24275793 = sum of:
          0.24275793 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24275793 = score(doc=562,freq=2.0), product of:
              0.43193975 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05094824 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020708349 = product of:
        0.041416697 = sum of:
          0.041416697 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041416697 = score(doc=562,freq=2.0), product of:
              0.17841205 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05094824 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8.1.2013 10:22:32
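    The nested breakdowns shown with each hit are Lucene "explain" output for its classic TF-IDF similarity. As a minimal sketch (the function below is ours, not part of the search interface), the arithmetic of result 1 can be reproduced from the printed factors using Lucene's documented ClassicSimilarity formulas: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each clause contributes queryWeight * fieldWeight * coord.

        import math

        def clause_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
            # One clause of Lucene's ClassicSimilarity, mirroring the explain trees above.
            tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
            idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 8.478011 for docFreq=24
            query_weight = idf * query_norm                    # 0.43193975
            field_weight = tf * idf * field_norm               # 0.56201804
            return query_weight * field_weight * coord

        # Result 1 (doc 562): term "3a" with coord(1/3) plus term "22" with coord(1/2)
        total = (clause_score(2.0, 24, 44218, 0.05094824, 0.046875, 1.0 / 3.0)
                 + clause_score(2.0, 3622, 44218, 0.05094824, 0.046875, 0.5))
        print(round(total, 8))  # ~0.10162766, the score shown for result 1

    The coord factors down-weight documents that match only some of the query's clauses, which is why the two partial scores are scaled by 1/3 and 1/2 before being summed.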
  2. Xianghao, G.; Yixin, Z.; Li, Y.: ¬A new method of news text understanding and abstracting based on speech acts theory (1998) 0.07
    0.06916585 = product of:
      0.1383317 = sum of:
        0.1383317 = product of:
          0.2766634 = sum of:
            0.2766634 = weight(_text_:news in 3532) [ClassicSimilarity], result of:
              0.2766634 = score(doc=3532,freq=10.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                1.0359797 = fieldWeight in 3532, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3532)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents a method for the automated analysis and comprehension of foreign affairs news produced by a Chinese news agency. Notes that the development of the method was preceded by a study of the structuring rules of the news. Describes how an abstract of the news story is produced automatically from the analysis. Stresses the main aim of the work, which is to use speech act theory to analyse and classify sentences.
  3. Blanchon, E.: Terminology software : pt.1.2 (1995) 0.05
    0.054130834 = product of:
      0.10826167 = sum of:
        0.10826167 = product of:
          0.21652333 = sum of:
            0.21652333 = weight(_text_:news in 6408) [ClassicSimilarity], result of:
              0.21652333 = score(doc=6408,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.8107823 = fieldWeight in 6408, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6408)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    TermNet news. 1995, no.46/47, S.5-35 (pt.1); no.48, S.1-34 (pt.2)
  4. Biselli, A.: Unter Generalverdacht durch Algorithmen (2014) 0.05
    0.046397857 = product of:
      0.092795715 = sum of:
        0.092795715 = product of:
          0.18559143 = sum of:
            0.18559143 = weight(_text_:news in 809) [ClassicSimilarity], result of:
              0.18559143 = score(doc=809,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.6949563 = fieldWeight in 809, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.09375 = fieldNorm(doc=809)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    http://www.golem.de/news/textanalyse-unter-generalverdacht-durch-algorithmen-1402-104637.html
  5. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.040459655 = product of:
      0.08091931 = sum of:
        0.08091931 = product of:
          0.24275793 = sum of:
            0.24275793 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.24275793 = score(doc=862,freq=2.0), product of:
                0.43193975 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05094824 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  6. Hayes, P.J.; Knecht, L.E.; Cellio, M.J.: ¬A news story categorization system (1988) 0.04
    0.03866488 = product of:
      0.07732976 = sum of:
        0.07732976 = product of:
          0.15465952 = sum of:
            0.15465952 = weight(_text_:news in 1954) [ClassicSimilarity], result of:
              0.15465952 = score(doc=1954,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.57913023 = fieldWeight in 1954, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1954)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Janssen, J.-K.: ChatGPT-Klon läuft lokal auf jedem Rechner : Alpaca/LLaMA ausprobiert (2023) 0.04
    0.03866488 = product of:
      0.07732976 = sum of:
        0.07732976 = product of:
          0.15465952 = sum of:
            0.15465952 = weight(_text_:news in 927) [ClassicSimilarity], result of:
              0.15465952 = score(doc=927,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.57913023 = fieldWeight in 927, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.078125 = fieldNorm(doc=927)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.heise.de/news/c-t-3003-ChatGPT-Klon-laeuft-lokal-auf-jedem-Rechner-Alpaca-LLaMA-ausprobiert-8004159.html?view=print
  8. Hahn, S.: DarkBERT ist mit Daten aus dem Darknet trainiert : ChatGPTs dunkler Bruder? (2023) 0.04
    0.03866488 = product of:
      0.07732976 = sum of:
        0.07732976 = product of:
          0.15465952 = sum of:
            0.15465952 = weight(_text_:news in 979) [ClassicSimilarity], result of:
              0.15465952 = score(doc=979,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.57913023 = fieldWeight in 979, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.078125 = fieldNorm(doc=979)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.heise.de/news/DarkBERT-ist-mit-Daten-aus-dem-Darknet-trainiert-ChatGPTs-dunkler-Bruder-9060809.html?view=print
  9. Panicheva, P.; Cardiff, J.; Rosso, P.: Identifying subjective statements in news titles using a personal sense annotation framework (2013) 0.03
    0.03280824 = product of:
      0.06561648 = sum of:
        0.06561648 = product of:
          0.13123296 = sum of:
            0.13123296 = weight(_text_:news in 968) [ClassicSimilarity], result of:
              0.13123296 = score(doc=968,freq=4.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.49140832 = fieldWeight in 968, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.046875 = fieldNorm(doc=968)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Subjective language contains information about private states. The goal of subjective language identification is to determine that a private state is expressed, without considering its polarity or specific emotion. A component of word meaning, "Personal Sense," has clear potential in the field of subjective language identification, as it reflects a meaning of words in terms of unique personal experience and carries personal characteristics. In this paper we investigate how Personal Sense can be harnessed for the purpose of identifying subjectivity in news titles. In the process, we develop a new Personal Sense annotation framework for annotating and classifying subjectivity, polarity, and emotion. The Personal Sense framework yields high performance in a fine-grained subsentence subjectivity classification. Our experiments demonstrate lexico-syntactic features to be useful for the identification of subjectivity indicators and the targets that receive the subjective Personal Sense.
  10. AL-Smadi, M.; Jaradat, Z.; AL-Ayyoub, M.; Jararweh, Y.: Paraphrase identification and semantic text similarity analysis in Arabic news tweets using lexical, syntactic, and semantic features (2017) 0.03
    0.03280824 = product of:
      0.06561648 = sum of:
        0.06561648 = product of:
          0.13123296 = sum of:
            0.13123296 = weight(_text_:news in 5095) [ClassicSimilarity], result of:
              0.13123296 = score(doc=5095,freq=4.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.49140832 = fieldWeight in 5095, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5095)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The rapid growth in digital information has raised considerable challenges, in particular when it comes to automated content analysis. Social media such as Twitter share a lot of their users' information about their events, opinions, personalities, etc. Paraphrase Identification (PI) is concerned with recognizing whether two texts have the same/similar meaning, whereas Semantic Text Similarity (STS) is concerned with the degree of that similarity. This research proposes a state-of-the-art approach for paraphrase identification and semantic text similarity analysis in Arabic news tweets. The approach adopts several phases of text processing, feature extraction and text classification. Lexical, syntactic, and semantic features are extracted to overcome the weakness and limitations of the current technologies in solving these tasks for the Arabic language. Maximum Entropy (MaxEnt) and Support Vector Regression (SVR) classifiers are trained using these features and are evaluated using a dataset prepared for this research. The experimental results show that the approach achieves good results in comparison to the baseline results.
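    The abstract above distinguishes a binary task (PI) from a graded one (STS). As a deliberately simple lexical baseline, not the authors' lexical/syntactic/semantic feature pipeline or their MaxEnt/SVR classifiers, the two tasks can be sketched like this (the 0.6 threshold is an arbitrary illustration):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        def sts_score(a: str, b: str) -> float:
            # STS: the degree of similarity between two texts, here cosine over TF-IDF.
            tfidf = TfidfVectorizer().fit_transform([a, b])
            return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

        def is_paraphrase(a: str, b: str, threshold: float = 0.6) -> bool:
            # PI: a yes/no decision, here obtained by thresholding the STS degree.
            return sts_score(a, b) >= threshold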
  11. Pritchard-Schoch, T.: Natural language comes of age (1993) 0.03
    0.030931905 = product of:
      0.06186381 = sum of:
        0.06186381 = product of:
          0.12372762 = sum of:
            0.12372762 = weight(_text_:news in 2570) [ClassicSimilarity], result of:
              0.12372762 = score(doc=2570,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.4633042 = fieldWeight in 2570, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2570)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Discusses natural languages and WIN (Westlaw Is Natural), the natural language implementation for searching Westlaw's full-text legal documents. Natural language is not artificial intelligence but a hybrid of linguistics, mathematics and statistics. Provides 3 classes of retrieval models. Explains how Westlaw processes an English query. Assesses WIN. Covers WIN enhancements; the natural language features of Congressional Quarterly's Washington Alert using a document for a query; the Personal Librarian front-end search software; and DowQuest from Dow Jones News/Retrieval. Considers whether natural language encourages fuzzy thinking and whether Boolean logic will still be needed.
  12. NUANCE XT9 : Neue Schreibhilfe für Mobiltelefone (2008) 0.03
    0.030931905 = product of:
      0.06186381 = sum of:
        0.06186381 = product of:
          0.12372762 = sum of:
            0.12372762 = weight(_text_:news in 2268) [ClassicSimilarity], result of:
              0.12372762 = score(doc=2268,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.4633042 = fieldWeight in 2268, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2268)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    Aktuell News
  13. Was ist GPT-3 und spricht das Modell Deutsch? (2022) 0.03
    0.030931905 = product of:
      0.06186381 = sum of:
        0.06186381 = product of:
          0.12372762 = sum of:
            0.12372762 = weight(_text_:news in 868) [ClassicSimilarity], result of:
              0.12372762 = score(doc=868,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.4633042 = fieldWeight in 868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=868)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    GPT-3 is a language-processing model from the American non-profit organization OpenAI. It uses deep learning to generate, summarize, simplify, or translate text. GPT-3 has made repeated headlines since the publication of a research paper. Several newspapers and online publications tested its capabilities and published entire articles written by the AI model, among them The Guardian and Hacker News. Journalists around the globe have called it a "language talent", an "artificial general intelligence", or simply "eloquent". Reason enough to take a closer look at the abilities of this artificial language prodigy.
  14. Bischoff, M.: Wie eine KI lernt, sich selbst zu erklären (2023) 0.03
    0.030931905 = product of:
      0.06186381 = sum of:
        0.06186381 = product of:
          0.12372762 = sum of:
            0.12372762 = weight(_text_:news in 956) [ClassicSimilarity], result of:
              0.12372762 = score(doc=956,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.4633042 = fieldWeight in 956, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0625 = fieldNorm(doc=956)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.spektrum.de/news/sprachmodelle-auf-dem-weg-zu-einer-erklaerbaren-ki/2132727#Echobox=1682669561?utm_source=pocket-newtab-global-de-DE
  15. Warner, A.J.: Natural language processing (1987) 0.03
    0.027611133 = product of:
      0.055222265 = sum of:
        0.055222265 = product of:
          0.11044453 = sum of:
            0.11044453 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.11044453 = score(doc=337,freq=2.0), product of:
                0.17841205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05094824 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  16. Malo, P.; Sinha, A.; Korhonen, P.; Wallenius, J.; Takala, P.: Good debt or bad debt : detecting semantic orientations in economic texts (2014) 0.03
    0.0273402 = product of:
      0.0546804 = sum of:
        0.0546804 = product of:
          0.1093608 = sum of:
            0.1093608 = weight(_text_:news in 1226) [ClassicSimilarity], result of:
              0.1093608 = score(doc=1226,freq=4.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40950692 = fieldWeight in 1226, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1226)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The use of robo-readers to analyze news texts is an emerging technology trend in computational finance. Recent research has developed sophisticated financial polarity lexicons for investigating how financial sentiments relate to future company performance. However, based on experience from fields that commonly analyze sentiment, it is well known that the overall semantic orientation of a sentence may differ from that of individual words. This article investigates how semantic orientations can be better detected in financial and economic news by accommodating the overall phrase-structure information and domain-specific use of language. Our three main contributions are the following: (a) a human-annotated finance phrase bank that can be used for training and evaluating alternative models; (b) a technique to enhance financial lexicons with attributes that help to identify expected direction of events that affect sentiment; and (c) a linearized phrase-structure model for detecting contextual semantic orientations in economic texts. The relevance of the newly added lexicon features and the benefit of using the proposed learning algorithm are demonstrated in a comparative study against general sentiment models as well as the popular word frequency models used in recent financial studies. The proposed framework is parsimonious and avoids the explosion in feature space caused by the use of conventional n-gram features.
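    As a toy illustration of the paper's core point, that the orientation of a phrase can differ from the sum of its words, here is a naive word-level lexicon scorer that a single hand-made phrase rule overrides (the lexicon and rule are purely illustrative, not the authors' finance phrase bank or linearized phrase-structure model):

        # Hand-made toy lexicon and phrase rule; illustrative only.
        LEXICON = {"good": 1, "bad": -1, "debt": -1, "growth": 1}
        PHRASE_RULES = {("good", "debt"): 1}  # in finance, "good debt" can be positive

        def naive_polarity(tokens):
            # Word-level sum: the signals in "good debt" cancel out to 0.
            return sum(LEXICON.get(t, 0) for t in tokens)

        def phrase_aware_polarity(tokens):
            # Phrase context decides before falling back to the word-level sum.
            for pair, polarity in PHRASE_RULES.items():
                if pair in zip(tokens, tokens[1:]):
                    return polarity
            return naive_polarity(tokens)

        print(naive_polarity("good debt".split()))         # 0
        print(phrase_aware_polarity("good debt".split()))  # 1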
  17. Mock, K.J.; Vemuri, V.R.: Information filtering via hill climbing, WordNet, and index patterns (1997) 0.03
    0.027065417 = product of:
      0.054130834 = sum of:
        0.054130834 = product of:
          0.10826167 = sum of:
            0.10826167 = weight(_text_:news in 1517) [ClassicSimilarity], result of:
              0.10826167 = score(doc=1517,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40539116 = fieldWeight in 1517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1517)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The INFOS (Intelligent News Filtering Organizational System) project is designed to reduce the user's search burden by automatically categorising data as relevant or irrelevant based upon user interests. These predictions are learned automatically based upon features taken from input articles and collaborative features derived from other users. The filtering is performed by a hybrid technique that combines elements of a keyword-based hill climbing method, knowledge-based conceptual representation via WordNet, and partial parsing via index patterns. The hybrid system integrating all these approaches combines the benefits of each while maintaining robustness and scalability.
  18. Moens, M.F.; Dumortier, J.: Use of a text grammar for generating highlight abstracts of magazine articles (2000) 0.03
    0.027065417 = product of:
      0.054130834 = sum of:
        0.054130834 = product of:
          0.10826167 = sum of:
            0.10826167 = weight(_text_:news in 4540) [ClassicSimilarity], result of:
              0.10826167 = score(doc=4540,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40539116 = fieldWeight in 4540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4540)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Browsing a database of article abstracts is one way to select and buy relevant magazine articles online. Our research contributes to the design and development of text grammars for abstracting texts in unlimited subject domains. We developed a system that parses texts based on the text grammar of a specific text type and that extracts sentences and statements which are relevant for inclusion in the abstracts. The system employs knowledge of the discourse patterns that are typical of news stories. The results are encouraging and demonstrate the importance of discourse structures in text summarisation.
  19. Holland, M.: Erstes wissenschaftliches Buch eines Algorithmus' veröffentlicht (2019) 0.03
    0.027065417 = product of:
      0.054130834 = sum of:
        0.054130834 = product of:
          0.10826167 = sum of:
            0.10826167 = weight(_text_:news in 5227) [ClassicSimilarity], result of:
              0.10826167 = score(doc=5227,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40539116 = fieldWeight in 5227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5227)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    Heise Online: News
  20. Al-Khatib, K.; Ghosa, T.; Hou, Y.; Waard, A. de; Freitag, D.: Argument mining for scholarly document processing : taking stock and looking ahead (2021) 0.03
    0.027065417 = product of:
      0.054130834 = sum of:
        0.054130834 = product of:
          0.10826167 = sum of:
            0.10826167 = weight(_text_:news in 568) [ClassicSimilarity], result of:
              0.10826167 = score(doc=568,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.40539116 = fieldWeight in 568, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=568)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Argument mining targets structures in natural language related to interpretation and persuasion. Most scholarly discourse involves interpreting experimental evidence and attempting to persuade other scientists to adopt the same conclusions, which could benefit from argument mining techniques. However, while various argument mining studies have addressed student essays and news articles, those that target scientific discourse are still scarce. This paper surveys existing work in argument mining of scholarly discourse, and provides an overview of current models, data, tasks, and applications. We identify a number of key challenges confronting argument mining in the scientific domain, and suggest some possible solutions and future directions.

Languages

  • e 53
  • d 26
  • chi 1
  • f 1
  • m 1

Types

  • a 62
  • el 14
  • m 7
  • s 6
  • p 2
  • x 2
  • d 1