Search (66 results, page 1 of 4)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    0.07164219 = sum of:
      0.053415764 = product of:
        0.21366306 = sum of:
          0.21366306 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21366306 = score(doc=562,freq=2.0), product of:
              0.38017118 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.044842023 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.018226424 = product of:
        0.03645285 = sum of:
          0.03645285 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.03645285 = score(doc=562,freq=2.0), product of:
              0.15702912 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044842023 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
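The score breakdowns in this listing follow Lucene's ClassicSimilarity (classic TF-IDF). As a sanity check on the arithmetic of the first breakdown (doc 562), here is a minimal Python sketch using only the constants printed above; the function and variable names are our own, not part of the retrieval system:

```python
import math

# Reproduce the Lucene ClassicSimilarity arithmetic for result 1 (doc 562).
# All constants are copied from the explain output; tf(freq) = sqrt(freq).

def field_weight(freq, idf, field_norm):
    # fieldWeight = tf(freq) * idf * fieldNorm
    return math.sqrt(freq) * idf * field_norm

def term_score(freq, idf, query_norm, field_norm):
    # score = queryWeight * fieldWeight, with queryWeight = idf * queryNorm
    query_weight = idf * query_norm
    return query_weight * field_weight(freq, idf, field_norm)

QUERY_NORM = 0.044842023
FIELD_NORM = 0.046875  # fieldNorm(doc=562)

# weight(_text_:3a): idf = 8.478011, freq = 2.0, scaled by coord(1/4)
part_3a = term_score(2.0, 8.478011, QUERY_NORM, FIELD_NORM) * 0.25

# weight(_text_:22): idf = 3.5018296, freq = 2.0, scaled by coord(1/2)
part_22 = term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM) * 0.5

total = part_3a + part_22
print(total)  # close to the reported sum of 0.07164219
```

The same recipe (tf = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, times the coord factor) reproduces every other score tree on this page.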
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Rahmstorf, G.: Compositional semantics and concept representation (1991) 0.03
    0.026967188 = product of:
      0.053934377 = sum of:
        0.053934377 = product of:
          0.10786875 = sum of:
            0.10786875 = weight(_text_:e.g in 6673) [ClassicSimilarity], result of:
              0.10786875 = score(doc=6673,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.4611081 = fieldWeight in 6673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Concept systems are not only used in the sciences, but also in secondary supporting fields, e.g. in libraries, in documentation, in terminology and increasingly also in knowledge representation. It is suggested that the development of concept systems be based on semantic analysis. Methodical steps are described. The principle of morpho-syntactic composition in semantics will serve as a theoretical basis for the suggested method. The implications and limitations of this principle will be demonstrated.
  3. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.026707882 = product of:
      0.053415764 = sum of:
        0.053415764 = product of:
          0.21366306 = sum of:
            0.21366306 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21366306 = score(doc=862,freq=2.0), product of:
                0.38017118 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044842023 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  4. Warner, A.J.: Natural language processing (1987) 0.02
    0.0243019 = product of:
      0.0486038 = sum of:
        0.0486038 = product of:
          0.0972076 = sum of:
            0.0972076 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.0972076 = score(doc=337,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  5. Gomez, F.: Learning word syntactic subcategorizations interactively (1995) 0.02
    0.02359629 = product of:
      0.04719258 = sum of:
        0.04719258 = product of:
          0.09438516 = sum of:
            0.09438516 = weight(_text_:e.g in 3130) [ClassicSimilarity], result of:
              0.09438516 = score(doc=3130,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.40346956 = fieldWeight in 3130, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3130)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes learning algorithms that acquire syntactic knowledge for a parser from sample sentences entered by users who have no knowledge of the parser or English syntax. It is shown how the subcategorization of verbs, nouns and adjectives can be inferred from sample sentences entered by end users. Then, if the parser fails to parse a sentence, e.g. Peter knows how to read books, because it has limited knowledge or no knowledge at all about 'know', an interface which incorporates the acquisition algorithms can be activated, and 'know' can be defined by entering some sample sentences, one of which can be the sentence that the parser failed to parse.
  6. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.08505665 = score(doc=3164,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  7. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.08505665 = score(doc=4506,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  8. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.08505665 = score(doc=6672,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  9. New tools for human translators (1997) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.08505665 = score(doc=1179,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  10. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.08505665 = score(doc=3117,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  11. ¬Der Student aus dem Computer (2023) 0.02
    0.021264162 = product of:
      0.042528324 = sum of:
        0.042528324 = product of:
          0.08505665 = sum of:
            0.08505665 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.08505665 = score(doc=1079,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  12. Montgomery, C.A.: Linguistics and information science (1972) 0.02
    0.020225393 = product of:
      0.040450785 = sum of:
        0.040450785 = product of:
          0.08090157 = sum of:
            0.08090157 = weight(_text_:e.g in 6669) [ClassicSimilarity], result of:
              0.08090157 = score(doc=6669,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.34583107 = fieldWeight in 6669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6669)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper defines the relationship between linguistics and information science in terms of a common interest in natural language. The notion of automated processing of natural language - i.e., machine simulation of the language processing activities of a human - provides novel possibilities for interaction between linguists, who have a theoretical interest in such activities, and information scientists, who have more practical goals, e.g. simulating the language processing activities of an indexer with a machine. The concept of a natural language information system is introduced as a framework for reviewing automated language processing efforts by computational linguists and information scientists. In terms of this framework, the former have concentrated on automating the operations of the component for content analysis and representation, while the latter have emphasized the data management component. The complementary nature of these developments allows the postulation of an integrated approach to automated language processing. This approach, which is outlined in the final sections of the paper, incorporates current notions in linguistic theory and information science, as well as design features of recent computational linguistic models.
  13. Losee, R.M.: Learning syntactic rules and tags with genetic algorithms for information retrieval and filtering : an empirical basis for grammatical rules (1996) 0.02
    0.020225393 = product of:
      0.040450785 = sum of:
        0.040450785 = product of:
          0.08090157 = sum of:
            0.08090157 = weight(_text_:e.g in 4068) [ClassicSimilarity], result of:
              0.08090157 = score(doc=4068,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.34583107 = fieldWeight in 4068, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4068)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The grammars of natural languages may be learned by using genetic algorithms that reproduce and mutate grammatical rules and parts of speech tags, improving the quality of later generations of grammatical components. Syntactic rules are randomly generated and then evolve; those rules resulting in improved parsing and occasionally improved filtering performance are allowed to further propagate. The LUST system learns the characteristics of the language or sublanguage used in document abstracts by learning from the document rankings obtained from the parsed abstracts. Unlike the application of traditional linguistic rules to retrieval and filtering applications, LUST develops grammatical structures and tags without the prior imposition of some common grammatical assumptions (e.g. part of speech assumptions), producing grammars that are empirically based and are optimized for this particular application.
  14. Peis, E.; Herrera-Viedma, E.; Herrera, J.C.: On the evaluation of XML documents using Fuzzy linguistic techniques (2003) 0.02
    0.020225393 = product of:
      0.040450785 = sum of:
        0.040450785 = product of:
          0.08090157 = sum of:
            0.08090157 = weight(_text_:e.g in 2778) [ClassicSimilarity], result of:
              0.08090157 = score(doc=2778,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.34583107 = fieldWeight in 2778, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2778)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Recommender systems evaluate and filter the great amount of information available on the Web to assist people in their search processes. A fuzzy evaluation method of XML documents based on computing with words is presented. Given an XML document type (e.g. scientific article), we consider that its elements are not equally informative. This is indicated by the use of a DTD and by defining linguistic importance attributes for the more meaningful elements of the DTD designed. Then, the evaluation method generates linguistic recommendations from linguistic evaluation judgements provided by different recommenders on meaningful elements of the DTD.
  15. Altmann, E.G.; Cristadoro, G.; Esposti, M.D.: On the origin of long-range correlations in texts (2012) 0.02
    0.020225393 = product of:
      0.040450785 = sum of:
        0.040450785 = product of:
          0.08090157 = sum of:
            0.08090157 = weight(_text_:e.g in 330) [ClassicSimilarity], result of:
              0.08090157 = score(doc=330,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.34583107 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.046875 = fieldNorm(doc=330)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Schöneberg, U.; Sperber, W.: POS tagging and its applications for mathematics (2014) 0.02
    0.020225393 = product of:
      0.040450785 = sum of:
        0.040450785 = product of:
          0.08090157 = sum of:
            0.08090157 = weight(_text_:e.g in 1748) [ClassicSimilarity], result of:
              0.08090157 = score(doc=1748,freq=2.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.34583107 = fieldWeight in 1748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Content analysis of scientific publications is a nontrivial task, but a useful and important one for scientific information services. In the Gutenberg era it was a domain of human experts; in the digital age many machine-based methods, e.g., graph analysis tools and machine-learning techniques, have been developed for it. Natural Language Processing (NLP) is a powerful machine-learning approach to semiautomatic speech and language processing, which is also applicable to mathematics. The well established methods of NLP have to be adjusted for the special needs of mathematics, in particular for handling mathematical formulae. We demonstrate a mathematics-aware part of speech tagger and give a short overview about our adaptation of NLP methods for mathematical publications. We show the use of the tools developed for key phrase extraction and classification in the database zbMATH.
  17. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.018226424 = product of:
      0.03645285 = sum of:
        0.03645285 = product of:
          0.0729057 = sum of:
            0.0729057 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.0729057 = score(doc=4483,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  18. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.018226424 = product of:
      0.03645285 = sum of:
        0.03645285 = product of:
          0.0729057 = sum of:
            0.0729057 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.0729057 = score(doc=4888,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  19. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.018226424 = product of:
      0.03645285 = sum of:
        0.03645285 = product of:
          0.0729057 = sum of:
            0.0729057 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.0729057 = score(doc=5429,freq=2.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  20. Karlova-Bourbonus, N.: Automatic detection of contradictions in texts (2018) 0.02
    0.017515704 = product of:
      0.035031408 = sum of:
        0.035031408 = product of:
          0.070062816 = sum of:
            0.070062816 = weight(_text_:e.g in 5976) [ClassicSimilarity], result of:
              0.070062816 = score(doc=5976,freq=6.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.2994985 = fieldWeight in 5976, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5976)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Natural language contradictions are of complex nature. As will be shown in Chapter 5, the realization of contradictions is not limited to examples such as Socrates is a man and Socrates is not a man (under the condition that Socrates refers to the same object in the real world), which is discussed by Aristotle (Section 3.1.1). Empirical evidence (see Chapter 5 for more details) shows that only a few contradictions occurring in real life are of that explicit (prototypical) kind. Rather, contradictions make use of a variety of natural language devices such as, e.g., paraphrasing, synonyms and antonyms, passive and active voice, diversity of negation expression, and figurative linguistic means such as idioms, irony, and metaphors. Additionally, the most sophisticated kind of contradictions, the so-called implicit contradictions, can be found only when applying world knowledge and after conducting a sequence of logical operations such as e.g. in: (1.1) The first prize was given to the experienced grandmaster L. Stein who, in total, collected ten points (7 wins and 3 draws). Those familiar with the chess rules know that a chess player gets one point for winning and zero points for losing the game. In case of a draw, each player gets a half point. Built on this idea and by conducting some simple mathematical operations, we can infer that in the case of 7 wins and 3 draws (the second part of the sentence), a player can only collect 8.5 points and not 10 points. Hence, we observe that there is a contradiction between the first and the second parts of the sentence.
    Implicit contradictions will only partially be the subject of the present study, aiming primarily at identifying the realization mechanism and cues (Chapter 5) as well as finding the parts of contradictions by applying state-of-the-art algorithms for natural language processing without conducting deep meaning processing. Further in focus are the explicit and implicit contradictions that can be detected by means of explicit linguistic, structural, and lexical cues, and by conducting some additional processing operations (e.g., counting the sum in order to detect contradictions arising from numerical divergencies). One should note that an additional complexity in finding contradictions can arise in case parts of the contradictions occur on different levels of realization. Thus, a contradiction can be observed on the word- and phrase-level, such as in a married bachelor (for variations of contradictions on the lexical level, see Ganeev 2004), on the sentence level - between parts of a sentence or between two or more sentences - or on the text level - between portions of a text or between whole texts, such as a contradiction between the Bible and the Quran, for example. Only contradictions arising at the level of single sentences occurring in one or more texts, as well as parts of a sentence, will be considered for the purpose of this study. Though the focus of interest will be on single sentences, it will make use of text particularities such as coreference resolution without establishing the referents in the real world. Finally, another aspect to be considered is that parts of the contradictions do not necessarily appear at the same time. They can be separated by many years and centuries, with or without a time expression, making their recognition by humans and detection by machine challenging.
    According to Aristotle's ontological version of the LNC (Section 3.1.1), however, the same time reference is required in order for two statements to be judged as a contradiction. Taking this into account, we set the borders for the study by limiting the analyzed textual data thematically (only nine world events) and temporally (three days after the reported event had happened) (Section 5.1). No sophisticated time processing will thus be conducted.

Languages

  • e 48
  • d 18

Types

  • a 53
  • el 7
  • m 5
  • s 3
  • x 3
  • p 2
  • d 1