Search (104 results, page 1 of 6)

  • theme_ss:"Computerlinguistik"
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.12
    0.12172589 = sum of:
      0.08709657 = product of:
        0.26128972 = sum of:
          0.26128972 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
            0.26128972 = score(doc=862,freq=2.0), product of:
              0.4649134 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.054837555 = queryNorm
              0.56201804 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.33333334 = coord(1/3)
      0.03462931 = product of:
        0.06925862 = sum of:
          0.06925862 = weight(_text_:work in 862) [ClassicSimilarity], result of:
            0.06925862 = score(doc=862,freq=4.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.3440991 = fieldWeight in 862, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.5 = coord(1/2)
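
     The breakdowns above are Lucene ClassicSimilarity (TF-IDF) explain trees: each term clause multiplies a queryWeight (idf x queryNorm) by a fieldWeight (sqrt(termFreq) x idf x fieldNorm), and coord() scales each sub-query by the fraction of its clauses that matched. A minimal Python sketch reproducing the arithmetic for result 1 (doc 862) from the numbers shown above; the function name is illustrative, not part of any Lucene API:

     import math

     def classic_sim_clause(freq, idf, query_norm, field_norm):
         """One term clause of a ClassicSimilarity explain tree."""
         query_weight = idf * query_norm                     # idf(t) * queryNorm
         field_weight = math.sqrt(freq) * idf * field_norm   # tf(freq) * idf(t) * fieldNorm
         return query_weight * field_weight

     QUERY_NORM = 0.054837555  # shared queryNorm in the breakdown above

     # weight(_text_:3a in 862): freq=2.0, idf=8.478011, fieldNorm=0.046875
     w_3a = classic_sim_clause(2.0, 8.478011, QUERY_NORM, 0.046875)
     # weight(_text_:work in 862): freq=4.0, idf=3.6703904, fieldNorm=0.046875
     w_work = classic_sim_clause(4.0, 3.6703904, QUERY_NORM, 0.046875)

     # coord(1/3) and coord(1/2) weight the two matching sub-queries
     total = w_3a * (1 / 3) + w_work * (1 / 2)
     print(round(total, 7))  # ~0.1217259, the score reported for result 1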
    
    Abstract
     This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summary and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
     https://arxiv.org/abs/2212.06721
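
     The abstract above mentions scoring chatbot output for readability and statistical clarity. As a hedged illustration only (this is not the authors' metric), a standard readability formula such as Flesch Reading Ease can be computed with a crude vowel-group syllable heuristic:

     import re

     def flesch_reading_ease(text):
         """Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
         Syllables are approximated by counting vowel groups, a rough heuristic."""
         n_sentences = max(1, len(re.findall(r"[.!?]+", text)))
         words = re.findall(r"[A-Za-z']+", text)
         n_words = max(1, len(words))
         n_syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
         return 206.835 - 1.015 * (n_words / n_sentences) - 84.6 * (n_syllables / n_words)

     # Higher scores mean easier text; comparing human and machine prose only needs the difference.
     print(round(flesch_reading_ease("The cat sat on the mat. It purred."), 1))
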
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.109385766 = sum of:
      0.08709657 = product of:
        0.26128972 = sum of:
          0.26128972 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.26128972 = score(doc=562,freq=2.0), product of:
              0.4649134 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.054837555 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.022289194 = product of:
        0.04457839 = sum of:
          0.04457839 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04457839 = score(doc=562,freq=2.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
  3. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.06
    0.06161146 = product of:
      0.12322292 = sum of:
        0.12322292 = sum of:
          0.07068679 = weight(_text_:work in 2541) [ClassicSimilarity], result of:
            0.07068679 = score(doc=2541,freq=6.0), product of:
              0.20127523 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.054837555 = queryNorm
              0.35119468 = fieldWeight in 2541, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
          0.052536134 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
            0.052536134 = score(doc=2541,freq=4.0), product of:
              0.19203177 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.054837555 = queryNorm
              0.27358043 = fieldWeight in 2541, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2541)
      0.5 = coord(1/2)
    
    Abstract
     The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
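
     The AZdict/ChemSpell description above comes down to ranking vocabulary entries by their similarity to a possibly misspelled query term. A minimal sketch of that general idea (string-similarity lookup against a word list; the toy vocabulary and threshold are invented for illustration, and this is not the NLM implementation):

     from difflib import SequenceMatcher

     def suggest(term, vocabulary, max_suggestions=5, threshold=0.6):
         """Rank vocabulary entries by string similarity to the query term."""
         scored = ((SequenceMatcher(None, term.lower(), w.lower()).ratio(), w)
                   for w in vocabulary)
         ranked = sorted(scored, reverse=True)[:max_suggestions]
         return [w for score, w in ranked if score >= threshold]

     # Toy word list; a real deployment would load SIS/UMLS-derived vocabularies.
     vocab = ["toxicology", "toxin", "benzene", "toluene", "dioxin"]
     print(suggest("toxocology", vocab))  # ['toxicology']
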
  4. ISO/DIS 1087-2:1994-09: Terminology work, vocabulary : pt.2: computational aids (1994) 0.03
    0.032648828 = product of:
      0.065297656 = sum of:
        0.065297656 = product of:
          0.13059531 = sum of:
            0.13059531 = weight(_text_:work in 2912) [ClassicSimilarity], result of:
              0.13059531 = score(doc=2912,freq=2.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.6488395 = fieldWeight in 2912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.125 = fieldNorm(doc=2912)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Warner, A.J.: Natural language processing (1987) 0.03
    0.029718926 = product of:
      0.059437852 = sum of:
        0.059437852 = product of:
          0.118875705 = sum of:
            0.118875705 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.118875705 = score(doc=337,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  6. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.03
    0.02600406 = product of:
      0.05200812 = sum of:
        0.05200812 = product of:
          0.10401624 = sum of:
            0.10401624 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.10401624 = score(doc=3164,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  7. Ruge, G.: A spreading activation network for automatic generation of thesaurus relationships (1991) 0.03
    0.02600406 = product of:
      0.05200812 = sum of:
        0.05200812 = product of:
          0.10401624 = sum of:
            0.10401624 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.10401624 = score(doc=4506,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  8. Somers, H.: Example-based machine translation : Review article (1999) 0.03
    0.02600406 = product of:
      0.05200812 = sum of:
        0.05200812 = product of:
          0.10401624 = sum of:
            0.10401624 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.10401624 = score(doc=6672,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  9. New tools for human translators (1997) 0.03
    0.02600406 = product of:
      0.05200812 = sum of:
        0.05200812 = product of:
          0.10401624 = sum of:
            0.10401624 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.10401624 = score(doc=1179,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  10. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.03
    0.02600406 = product of:
      0.05200812 = sum of:
        0.05200812 = product of:
          0.10401624 = sum of:
            0.10401624 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.10401624 = score(doc=3117,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  11. Der Student aus dem Computer (2023) 0.03
    0.02600406 = product of:
      0.05200812 = sum of:
        0.05200812 = product of:
          0.10401624 = sum of:
            0.10401624 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.10401624 = score(doc=1079,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  12. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.022289194 = product of:
      0.04457839 = sum of:
        0.04457839 = product of:
          0.08915678 = sum of:
            0.08915678 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.08915678 = score(doc=4483,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  13. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.022289194 = product of:
      0.04457839 = sum of:
        0.04457839 = product of:
          0.08915678 = sum of:
            0.08915678 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.08915678 = score(doc=4888,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  14. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.022289194 = product of:
      0.04457839 = sum of:
        0.04457839 = product of:
          0.08915678 = sum of:
            0.08915678 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.08915678 = score(doc=5429,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  15. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.02
    0.020200431 = product of:
      0.040400863 = sum of:
        0.040400863 = product of:
          0.080801725 = sum of:
            0.080801725 = weight(_text_:work in 267) [ClassicSimilarity], result of:
              0.080801725 = score(doc=267,freq=4.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.40144894 = fieldWeight in 267, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=267)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
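
     One common way to mine such relation candidates, given only a set of known ontology terms, is to count the words that occur between pairs of those terms within a sentence; frequent connectors become candidate relation labels. A rough sketch of that co-occurrence pattern (toy sentences and terms; not the authors' method):

     import itertools
     import re
     from collections import Counter

     STOPWORDS = frozenset({"the", "a", "of", "and", "to", "in"})

     def candidate_relation_terms(sentences, ontology_terms, top_n=10):
         """Count words appearing between two known ontology terms in a sentence."""
         counts = Counter()
         for sent in sentences:
             tokens = re.findall(r"[a-z]+", sent.lower())
             positions = {t: i for i, t in enumerate(tokens) if t in ontology_terms}
             for (_, i1), (_, i2) in itertools.combinations(positions.items(), 2):
                 lo, hi = sorted((i1, i2))
                 for w in tokens[lo + 1:hi]:
                     if w not in STOPWORDS and w not in ontology_terms:
                         counts[w] += 1
         return counts.most_common(top_n)

     sents = ["A frog eats insects in the pond.", "The heron eats frogs."]
     print(candidate_relation_terms(sents, {"frog", "insects", "heron", "frogs"}))  # [('eats', 2)]
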
  16. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    0.01857433 = product of:
      0.03714866 = sum of:
        0.03714866 = product of:
          0.07429732 = sum of:
            0.07429732 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.07429732 = score(doc=1463,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  17. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    0.01857433 = product of:
      0.03714866 = sum of:
        0.03714866 = product of:
          0.07429732 = sum of:
            0.07429732 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.07429732 = score(doc=5428,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  18. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.02
    0.01857433 = product of:
      0.03714866 = sum of:
        0.03714866 = product of:
          0.07429732 = sum of:
            0.07429732 = weight(_text_:22 in 1693) [ClassicSimilarity], result of:
              0.07429732 = score(doc=1693,freq=2.0), product of:
                0.19203177 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.054837555 = queryNorm
                0.38690117 = fieldWeight in 1693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1693)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:37:18
  19. The semantics of relationships : an interdisciplinary perspective (2002) 0.02
    0.017671697 = product of:
      0.035343394 = sum of:
        0.035343394 = product of:
          0.07068679 = sum of:
            0.07068679 = weight(_text_:work in 1430) [ClassicSimilarity], result of:
              0.07068679 = score(doc=1430,freq=6.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.35119468 = fieldWeight in 1430, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1430)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Work on relationships takes place in many communities, including, among others, data modeling, knowledge representation, natural language processing, linguistics, and information retrieval. Unfortunately, continued disciplinary splintering and specialization keeps any one person from being familiar with the full expanse of that work. By including contributions from experts in a variety of disciplines and backgrounds, this volume demonstrates both the parallels that inform work on relationships across a number of fields and the singular emphases that have yet to be fully embraced. The volume is organized into 3 parts: (1) Types of relationships (2) Relationships in knowledge representation and reasoning (3) Applications of relationships
  20. Hodgson, J.P.E.: Knowledge representation and language in AI (1991) 0.02
    0.017671697 = product of:
      0.035343394 = sum of:
        0.035343394 = product of:
          0.07068679 = sum of:
            0.07068679 = weight(_text_:work in 1529) [ClassicSimilarity], result of:
              0.07068679 = score(doc=1529,freq=6.0), product of:
                0.20127523 = queryWeight, product of:
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.054837555 = queryNorm
                0.35119468 = fieldWeight in 1529, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.6703904 = idf(docFreq=3060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1529)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The aim of this book is to highlight the relationship between knowledge representation and language in artificial intelligence, and in particular the way in which the choice of representation influences the language used to discuss a problem, and vice versa. Opening with a discussion of knowledge representation methods, and following this with a look at reasoning methods, the author begins to make his case for the intimate relationship between language and representation. He shows how each representation method fits particularly well with some reasoning methods and less so with others, using specific languages as examples. The question of representation change, an important and complex issue about which very little is known, is addressed. Dr Hodgson gathers together recent work on problem solving, showing how, in some cases, it has been possible to use representation changes to recast problems into a language that makes them easier to solve. The author maintains throughout that the relationships that this book explores lie at the heart of the construction of large systems, examining a number of the current large AI systems from the viewpoint of representation and language to prove his point.
    Classification
     ST 285 Computer science / Monographs / Software and software development / Computer supported cooperative work (CSCW), Groupware
     RVK
     ST 285 Computer science / Monographs / Software and software development / Computer supported cooperative work (CSCW), Groupware

Languages

  • e (English) 86
  • d (German) 17
  • chi (Chinese) 1

Types

  • a 84
  • el 12
  • m 9
  • s 4
  • p 3
  • x 3
  • d 1
  • n 1
