Search (130 results, page 1 of 7)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.27
    0.2686444 = product of:
      0.47012764 = sum of:
        0.06479234 = product of:
          0.19437702 = sum of:
            0.19437702 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.19437702 = score(doc=562,freq=2.0), product of:
                0.34585547 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04079441 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.19437702 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.19437702 = score(doc=562,freq=2.0), product of:
            0.34585547 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04079441 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.19437702 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.19437702 = score(doc=562,freq=2.0), product of:
            0.34585547 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04079441 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.016581237 = product of:
          0.033162475 = sum of:
            0.033162475 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.033162475 = score(doc=562,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.5714286 = coord(4/7)
    
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
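The indented breakdowns beneath each result are Lucene "explain" traces for the ClassicSimilarity (tf-idf) scorer. As a rough check that the displayed score of 0.27 for result 1 follows from its trace, the Python sketch below recomputes the tree bottom-up; the formulas used (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, with a coord factor at each boolean level) are the standard ClassicSimilarity definitions and are assumed here rather than stated on this page.

```python
import math

# Reconstruct the explain trace of result 1 (doc 562) under the assumed
# ClassicSimilarity formulas noted above.
QUERY_NORM = 0.04079441
FIELD_NORM = 0.046875
MAX_DOCS = 44218

def idf(doc_freq: int) -> float:
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(freq: float, doc_freq: int) -> float:
    tf = math.sqrt(freq)                             # 1.4142135 for freq = 2.0
    query_weight = idf(doc_freq) * QUERY_NORM        # 0.34585547 for docFreq = 24
    field_weight = tf * idf(doc_freq) * FIELD_NORM   # 0.56201804 for docFreq = 24
    return query_weight * field_weight               # 0.19437702 for docFreq = 24

rare = term_score(2.0, 24)      # the "3a" and "2f" terms (idf ~ 8.478011)
common = term_score(2.0, 3622)  # the "22" term (idf ~ 3.5018296)

score = (rare * (1 / 3)         # "3a" clause, scaled by coord(1/3)
         + rare + rare          # the two "2f" clauses
         + common * (1 / 2)     # "22" clause, scaled by coord(1/2)
         ) * (4 / 7)            # top-level coord(4/7): 4 of 7 query clauses matched

print(round(score, 7))          # ~0.2686444, the score shown for result 1
```

The URL-fragment tokens ("3a", "2f", left over from percent-encoded links in the records) dominate the score simply because their document frequency is so low (24 of 44218 documents) that their idf far exceeds that of ordinary terms like "22".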
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.19
    0.19437703 = product of:
      0.4535464 = sum of:
        0.06479234 = product of:
          0.19437702 = sum of:
            0.19437702 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.19437702 = score(doc=862,freq=2.0), product of:
                0.34585547 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04079441 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.19437702 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.19437702 = score(doc=862,freq=2.0), product of:
            0.34585547 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04079441 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
        0.19437702 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.19437702 = score(doc=862,freq=2.0), product of:
            0.34585547 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04079441 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.42857143 = coord(3/7)
    
    Source
     https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.17
    0.17371511 = product of:
      0.40533528 = sum of:
        0.19437702 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.19437702 = score(doc=563,freq=2.0), product of:
            0.34585547 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04079441 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.19437702 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.19437702 = score(doc=563,freq=2.0), product of:
            0.34585547 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04079441 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.016581237 = product of:
          0.033162475 = sum of:
            0.033162475 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.033162475 = score(doc=563,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Content
     A thesis presented to the University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Kracht, M.: Mathematical linguistics (2002) 0.03
    0.03233275 = product of:
      0.11316462 = sum of:
        0.0522702 = weight(_text_:case in 3572) [ClassicSimilarity], result of:
          0.0522702 = score(doc=3572,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.29144385 = fieldWeight in 3572, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=3572)
        0.06089442 = weight(_text_:studies in 3572) [ClassicSimilarity], result of:
          0.06089442 = score(doc=3572,freq=4.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.37408823 = fieldWeight in 3572, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=3572)
      0.2857143 = coord(2/7)
    
    Abstract
     This book studies language(s) and linguistic theories from a mathematical point of view. Starting with ideas already contained in Montague's work, it develops the mathematical foundations of present-day linguistics. It equips the reader with all the background necessary to understand and evaluate theories as diverse as Montague Grammar, Categorial Grammar, HPSG and GB. The mathematical tools are mainly from universal algebra and logic, but no particular knowledge is presupposed beyond a certain mathematical sophistication that is in any case needed in order to fruitfully work within these theories. The presentation focuses on abstract mathematical structures and their computational properties, but plenty of examples from different natural languages are provided to illustrate the main concepts and results. In contrast to books devoted to so-called formal language theory, languages are seen here as semiotic systems, that is, as systems of signs. A language sign correlates form with meaning. Using the principle of compositionality, it is possible to gain substantial insight into the interaction between form and meaning in natural languages.
    Series
    Studies in generative grammar; 63
  5. Andrushchenko, M.; Sandberg, K.; Turunen, R.; Marjanen, J.; Hatavara, M.; Kurunmäki, J.; Nummenmaa, T.; Hyvärinen, M.; Teräs, K.; Peltonen, J.; Nummenmaa, J.: Using parsed and annotated corpora to analyze parliamentarians' talk in Finland (2022) 0.03
    0.032098964 = product of:
      0.112346366 = sum of:
        0.061601017 = weight(_text_:case in 471) [ClassicSimilarity], result of:
          0.061601017 = score(doc=471,freq=4.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.34346986 = fieldWeight in 471, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=471)
        0.050745346 = weight(_text_:studies in 471) [ClassicSimilarity], result of:
          0.050745346 = score(doc=471,freq=4.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.3117402 = fieldWeight in 471, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=471)
      0.2857143 = coord(2/7)
    
    Abstract
     We present a search system for grammatically analyzed corpora of Finnish parliamentary records and interviews with former parliamentarians, annotated with metadata of talk structure and involved parliamentarians, and discuss their use through carefully chosen digital humanities case studies. We first introduce the construction, contents, and principles of use of the corpora. Then we discuss the application of the search system and the corpora to study how politicians talk about power, how ideological terms are used in political speech, and how to identify narratives in the data. All case studies stem from questions in the humanities and the social sciences, but rely on the grammatically parsed corpora in both identifying and quantifying passages of interest. Finally, the paper discusses the role of natural language processing methods for questions in the (digital) humanities. It makes the claim that a digital humanities inquiry of parliamentary speech and interviews with politicians cannot rely only on computational humanities modeling, but needs to accommodate a range of perspectives, starting with simple searches and quantitative exploration and ending with modeling. Furthermore, the digital humanities need a more thorough discussion about how the utilization of tools from information science and technologies alters the research questions posed in the humanities.
  6. Melby, A.: Some notes on 'The proper place of men and machines in language translation' (1997) 0.02
    0.02295048 = product of:
      0.08032668 = sum of:
        0.0609819 = weight(_text_:case in 330) [ClassicSimilarity], result of:
          0.0609819 = score(doc=330,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.34001783 = fieldWeight in 330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0546875 = fieldNorm(doc=330)
        0.019344779 = product of:
          0.038689557 = sum of:
            0.038689557 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
              0.038689557 = score(doc=330,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.2708308 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=330)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
     Responds to Kay, M.: The proper place of men and machines in language translation. Examines the appropriateness of machine translation (MT) under the following special circumstances: controlled domain-specific text and high-quality output; controlled domain-specific text and indicative output; dynamic general text and indicative output; and dynamic general text and high-quality output. MT is appropriate in the first three cases, but the fourth case requires human translation. Examines how MT research could be more useful for aiding human translation.
    Date
    31. 7.1996 9:22:19
  7. Tao, J.; Zhou, L.; Hickey, K.: Making sense of the black-boxes : toward interpretable text classification using deep learning models (2023) 0.02
    0.022697395 = product of:
      0.07944088 = sum of:
        0.043558497 = weight(_text_:case in 990) [ClassicSimilarity], result of:
          0.043558497 = score(doc=990,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.24286987 = fieldWeight in 990, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=990)
        0.03588238 = weight(_text_:studies in 990) [ClassicSimilarity], result of:
          0.03588238 = score(doc=990,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.22043361 = fieldWeight in 990, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=990)
      0.2857143 = coord(2/7)
    
    Abstract
     Text classification is a common task in data science. Despite the superior performance of deep learning-based models in various text classification tasks, their black-box nature poses significant challenges for wide adoption. The knowledge-to-action framework emphasizes several principles concerning the application and use of knowledge, such as ease-of-use, customization, and feedback. With the guidance of the above principles and the properties of interpretable machine learning, we identify the design requirements for and propose an interpretable deep learning (IDeL) based framework for text classification models. IDeL comprises three main components: feature penetration, instance aggregation, and feature perturbation. We evaluate our implementation of the framework with two distinct case studies: fake news detection and social question categorization. The experimental results provide evidence for the efficacy of IDeL components in enhancing the interpretability of text classification models. Moreover, the findings are generalizable across binary and multi-label, multi-class classification problems. The proposed IDeL framework introduces a unique iField perspective for building trusted models in data science by improving the transparency of and access to advanced black-box models.
  8. Semantic role universals and argument linking : theoretical, typological, and psycholinguistic perspectives (2006) 0.02
    0.018157914 = product of:
      0.0635527 = sum of:
        0.034846798 = weight(_text_:case in 3670) [ClassicSimilarity], result of:
          0.034846798 = score(doc=3670,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.1942959 = fieldWeight in 3670, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=3670)
        0.028705904 = weight(_text_:studies in 3670) [ClassicSimilarity], result of:
          0.028705904 = score(doc=3670,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.17634688 = fieldWeight in 3670, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=3670)
      0.2857143 = coord(2/7)
    
    Content
     Contents: Argument hierarchy and other factors determining argument realization / Dieter Wunderlich - Mismatches in semantic-role hierarchies and the dimensions of role semantics / Beatrice Primus - Thematic roles : universal, particular, and idiosyncratic aspects / Manfred Bierwisch - Experiencer constructions in Daghestanian languages / Bernard Comrie and Helma van den Berg - Clause-level vs. predicate-level linking / Balthasar Bickel - From meaning to syntax: semantic roles and beyond / Walter Bisang - Meaning, form and function in basic case roles / Georg Bossong - Semantic macroroles and language processing / Robert D. Van Valin, Jr. - Thematic roles as event structure relations / Maria Mercedes Pinango - Generalised semantic roles and syntactic templates: a new framework for language comprehension / Ina Bornkessel and Matthias Schlesewsky
    Series
    Trends in linguistics. Studies and monographs; 165
  9. Pepper, S.: The typology and semantics of binominal lexemes : noun-noun compounds and their functional equivalents (2020) 0.02
    0.018157914 = product of:
      0.0635527 = sum of:
        0.034846798 = weight(_text_:case in 104) [ClassicSimilarity], result of:
          0.034846798 = score(doc=104,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.1942959 = fieldWeight in 104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=104)
        0.028705904 = weight(_text_:studies in 104) [ClassicSimilarity], result of:
          0.028705904 = score(doc=104,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.17634688 = fieldWeight in 104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=104)
      0.2857143 = coord(2/7)
    
    Abstract
     The dissertation establishes 'binominal lexeme' as a comparative concept and discusses its cross-linguistic typology and semantics. Informally, a binominal lexeme is a noun-noun compound or functional equivalent; more precisely, it is a lexical item that consists primarily of two thing-morphs between which there exists an unstated semantic relation. Examples of binominals include Mandarin Chinese 铁路 (tielù) [iron road], French chemin de fer [way of iron] and Russian железная дорога (zeleznaja doroga) [iron:adjz road]. All of these combine a word denoting 'iron' and a word denoting 'road' or 'way' to denote the meaning railway. In each case, the unstated semantic relation is one of composition: a railway is conceptualized as a road that is composed (or made) of iron. However, three different morphosyntactic strategies are employed: compounding, prepositional phrase and relational adjective. This study explores the range of such strategies used by a worldwide sample of 106 languages to express a set of 100 meanings from various semantic domains, resulting in a classification consisting of nine different morphosyntactic types. The semantic relations found in the data are also explored, and a classification called the Hatcher-Bourque system is developed that operates at two levels of granularity, together with a tool for classifying binominals, the Bourquifier. The classification is extended to other subfields of language, including metonymy and lexical semantics, and beyond language to the domain of knowledge representation, resulting in a proposal for a general model of associative relations called the PHAB model. The many findings of the research include universals concerning the recruitment of anchoring nominal modification strategies, a method for comparing non-binary typologies, the non-universality (despite its predominance) of compounding, and a scale of frequencies for semantic relations which may provide insights into the associative nature of human thought.
    Imprint
    Oslo : University of Oslo / Faculty of Humanities / Department of Linguistics and Scandinavian Studies
  10. Haas, S.W.: A feasibility study of the case hierarchy model for the construction and porting of natural language interfaces (1990) 0.02
    0.0174234 = product of:
      0.1219638 = sum of:
        0.1219638 = weight(_text_:case in 8071) [ClassicSimilarity], result of:
          0.1219638 = score(doc=8071,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.68003565 = fieldWeight in 8071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.109375 = fieldNorm(doc=8071)
      0.14285715 = coord(1/7)
    
  11. Working with conceptual structures : contributions to ICCS 2000. 8th International Conference on Conceptual Structures: Logical, Linguistic, and Computational Issues. Darmstadt, August 14-18, 2000 (2000) 0.02
    0.015888177 = product of:
      0.055608615 = sum of:
        0.03049095 = weight(_text_:case in 5089) [ClassicSimilarity], result of:
          0.03049095 = score(doc=5089,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.17000891 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5089)
        0.025117666 = weight(_text_:studies in 5089) [ClassicSimilarity], result of:
          0.025117666 = score(doc=5089,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.15430352 = fieldWeight in 5089, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5089)
      0.2857143 = coord(2/7)
    
    Abstract
    The 8th International Conference on Conceptual Structures - Logical, Linguistic, and Computational Issues (ICCS 2000) brings together a wide range of researchers and practitioners working with conceptual structures. During the last few years, the ICCS conference series has considerably widened its scope on different kinds of conceptual structures, stimulating research across domain boundaries. We hope that this stimulation is further enhanced by ICCS 2000 joining the long tradition of conferences in Darmstadt with extensive, lively discussions. This volume consists of contributions presented at ICCS 2000, complementing the volume "Conceptual Structures: Logical, Linguistic, and Computational Issues" (B. Ganter, G.W. Mineau (Eds.), LNAI 1867, Springer, Berlin-Heidelberg 2000). It contains submissions reviewed by the program committee, and position papers. We wish to express our appreciation to all the authors of submitted papers, to the general chair, the program chair, the editorial board, the program committee, and to the additional reviewers for making ICCS 2000 a valuable contribution in the knowledge processing research field. Special thanks go to the local organizers for making the conference an enjoyable and inspiring event. We are grateful to Darmstadt University of Technology, the Ernst Schröder Center for Conceptual Knowledge Processing, the Center for Interdisciplinary Studies in Technology, the Deutsche Forschungsgemeinschaft, Land Hessen, and NaviCon GmbH for their generous support
    Content
     Concepts & Language: Knowledge organization by procedures of natural language processing. A case study using the method GABEK (J. Zelger, J. Gadner) - Computer-aided narrative analysis using conceptual graphs (H. Schärfe, P. Øhrstrøm) - Pragmatic representation of argumentative text: a challenge for the conceptual graph approach (H. Irandoust, B. Moulin) - Conceptual graphs as a knowledge representation core in a complex language learning environment (G. Angelova, A. Nenkova, S. Boycheva, T. Nikolov) - Conceptual Modeling and Ontologies: Relationships and actions in conceptual categories (Ch. Landauer, K.L. Bellman) - Concept approximations for formal concept analysis (J. Saquer, J.S. Deogun) - Faceted information representation (U. Priß) - Simple concept graphs with universal quantifiers (J. Tappe) - A framework for comparing methods for using or reusing multiple ontologies in an application (J. van Zyl, D. Corbett) - Designing task/method knowledge-based systems with conceptual graphs (M. Leclère, F. Trichet, Ch. Choquet) - A logical ontology (J. Farkas, J. Sarbo) - Algorithms and Tools: Fast concept analysis (Ch. Lindig) - A framework for conceptual graph unification (D. Corbett) - Visual CP representation of knowledge (H.D. Pfeiffer, R.T. Hartley) - Maximal isojoin for representing software textual specifications and detecting semantic anomalies (Th. Charnois) - Troika: using grids, lattices and graphs in knowledge acquisition (H.S. Delugach, B.E. Lampkin) - Open world theorem prover for conceptual graphs (J.E. Heaton, P. Kocura) - NetCare: a practical conceptual graphs software tool (S. Polovina, D. Strang) - CGWorld - a web-based workbench for conceptual graphs management and applications (P. Dobrev, K. Toutanova) - Position papers: The edition project: Peirce's existential graphs (R. Müller) - Mining association rules using formal concept analysis (N. Pasquier) - Contextual logic summary (R. Wille) - Information channels and conceptual scaling (K.E. Wolff) - Spatial concepts - a rule exploration (S. Rudolph) - The TEXT-TO-ONTO learning environment (A. Mädche, St. Staab) - Controlling the semantics of metadata on audio-visual documents using ontologies (Th. Dechilly, B. Bachimont) - Building the ontological foundations of a terminology from natural language to conceptual graphs with Ribosome, a knowledge extraction system (Ch. Jacquelinet, A. Burgun) - CharGer: some lessons learned and new directions (H.S. Delugach) - Knowledge management using conceptual graphs (W.K. Pun)
  12. Campe, P.: Case, semantic roles, and grammatical relations : a comprehensive bibliography (1994) 0.01
    0.014934343 = product of:
      0.1045404 = sum of:
        0.1045404 = weight(_text_:case in 8663) [ClassicSimilarity], result of:
          0.1045404 = score(doc=8663,freq=2.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.5828877 = fieldWeight in 8663, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.09375 = fieldNorm(doc=8663)
      0.14285715 = coord(1/7)
    
  13. Sharada, B.A.: Identification and interpretation of metaphors in document titles (1999) 0.01
    0.014352952 = product of:
      0.10047066 = sum of:
        0.10047066 = weight(_text_:studies in 6792) [ClassicSimilarity], result of:
          0.10047066 = score(doc=6792,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.6172141 = fieldWeight in 6792, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.109375 = fieldNorm(doc=6792)
      0.14285715 = coord(1/7)
    
    Source
    Library science with a slant to documentation and information studies. 36(1999) no.1, S.27-33
  14. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    0.01249148 = product of:
      0.04372018 = sum of:
        0.03404779 = weight(_text_:libraries in 1616) [ClassicSimilarity], result of:
          0.03404779 = score(doc=1616,freq=8.0), product of:
            0.13401186 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.04079441 = queryNorm
            0.25406548 = fieldWeight in 1616, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1616)
        0.009672389 = product of:
          0.019344779 = sum of:
            0.019344779 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.019344779 = score(doc=1616,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
     The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June in 2000 ("Report: China Internet users double to 17 million," CNN.com, July 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second-largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy the needs of the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research in crossing language boundaries, especially across European and Oriental languages, is still in its initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. In the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in courts and the government. In this paper, we develop an automatic thesaurus by the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. The direct translation of the input term can also be retrieved in most cases.
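As a toy illustration of the co-occurrence step described in the abstract above, the sketch below counts cross-language co-occurrences over a small, entirely hypothetical sentence-aligned English/Chinese corpus and suggests associated terms for an input term. The authors' Hopfield-network spreading activation is not reproduced here, and all corpus data and names are illustrative assumptions, not taken from the paper.

```python
from collections import Counter, defaultdict
from itertools import product

# Hypothetical sentence-aligned English/Chinese segments (illustrative only).
aligned_corpus = [
    (["court", "justice", "ordinance"], ["法院", "司法", "条例"]),
    (["court", "judge"], ["法院", "法官"]),
    (["ordinance", "law"], ["条例", "法律"]),
]

# Count how often a term on one side co-occurs with a term on the other side
# within the same aligned segment.
cooc = defaultdict(Counter)
for en_tokens, zh_tokens in aligned_corpus:
    for en, zh in product(set(en_tokens), set(zh_tokens)):
        cooc[en][zh] += 1
        cooc[zh][en] += 1

def related_terms(term: str, top_n: int = 3):
    """Terms most strongly associated with `term`, regardless of language."""
    return cooc[term].most_common(top_n)

print(related_terms("court"))  # e.g. [('法院', 2), ('司法', 1), ('法官', 1)]
```

A real system would weight these raw counts (for example with a co-occurrence significance measure) and iterate activation over the resulting association network, which is the role the Hopfield network plays in the approach described above.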
  15. McKevitt, P.; Partridge, D.; Wilks, Y.: Why machines should analyse intention in natural language dialogue (1999) 0.01
    0.01230253 = product of:
      0.08611771 = sum of:
        0.08611771 = weight(_text_:studies in 366) [ClassicSimilarity], result of:
          0.08611771 = score(doc=366,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.52904063 = fieldWeight in 366, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.09375 = fieldNorm(doc=366)
      0.14285715 = coord(1/7)
    
    Source
    International journal of human-computer studies. 51(1999) no.5, S.947-989
  16. Subbotin, M.M.: Intellektual'nye tekhnologii poiska i obrabotki tekstovoi informatsii kak instrument podderzhki analiticheskoi deyatel'nosti (1999) 0.01
    0.01230253 = product of:
      0.08611771 = sum of:
        0.08611771 = weight(_text_:studies in 415) [ClassicSimilarity], result of:
          0.08611771 = score(doc=415,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.52904063 = fieldWeight in 415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.09375 = fieldNorm(doc=415)
      0.14285715 = coord(1/7)
    
    Footnote
     Translated title: Application of artificial intelligence tools to processing of text information for analytical studies
  17. Engerer, V.: Exploring interdisciplinary relationships between linguistics and information retrieval from the 1960s to today (2017) 0.01
    0.010560175 = product of:
      0.073921226 = sum of:
        0.073921226 = weight(_text_:case in 3434) [ClassicSimilarity], result of:
          0.073921226 = score(doc=3434,freq=4.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.41216385 = fieldWeight in 3434, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=3434)
      0.14285715 = coord(1/7)
    
    Abstract
     This article explores how linguistics has influenced information retrieval (IR) and attempts to explain the impact of linguistics through an analysis of internal developments in information science generally, and IR in particular. It notes that information science/IR has been evolving from a case science into a fully fledged, "disciplined"/disciplinary science. The article establishes correspondences between linguistics and information science/IR using the three established IR paradigms (physical, cognitive, and computational) as a frame of reference. The current relationship between information science/IR and linguistics is elucidated through discussion of some recent information science publications dealing with linguistic topics, and a novel technique, "keyword collocation analysis," is introduced. Insights from interdisciplinarity research and case theory are also discussed. It is demonstrated that the three stages of interdisciplinarity, namely multidisciplinarity, interdisciplinarity (in the narrow sense), and transdisciplinarity, can be linked to different phases of the information science/IR-linguistics relationship and connected to different ways of using linguistic theory in information science and IR.
  18. Hausser, R.: Language and nonlanguage cognition (2021) 0.01
    0.010560175 = product of:
      0.073921226 = sum of:
        0.073921226 = weight(_text_:case in 255) [ClassicSimilarity], result of:
          0.073921226 = score(doc=255,freq=4.0), product of:
            0.17934912 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.04079441 = queryNorm
            0.41216385 = fieldWeight in 255, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=255)
      0.14285715 = coord(1/7)
    
    Abstract
     A basic distinction in agent-based data-driven Database Semantics (DBS) is between language and nonlanguage cognition. Language cognition transfers content between agents by means of raw data. Nonlanguage cognition maps between content and raw data inside the focus agent. Recognition applies a concept type to raw data, resulting in a concept token. In language recognition, the focus agent (hearer) takes raw language-data (surfaces) produced by another agent (speaker) as input, while nonlanguage recognition takes raw nonlanguage-data as input. In either case, the output is a content which is stored in the agent's onboard short term memory. Action adapts a concept type to a purpose, resulting in a token. In language action, the focus agent (speaker) produces language-dependent surfaces for another agent (hearer), while nonlanguage action produces intentions for a nonlanguage purpose. In either case, the output is raw action data. As long as the procedural implementation of place holder values works properly, it is compatible with the DBS requirement of input-output equivalence between the natural prototype and the artificial reconstruction.
  19. Satta, G.; Stock, O.: Bidirectional context-free grammar parsing for natural language processing (1994) 0.01
    0.010252109 = product of:
      0.07176476 = sum of:
        0.07176476 = weight(_text_:studies in 1443) [ClassicSimilarity], result of:
          0.07176476 = score(doc=1443,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.44086722 = fieldWeight in 1443, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.078125 = fieldNorm(doc=1443)
      0.14285715 = coord(1/7)
    
    Abstract
     While natural language is usually analyzed from left to right, bidirectional parsing is very attractive for both theoretical and practical reasons. Describes a formal framework for bidirectional tabular parsing of general context-free languages, and studies some applications to natural language processing.
  20. Whitelock, P.; Kilby, K.: Linguistic and computational techniques in machine translation system design : 2nd ed (1995) 0.01
    0.010252109 = product of:
      0.07176476 = sum of:
        0.07176476 = weight(_text_:studies in 1750) [ClassicSimilarity], result of:
          0.07176476 = score(doc=1750,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.44086722 = fieldWeight in 1750, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.078125 = fieldNorm(doc=1750)
      0.14285715 = coord(1/7)
    
    Series
    Studies in computational linguistics

Languages

  • e 107
  • d 21
  • m 2
  • f 1
  • ru 1

Types

  • a 102
  • m 15
  • el 8
  • s 8
  • x 5
  • p 4
  • b 1
  • d 1
