Search (202 results, page 2 of 11)

  • × theme_ss:"Computerlinguistik"
  • × type_ss:"a"
  • × year_i:[1990 TO 2000}
  1. Stede, M.: Lexicalization in natural language generation : a survey (1994/95) 0.00
    0.003827074 = product of:
      0.007654148 = sum of:
        0.007654148 = product of:
          0.015308296 = sum of:
            0.015308296 = weight(_text_:a in 1913) [ClassicSimilarity], result of:
              0.015308296 = score(doc=1913,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.28826174 = fieldWeight in 1913, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1913)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
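The tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output. As an illustrative sketch (the function name is ours, not a Lucene API), the leaf score can be recomputed from its components:

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute a Lucene ClassicSimilarity leaf score from explain() components."""
    tf = math.sqrt(freq)                            # 4.0 = tf(freq=16.0)
    idf = 1 + math.log(max_docs / (doc_freq + 1))   # 1.153047 = idf(docFreq=37942, maxDocs=44218)
    query_weight = idf * query_norm                 # 0.053105544 = queryWeight
    field_weight = tf * idf * field_norm            # 0.28826174 = fieldWeight
    return query_weight * field_weight              # 0.015308296 = weight(_text_:a in 1913)

score = classic_similarity_score(16.0, 37942, 44218, 0.046056706, 0.0625)
final = score * 0.5 * 0.5   # the two coord(1/2) factors yield 0.003827074
```

The two `coord(1/2)` factors reflect that only one of two query clauses matched, which is why all displayed totals round to 0.00.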
    
    Abstract
    In natural language generation, a meaning representation of some kind is successively transformed into a sentence or a text. Naturally, a central subtask of this problem is the choice of words, or lexicalization. Proposes 4 major issues that determine how a generator tackles lexicalization, and surveys the contributions that research has made to them. Identifies open problems, and sketches a possible direction for research
    Type
    a
  2. Czejdo, B.D.; Tucci, R.P.: ¬A dataflow graphical language for database applications (1994) 0.00
    
    Abstract
    Discusses a graphical language for information retrieval and processing. Much recent activity has occurred in the area of improving access to database systems; however, current results are restricted to simple interfacing of database systems. Proposes a graphical language for specifying complex applications
    Type
    a
  3. Lawson, V.; Vasconcellos, M.: Forty ways to skin a cat : users report on machine translation (1994) 0.00
    
    Abstract
    In the most extensive survey of machine translation (MT) use ever performed, explores the responses to a questionnaire survey of 40 MT users concerning their experiences
    Type
    a
  4. McKelvie, D.; Brew, C.; Thompson, H.S.: Using SGML as a basis for data-intensive natural language processing (1998) 0.00
    
    Abstract
    Addresses the advantages and disadvantages of the SGML approach compared with a non-SGML database approach
    Type
    a
  5. Rodriguez, H.; Climent, S.; Vossen, P.; Bloksma, L.; Peters, W.; Alonge, A.; Bertagna, F.; Roventini, A.: ¬The top-down strategy for building EuroWordNet : vocabulary coverage, base concept and top ontology (1998) 0.00
    
    Type
    a
  6. Ruge, G.; Schwarz, C.: Term association and computational linguistics (1991) 0.00
    
    Abstract
    Most systems for term associations are statistically based. In general they exploit term co-occurrences. A critical overview of statistical approaches in this field is given. A new approach on the basis of a linguistic analysis for large amounts of textual data is outlined
    Type
    a
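The statistical approaches surveyed in the entry above start from term co-occurrence counts. A minimal sketch of such counting (illustrative only, not the authors' system):

```python
from collections import Counter
from itertools import combinations

def cooccurrences(sentences):
    """Count unordered term pairs that co-occur within the same sentence."""
    pairs = Counter()
    for sent in sentences:
        terms = sorted(set(sent.lower().split()))
        pairs.update(combinations(terms, 2))   # each pair counted once per sentence
    return pairs

docs = ["term association uses statistics",
        "linguistic analysis of term association"]
counts = cooccurrences(docs)
# ('association', 'term') co-occurs in both sentences
```

A linguistically based alternative, as the abstract argues, would restrict such pairs to words standing in a syntactic relation rather than mere adjacency.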
  7. Driscoll, J.R.; Rajala, D.A.; Shaffer, W.H.: ¬The operation and performance of an artificially intelligent keywording system (1991) 0.00
    
    Abstract
    Presents a new approach to text analysis for automating the key phrase indexing process, using artificial intelligence techniques. This mimics the behaviour of human experts by using a rule base consisting of insertion and deletion rules generated by subject-matter experts. The insertion rules are based on the idea that some phrases found in a text imply or trigger other phrases. The deletion rules apply to semantically ambiguous phrases where text presence alone does not determine appropriateness as a key phrase. The insertion and deletion rules are used to transform a list of found phrases to a list of key phrases for indexing a document. Statistical data are provided to demonstrate the performance of this expert rule-based system
    Type
    a
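The insertion/deletion rule mechanism described in the entry above is essentially a list transformation. A toy sketch under assumed rule shapes (the actual rule base is authored by subject-matter experts):

```python
def apply_rules(found, insert_rules, delete_rules):
    """Transform found phrases into key phrases: insertion rules trigger
    implied phrases, deletion rules drop semantically ambiguous ones."""
    phrases = set(found)
    for trigger, implied in insert_rules.items():
        if trigger in phrases:
            phrases.add(implied)
    return sorted(phrases - set(delete_rules))

keys = apply_rules(
    found=["neural network", "bank"],
    insert_rules={"neural network": "machine learning"},
    delete_rules=["bank"],   # ambiguous without further context
)
# ['machine learning', 'neural network']
```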
  8. Haas, S.W.: ¬A feasibility study of the case hierarchy model for the construction and porting of natural language interfaces (1990) 0.00
    
    Type
    a
  9. Fellbaum, C.: ¬A semantic network of English : the mother of all WordNets (1998) 0.00
    
    Type
    a
  10. Sharada, B.A.: Identification and interpretation of metaphors in document titles (1999) 0.00
    
    Source
    Library science with a slant to documentation and information studies. 36(1999) no.1, S.27-33
    Type
    a
  11. Jacquemin, C.: What is the tree that we see through the window : a linguistic approach to windowing and term variation (1996) 0.00
    
    Abstract
    Provides a linguistic approach to text windowing through an extraction of term variants with the help of a partial parser. The syntactic grounding of the method ensures that words observed within restricted spans are lexically related and that spurious word co-occurrences are ruled out with a good level of confidence. The system is computationally tractable on large corpora and large lists of terms. Gives illustrative examples of term variation from a large medical corpus. An experimental evaluation of the method shows that only a small proportion of co-occurring words are lexically related and motivates the call for natural language parsing techniques in text windowing
    Type
    a
  12. Ekmekcioglu, F.C.; Lynch, M.F.; Willett, P.: Development and evaluation of conflation techniques for the implementation of a document retrieval system for Turkish text databases (1995) 0.00
    
    Abstract
    Considers language processing techniques necessary for the implementation of a document retrieval system for Turkish text databases. Introduces the main characteristics of the Turkish language. Discusses the development of a stopword list and the evaluation of a stemming algorithm that takes account of the language's morphological structure. A 2 level description of Turkish morphology developed in Bilkent University, Ankara, is incorporated into a morphological parser, PC-KIMMO, to carry out stemming in Turkish databases. Describes the evaluation of string similarity measures - n-gram matching techniques - for Turkish. Reports experiments on 6 different Turkish text corpora
    Type
    a
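The n-gram matching techniques evaluated in the entry above are commonly implemented as a Dice coefficient over character-bigram sets. A hedged sketch (one common formulation, not necessarily the exact measure used in the paper):

```python
def bigrams(word):
    """Set of character bigrams of a word."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice_similarity(a, b):
    """Dice coefficient on character-bigram sets: 2|A∩B| / (|A| + |B|)."""
    ba, bb = bigrams(a.lower()), bigrams(b.lower())
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

sim = dice_similarity("kitaplar", "kitap")   # Turkish: 'books' vs. 'book'
```

Such measures conflate morphological variants without a language-specific stemmer, which is why they are attractive for an agglutinative language like Turkish.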
  13. Kraaij, W.; Pohlmann, R.: Evaluation of a Dutch stemming algorithm (1995) 0.00
    
    Abstract
    A stemming algorithm enables the recall of text retrieval systems to be enhanced. Describes the development of a Dutch version of the Porter stemming algorithm. The stemmer was evaluated using a method drawn from Paice. The evaluation method is based on a list of groups of morphologically related words. Ideally, each group must be stemmed to the same root. The result of applying the stemmer to these groups of words is used to calculate the understemming and overstemming index. These parameters and the diversity of stem group categories that could be generated from the CELEX database enabled a careful analysis of the effects of each stemming rule. The test suite is highly suited to qualitative comparison of different versions of stemmers
    Type
    a
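The Paice-style evaluation described in the entry above works on groups of morphologically related words that should all reduce to the same stem. A simplified pairwise sketch (the published indices, UI and OI, are normalized differently; this only counts the raw error pairs):

```python
from itertools import combinations

def count_errors(groups, stem):
    """Count understemming pairs (same group, different stems) and
    overstemming pairs (different groups, same stem)."""
    under = sum(1 for g in groups
                  for w1, w2 in combinations(g, 2)
                  if stem(w1) != stem(w2))
    all_words = [w for g in groups for w in g]
    group_of = {w: i for i, g in enumerate(groups) for w in g}
    over = sum(1 for w1, w2 in combinations(all_words, 2)
                 if group_of[w1] != group_of[w2] and stem(w1) == stem(w2))
    return under, over

groups = [["walk", "walks", "walking"], ["walker", "walkers"]]
naive = lambda w: w[:4]   # toy stemmer: truncate to 4 characters
under, over = count_errors(groups, naive)
# the toy stemmer maps everything to 'walk': no understemming, heavy overstemming
```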
  14. Dorr, B.J.; Garman, J.; Weinberg, A.: From syntactic encodings to thematic roles : building lexical entries for interlingual MT (1994/95) 0.00
    
    Abstract
    Aims to construct large scale lexicons for interlingual machine translation of English, Arabic, Korean, and Spanish. Describes techniques that predict salient linguistic features of a non-English word using the features of its English gloss in a bilingual dictionary. While not exact, owing to inexact glosses and language-to-language variations, these techniques can augment an existing dictionary with reasonable accuracy, thus saving significant time. Conducts 2 experiments that demonstrate the value of these techniques. The 1st tests the feasibility of building a database of thematic grids for over 6500 Arabic verbs based on a mapping between English glosses and the syntactic codes in Longman's Dictionary of Contemporary English (LDOCE). The 2nd experiment tested the automatic classification of verbs into a richer semantic typology from which a more refined set of thematic grids is derived. While human intervention will always be necessary for the construction of a semantic classification from LDOCE, such intervention is significantly minimized as more knowledge about the syntax-semantics relation is introduced
    Type
    a
  15. Ingenerf, J.: Disambiguating lexical meaning : conceptual meta-modelling as a means of controlling semantic language analysis (1994) 0.00
    
    Abstract
    A formal terminology consists of a set of conceptual definitions for the semantic reconstruction of a vocabulary on an intensional level of description. The marking of comparatively abstract concepts as semantic categories and their relational positioning on a meta-level is shown to be instrumental in adapting the conceptual design to domain-specific characteristics. Such a meta-model implies that concepts subsumed by categories may share their compositional possibilities as regards the construction of complex structures. Our approach to language processing leads to an automatic derivation of contextual semantic information about the linguistic expressions under review. This information is encoded by means of values of certain attributes defined in a feature-based grammatical framework. A standard process controlling grammatical analysis, the unification of feature structures, is used for its evaluation. One important example of the usefulness of this approach is the disambiguation of lexical meaning
    Type
    a
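Unification of feature structures, the control process named in the entry above, can be sketched as recursive merging of attribute-value maps. This is a simplification that ignores re-entrancy (shared substructures):

```python
def unify(f1, f2):
    """Unify two feature structures (nested dicts); return None on clash."""
    if isinstance(f1, dict) and isinstance(f2, dict):
        result = dict(f1)
        for attr, val in f2.items():
            if attr in result:
                merged = unify(result[attr], val)
                if merged is None:
                    return None          # feature clash: unification fails
                result[attr] = merged
            else:
                result[attr] = val
        return result
    return f1 if f1 == f2 else None      # atomic values must match exactly

a = {"cat": "np", "agr": {"num": "sg"}}
b = {"agr": {"num": "sg", "per": 3}}
u = unify(a, b)   # {'cat': 'np', 'agr': {'num': 'sg', 'per': 3}}
```

Disambiguation then falls out for free: a lexical reading whose semantic category clashes with the context simply fails to unify.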
  16. Ruge, G.; Schwarz, C.: Linguistically based term associations : a new semantic component for a hyperterm system (1990) 0.00
    
    Abstract
    REALIST (Retrieval Aids by Linguistics and Statistics) is a tool which supplies the user of free text information retrieval systems with information about the terms in the databases. The resulting tables of terms show term relations according to their meaning in the database and form a kind of 'road map' of the database to give the user orientation help
    Type
    a
  17. Hsinchun, C.: Knowledge-based document retrieval framework and design (1992) 0.00
    
    Abstract
    Presents research on the design of knowledge-based document retrieval systems in which a semantic network was adopted to represent subject knowledge and classification scheme knowledge, while experts' search strategies and user modelling capability were modelled as procedural knowledge. These functionalities were incorporated into a prototype knowledge-based retrieval system, Metacat. Describes a system, the design of which was based on the blackboard architecture, which was able to create a user profile, identify task requirements, suggest heuristics-based search strategies, perform semantic-based search assistance, and assist online query refinement
    Type
    a
  18. Owei, V.; Higa, K.: ¬A paradigm for natural language explanation of database queries : a semantic data model approach (1994) 0.00
    
    Abstract
    Describes an interface that provides the user with automatic feedback in the form of an explanation of how the database management system interprets user-specified queries. Proposes an approach that exploits the rich semantics of graphical semantic data models to construct a restricted natural language explanation of database queries that are specified in a very high level declarative form. These interpretations of the specified query represent the system's 'understanding' of the query, and are returned to the user for validation
    Type
    a
  19. Senez, D.: Developments in Systran (1995) 0.00
    
    Abstract
    Systran, the European Commission's multilingual machine translation system, is a fast service which is available to all Commission officials. The computer cannot match the skills of the professional translator, who must continue to be responsible for all texts which are legally binding or which are for publication. But machine translation can deal, in a matter of minutes, with short-lived documents, designed, say, for information or preparatory work, and which are required urgently. It can also give a broad view of a paper in an unfamiliar language, so that an official can decide how much, if any, of it needs to go to translators
    Type
    a
  20. Copestake, A.: Acquisition of lexical translation relations from MRDs (1994/95) 0.00
    
    Abstract
    Presents a methodology for extracting information about lexical translation equivalences from the machine readable versions of conventional dictionaries (MRDs), and describes a series of experiments on semi-automatic construction of a linked multilingual lexical knowledge base for English, Dutch and Spanish. Discusses the advantages and limitations of using MRDs that this has revealed, and some strategies developed to cover gaps where no direct translation can be found
    Type
    a

Languages