Search (232 results, page 2 of 12)

  • × theme_ss:"Computerlinguistik"
  • × year_i:[1990 TO 2000}
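Both filters are Solr field queries: theme_ss is a string facet field and year_i an integer field. In Solr range syntax the bracket [ is inclusive and the brace } exclusive, so year_i:[1990 TO 2000} matches publication years 1990 through 1999. A hypothetical reconstruction of the request behind this page (the endpoint, page size, and parameter names other than the two filter fields shown are assumptions):

    import urllib.parse

    # 232 hits across 12 pages implies 20 rows per page; page 2 starts at 20.
    params = {
        "q": "*:*",
        "fq": [
            'theme_ss:"Computerlinguistik"',  # active facet filter
            "year_i:[1990 TO 2000}",          # 1990 inclusive, 2000 exclusive
        ],
        "start": 20,
        "rows": 20,
        "debugQuery": "true",  # requests the per-document score explanations
    }
    print("/select?" + urllib.parse.urlencode(params, doseq=True))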
  1. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.01
    0.00638597 = product of:
      0.01277194 = sum of:
        0.01277194 = product of:
          0.02554388 = sum of:
            0.02554388 = weight(_text_:22 in 190) [ClassicSimilarity], result of:
              0.02554388 = score(doc=190,freq=2.0), product of:
                0.13204344 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037706986 = queryNorm
                0.19345059 = fieldWeight in 190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=190)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14.04.2007 10:04:22
  2. MacLeod, C.; Grishman, R.; Meyers, A.: COMLEX syntax : a large syntactic dictionary for natural language processing (1998) 0.00
    0.003357759 = product of:
      0.006715518 = sum of:
        0.006715518 = product of:
          0.013431036 = sum of:
            0.013431036 = weight(_text_:a in 3167) [ClassicSimilarity], result of:
              0.013431036 = score(doc=3167,freq=6.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.3089162 = fieldWeight in 3167, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3167)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  3. Sembok, T.M.T.; Rijsbergen, C.J. van: SILOL: a simple logical-linguistic document retrieval system (1990) 0.00
    0.0033233196 = product of:
      0.006646639 = sum of:
        0.006646639 = product of:
          0.013293278 = sum of:
            0.013293278 = weight(_text_:a in 6684) [ClassicSimilarity], result of:
              0.013293278 = score(doc=6684,freq=18.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.30574775 = fieldWeight in 6684, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6684)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes a system called SILOL which is based on a logical-linguistic model of document retrieval. SILOL performs document indexing and retrieval via a shallow semantic translation of natural language texts into a first-order predicate representation. Some preliminary experiments have been carried out to test the retrieval effectiveness of the system. The results show improvements in retrieval effectiveness, demonstrating that the approach of using a semantic theory of natural language and logic in document retrieval systems is a valid one
    Type
    a
  4. Campe, P.: Case, semantic roles, and grammatical relations : a comprehensive bibliography (1994) 0.00
    0.0033233196 = product of:
      0.006646639 = sum of:
        0.006646639 = product of:
          0.013293278 = sum of:
            0.013293278 = weight(_text_:a in 8663) [ClassicSimilarity], result of:
              0.013293278 = score(doc=8663,freq=8.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.30574775 = fieldWeight in 8663, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=8663)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Contains references to more than 6,000 publications, with a subject index and a language index as well as a guide to the relevant languages and language families
  5. Solvberg, I.; Nordbo, I.; Aamodt, A.: Knowledge-based information retrieval (1991/92) 0.00
    0.0031332558 = product of:
      0.0062665115 = sum of:
        0.0062665115 = product of:
          0.012533023 = sum of:
            0.012533023 = weight(_text_:a in 546) [ClassicSimilarity], result of:
              0.012533023 = score(doc=546,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.28826174 = fieldWeight in 546, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=546)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  6. Stede, M.: Lexicalization in natural language generation : a survey (1994/95) 0.00
    0.0031332558 = product of:
      0.0062665115 = sum of:
        0.0062665115 = product of:
          0.012533023 = sum of:
            0.012533023 = weight(_text_:a in 1913) [ClassicSimilarity], result of:
              0.012533023 = score(doc=1913,freq=16.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.28826174 = fieldWeight in 1913, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1913)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In natural language generation, a meaning representation of some kind is successively transformed into a sentence or a text. Naturally, a central subtask of this problem is the choice of words, or lexicalization. Proposes 4 major issues that determine how a generator tackles lexicalization, and surveys the contributions that research has made to them. Identifies open problems, and sketches a possible direction for research
    Type
    a
  7. Czejdo, B.D.; Tucci, R.P.: ¬A dataflow graphical language for database applications (1994) 0.00
    0.0030963202 = product of:
      0.0061926404 = sum of:
        0.0061926404 = product of:
          0.012385281 = sum of:
            0.012385281 = weight(_text_:a in 559) [ClassicSimilarity], result of:
              0.012385281 = score(doc=559,freq=10.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.28486365 = fieldWeight in 559, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=559)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Discusses a graphical language for information retrieval and processing. Much recent activity has occurred in the area of improving access to database systems. However, current results are restricted to simple interfacing of database systems. Proposes a graphical language for specifying complex applications
    Type
    a
  8. Lawson, V.; Vasconcellos, M.: Forty ways to skin a cat : users report on machine translation (1994) 0.00
    0.0028780792 = product of:
      0.0057561584 = sum of:
        0.0057561584 = product of:
          0.011512317 = sum of:
            0.011512317 = weight(_text_:a in 6956) [ClassicSimilarity], result of:
              0.011512317 = score(doc=6956,freq=6.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.26478532 = fieldWeight in 6956, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6956)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In the most extensive survey of machine translation (MT) use ever performed, explores the responses to a questionnaire survey of 40 MT users concerning their experiences
    Type
    a
  9. McKelvie, D.; Brew, C.; Thompson, H.S.: Using SGML as a basis for data-intensive natural language processing (1998) 0.00
    0.0028780792 = product of:
      0.0057561584 = sum of:
        0.0057561584 = product of:
          0.011512317 = sum of:
            0.011512317 = weight(_text_:a in 3147) [ClassicSimilarity], result of:
              0.011512317 = score(doc=3147,freq=6.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.26478532 = fieldWeight in 3147, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3147)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Addresses the advantages and disadvantages of the SGML approach compared with a non-SGML database approach
    Type
    a
  10. Rodriguez, H.; Climent, S.; Vossen, P.; Bloksma, L.; Peters, W.; Alonge, A.; Bertagna, F.; Roventini, A.: ¬The top-down strategy for building EuroWordNet : vocabulary coverage, base concepts and top ontology (1998) 0.00
    0.0028780792 = product of:
      0.0057561584 = sum of:
        0.0057561584 = product of:
          0.011512317 = sum of:
            0.011512317 = weight(_text_:a in 6441) [ClassicSimilarity], result of:
              0.011512317 = score(doc=6441,freq=6.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.26478532 = fieldWeight in 6441, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6441)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  11. Ruge, G.; Schwarz, C.: Term association and computational linguistics (1991) 0.00
    0.0027694327 = product of:
      0.0055388655 = sum of:
        0.0055388655 = product of:
          0.011077731 = sum of:
            0.011077731 = weight(_text_:a in 2310) [ClassicSimilarity], result of:
              0.011077731 = score(doc=2310,freq=8.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.25478977 = fieldWeight in 2310, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Most systems for term associations are statistically based; in general they exploit term co-occurrences. A critical overview of statistical approaches in this field is given. A new approach, based on a linguistic analysis of large amounts of textual data, is outlined
    Type
    a
  12. Driscoll, J.R.; Rajala, D.A.; Shaffer, W.H.: ¬The operation and performance of an artificially intelligent keywording system (1991) 0.00
    0.0027415988 = product of:
      0.0054831975 = sum of:
        0.0054831975 = product of:
          0.010966395 = sum of:
            0.010966395 = weight(_text_:a in 6681) [ClassicSimilarity], result of:
              0.010966395 = score(doc=6681,freq=16.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.25222903 = fieldWeight in 6681, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6681)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Presents a new approach to text analysis for automating the key phrase indexing process, using artificial intelligence techniques. This mimics the behaviour of human experts by using a rule base consisting of insertion and deletion rules generated by subject-matter experts. The insertion rules are based on the idea that some phrases found in a text imply or trigger other phrases. The deletion rules apply to semantically ambiguous phrases where text presence alone does not determine appropriateness as a key phrase. The insertion and deletion rules are used to transform a list of found phrases into a list of key phrases for indexing a document. Statistical data are provided to demonstrate the performance of this expert rule-based system
    Type
    a
  13. Haas, S.W.: ¬A feasibility study of the case hierarchy model for the construction and porting of natural language interfaces (1990) 0.00
    0.0027415988 = product of:
      0.0054831975 = sum of:
        0.0054831975 = product of:
          0.010966395 = sum of:
            0.010966395 = weight(_text_:a in 8071) [ClassicSimilarity], result of:
              0.010966395 = score(doc=8071,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.25222903 = fieldWeight in 8071, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=8071)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  14. Sager, J.C.: ¬A practical course in terminology processing : with a bibliography by Blaise Nkwenti-Azeh (1990) 0.00
    0.0027415988 = product of:
      0.0054831975 = sum of:
        0.0054831975 = product of:
          0.010966395 = sum of:
            0.010966395 = weight(_text_:a in 5028) [ClassicSimilarity], result of:
              0.010966395 = score(doc=5028,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.25222903 = fieldWeight in 5028, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5028)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Fellbaum, C.: ¬A semantic network of English : the mother of all WordNets (1998) 0.00
    0.0027415988 = product of:
      0.0054831975 = sum of:
        0.0054831975 = product of:
          0.010966395 = sum of:
            0.010966395 = weight(_text_:a in 6416) [ClassicSimilarity], result of:
              0.010966395 = score(doc=6416,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.25222903 = fieldWeight in 6416, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6416)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  16. Sharada, B.A.: Identification and interpretation of metaphors in document titles (1999) 0.00
    0.0027415988 = product of:
      0.0054831975 = sum of:
        0.0054831975 = product of:
          0.010966395 = sum of:
            0.010966395 = weight(_text_:a in 6792) [ClassicSimilarity], result of:
              0.010966395 = score(doc=6792,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.25222903 = fieldWeight in 6792, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6792)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Library science with a slant to documentation and information studies. 36(1999) no.1, S.27-33
    Type
    a
  17. Jacquemin, C.: What is the tree that we see through the window : a linguistic approach to windowing and term variation (1996) 0.00
    0.0025645308 = product of:
      0.0051290616 = sum of:
        0.0051290616 = product of:
          0.010258123 = sum of:
            0.010258123 = weight(_text_:a in 5578) [ClassicSimilarity], result of:
              0.010258123 = score(doc=5578,freq=14.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.23593865 = fieldWeight in 5578, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5578)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Provides a linguistic approach to text windowing through an extraction of term variants with the help of a partial parser. The syntactic grounding of the method ensures that words observed within restricted spans are lexically related and that spurious word co-occurrences are ruled out with a good level of confidence. The system is computationally tractable on large corpora and large lists of terms. Gives illustrative examples of term variation from a large medical corpus. An experimental evaluation of the method shows that only a small proportion of co-occurring words are lexically related and motivates the call for natural language parsing techniques in text windowing
    Type
    a
  18. Ekmekcioglu, F.C.; Lynch, M.F.; Willett, P.: Development and evaluation of conflation techniques for the implementation of a document retrieval system for Turkish text databases (1995) 0.00
    0.0025645308 = product of:
      0.0051290616 = sum of:
        0.0051290616 = product of:
          0.010258123 = sum of:
            0.010258123 = weight(_text_:a in 5797) [ClassicSimilarity], result of:
              0.010258123 = score(doc=5797,freq=14.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.23593865 = fieldWeight in 5797, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5797)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Considers language processing techniques necessary for the implementation of a document retrieval system for Turkish text databases. Introduces the main characteristics of the Turkish language. Discusses the development of a stopword list and the evaluation of a stemming algorithm that takes account of the language's morphological structure. A two-level description of Turkish morphology, developed at Bilkent University, Ankara, is incorporated into a morphological parser, PC-KIMMO, to carry out stemming in Turkish databases. Describes the evaluation of string similarity measures - n-gram matching techniques - for Turkish. Reports experiments on 6 different Turkish text corpora
    Type
    a
  19. Kraaij, W.; Pohlmann, R.: Evaluation of a Dutch stemming algorithm (1995) 0.00
    0.0025645308 = product of:
      0.0051290616 = sum of:
        0.0051290616 = product of:
          0.010258123 = sum of:
            0.010258123 = weight(_text_:a in 5798) [ClassicSimilarity], result of:
              0.010258123 = score(doc=5798,freq=14.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.23593865 = fieldWeight in 5798, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5798)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A stemming algorithm can enhance the recall of text retrieval systems. Describes the development of a Dutch version of the Porter stemming algorithm. The stemmer was evaluated using a method drawn from Paice, based on a list of groups of morphologically related words: ideally, all words in a group are stemmed to the same root. The result of applying the stemmer to these groups is used to calculate the understemming and overstemming indices. These parameters, and the diversity of stem group categories that could be generated from the CELEX database, enabled a careful analysis of the effects of each stemming rule. The test suite is highly suited to qualitative comparison of different versions of stemmers
    Type
    a
  20. Greengrass, M.: Conflation methods for searching databases of Latin text (1996) 0.00
    0.0025645308 = product of:
      0.0051290616 = sum of:
        0.0051290616 = product of:
          0.010258123 = sum of:
            0.010258123 = weight(_text_:a in 6987) [ClassicSimilarity], result of:
              0.010258123 = score(doc=6987,freq=14.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.23593865 = fieldWeight in 6987, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6987)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes the results of a project to develop conflation tools for searching databases of Latin text. Reports on the results of a questionnaire sent to 64 users of Latin text retrieval systems. Describes a Latin stemming algorithm that uses a simple longest match with some recoding, but differs from most stemmers in its use of 2 separate suffix dictionaries for processing query and database words. Describes a retrieval system in which users input the principal components of their search terms; these components are stemmed and the resulting stems are matched against the noun-based and verb-based stem dictionaries. Evaluates the system, describes its limitations, and outlines a more complex system
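
The score breakdown attached to each record above is Lucene's "explain" tree under the ClassicSimilarity (TF-IDF) model: tf(freq) = sqrt(freq), idf(docFreq, maxDocs) = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each coord(1/2) factor halves the result because only one of two query clauses matched. As a minimal sketch (the helper functions are ours, not part of the system shown), the figures for entry 3 can be reproduced from the values printed in its tree:

    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # score = queryWeight * fieldWeight, with tf(freq) = sqrt(freq)
        i = idf(doc_freq, max_docs)
        query_weight = i * query_norm                  # boost assumed 1.0
        field_weight = math.sqrt(freq) * i * field_norm
        return query_weight * field_weight

    # Entry 3 (doc 6684, term "a"): freq=18.0, docFreq=37942, maxDocs=44218,
    # queryNorm=0.037706986, fieldNorm=0.0625
    w = term_score(18.0, 37942, 44218, 0.037706986, 0.0625)
    print(w)              # ~0.0132933 = weight(_text_:a in 6684)
    print(w * 0.5 * 0.5)  # ~0.0033233 = final score after two coord(1/2) factors

The same four formulas account for every other breakdown on the page; only freq, docFreq, fieldNorm, and the term's idf change from record to record.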

Languages

Types

  • a 202
  • m 17
  • s 12
  • el 5
  • b 1
  • d 1
  • pat 1
  • r 1

Classifications