Search (40 results, page 1 of 2)

  • × language_ss:"e"
  • × theme_ss:"Computerlinguistik"
  • × year_i:[1990 TO 2000}
  1. Basili, R.; Pazienza, M.T.; Velardi, P.: An empirical symbolic approach to natural language processing (1996) 0.03
    0.03425208 = product of:
      0.06850416 = sum of:
        0.04778411 = weight(_text_:data in 6753) [ClassicSimilarity], result of:
          0.04778411 = score(doc=6753,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.3952563 = fieldWeight in 6753, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=6753)
        0.020720055 = product of:
          0.04144011 = sum of:
            0.04144011 = weight(_text_:22 in 6753) [ClassicSimilarity], result of:
              0.04144011 = score(doc=6753,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.30952093 = fieldWeight in 6753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6753)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Describes and evaluates the results of a large-scale lexical learning system, ARISTO-LEX, that uses a combination of probabilistic and knowledge-based methods for the acquisition of selectional restrictions of words in sublanguages. Presents experimental data obtained from different corpora in different domains and languages, and shows that the acquired lexical data not only have practical applications in natural language processing but are also useful for a comparative analysis of sublanguages
    Date
    6. 3.1997 16:22:15
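    Each ranked entry is followed by a Lucene ClassicSimilarity "explain" tree, and its arithmetic is reproducible: tf(freq) = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, a term's contribution is queryWeight × fieldWeight, and coord(m/n) scales a sum by the fraction of query clauses matched. A minimal sketch reproducing the score of result 1 above (function names are illustrative, not Lucene API):

    ```python
    import math

    def field_weight(freq, idf, field_norm):
        # ClassicSimilarity: tf(freq) = sqrt(freq); fieldWeight = tf * idf * fieldNorm
        return math.sqrt(freq) * idf * field_norm

    def term_score(freq, idf, query_norm, field_norm):
        # queryWeight = idf * queryNorm; term score = queryWeight * fieldWeight
        return (idf * query_norm) * field_weight(freq, idf, field_norm)

    # Values taken from the explain tree of result 1 (doc 6753):
    query_norm = 0.03823278
    s_data = term_score(freq=4.0, idf=3.1620505,
                        query_norm=query_norm, field_norm=0.0625)
    # the "22" clause is wrapped in an inner coord(1/2):
    s_22 = 0.5 * term_score(freq=2.0, idf=3.5018296,
                            query_norm=query_norm, field_norm=0.0625)
    total = (s_data + s_22) * 0.5  # outer coord(2/4)
    print(total)  # ≈ 0.03425208, matching the explain tree
    ```

    The same formula accounts for every tree below; only freq, idf, fieldNorm, and the coord factors change per document.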
  2. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.02
    0.02384748 = product of:
      0.04769496 = sum of:
        0.02956491 = weight(_text_:data in 2345) [ClassicSimilarity], result of:
          0.02956491 = score(doc=2345,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.24455236 = fieldWeight in 2345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2345)
        0.01813005 = product of:
          0.0362601 = sum of:
            0.0362601 = weight(_text_:22 in 2345) [ClassicSimilarity], result of:
              0.0362601 = score(doc=2345,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.2708308 = fieldWeight in 2345, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2345)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  3. Rahmstorf, G.: Concept structures for large vocabularies (1998) 0.02
    0.020440696 = product of:
      0.04088139 = sum of:
        0.02534135 = weight(_text_:data in 75) [ClassicSimilarity], result of:
          0.02534135 = score(doc=75,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.2096163 = fieldWeight in 75, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=75)
        0.015540041 = product of:
          0.031080082 = sum of:
            0.031080082 = weight(_text_:22 in 75) [ClassicSimilarity], result of:
              0.031080082 = score(doc=75,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.23214069 = fieldWeight in 75, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=75)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A technology is described which supports the acquisition, visualisation and manipulation of large vocabularies with associated structures. It is used for dictionary production, terminology databases, thesauri, library classification systems, etc. Essential features of the technology are a lexicographic user interface, variable word description, an unlimited list of word readings, a concept language, automatic transformation of formulas into graphic structures, structure manipulation operations and retransformation into formulas. The concept language includes notations for undefined concepts. The structure of defined concepts can be constructed interactively. The technology supports the generation of large vocabularies with structures representing word senses. Concept structures and ordering systems for indexing and retrieval can be constructed separately and connected by associating relations.
    Date
    30.12.2001 19:01:22
  4. WordNet : an electronic lexical database (language, speech and communication) (1998) 0.02
    0.018104734 = product of:
      0.072418936 = sum of:
        0.072418936 = weight(_text_:data in 2434) [ClassicSimilarity], result of:
          0.072418936 = score(doc=2434,freq=12.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.59902847 = fieldWeight in 2434, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2434)
      0.25 = coord(1/4)
    
    LCSH
    Semantics / Data processing
    Lexicology / Data processing
    English language / Data processing
  5. Ruge, G.: Experiments on linguistically-based term associations (1992) 0.01
    0.014630836 = product of:
      0.058523346 = sum of:
        0.058523346 = weight(_text_:data in 1810) [ClassicSimilarity], result of:
          0.058523346 = score(doc=1810,freq=6.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.48408815 = fieldWeight in 1810, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=1810)
      0.25 = coord(1/4)
    
    Abstract
    Describes the hyperterm system REALIST (REtrieval Aids by LInguistic and STatistics) and its semantic component. The semantic component of REALIST generates semantic term relations such as synonyms. It takes as input a free-text database and generates as output term pairs that are semantically related with respect to their meanings in the database. In the first step, an automatic syntactic analysis provides linguistic knowledge about the terms of the database. In the second step, this knowledge is compared by statistical similarity computation. Various experiments with different similarity measures are described
  6. McKelvie, D.; Brew, C.; Thompson, H.S.: Using SGML as a basis for data-intensive natural language processing (1998) 0.01
    0.012670675 = product of:
      0.0506827 = sum of:
        0.0506827 = weight(_text_:data in 3147) [ClassicSimilarity], result of:
          0.0506827 = score(doc=3147,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.4192326 = fieldWeight in 3147, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=3147)
      0.25 = coord(1/4)
    
  7. Owei, V.; Higa, K.: A paradigm for natural language explanation of database queries : a semantic data model approach (1994) 0.01
    0.011946027 = product of:
      0.04778411 = sum of:
        0.04778411 = weight(_text_:data in 8189) [ClassicSimilarity], result of:
          0.04778411 = score(doc=8189,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.3952563 = fieldWeight in 8189, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=8189)
      0.25 = coord(1/4)
    
    Abstract
    Describes an interface that provides the user with automatic feedback in the form of an explanation of how the database management system interprets user-specified queries. Proposes an approach that exploits the rich semantics of graphical semantic data models to construct a restricted natural-language explanation of database queries that are specified in a very high-level declarative form. These interpretations of the specified query represent the system's 'understanding' of the query and are returned to the user for validation
  8. Fox, C.: Lexical analysis and stoplists (1992) 0.01
    0.011946027 = product of:
      0.04778411 = sum of:
        0.04778411 = weight(_text_:data in 3502) [ClassicSimilarity], result of:
          0.04778411 = score(doc=3502,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.3952563 = fieldWeight in 3502, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=3502)
      0.25 = coord(1/4)
    
    Abstract
    Lexical analysis is a fundamental operation in both query processing and automatic indexing, and filtering stoplist words is an important step in the automatic indexing process. Presents basic algorithms and data structures for lexical analysis, and shows how stoplist word removal can be efficiently incorporated into lexical analysis
    Source
    Information retrieval: data structures and algorithms. Ed.: W.B. Frakes u. R. Baeza-Yates
  9. Ruge, G.; Schwarz, C.: Term association and computational linguistics (1991) 0.01
    0.010558897 = product of:
      0.042235587 = sum of:
        0.042235587 = weight(_text_:data in 2310) [ClassicSimilarity], result of:
          0.042235587 = score(doc=2310,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.34936053 = fieldWeight in 2310, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=2310)
      0.25 = coord(1/4)
    
    Abstract
    Most systems for term associations are statistically based; in general they exploit term co-occurrences. A critical overview of statistical approaches in this field is given. A new approach, based on a linguistic analysis of large amounts of textual data, is outlined
  10. Roberts, C.W.; Popping, R.: Computer-supported content analysis : some recent developments (1993) 0.01
    0.010558897 = product of:
      0.042235587 = sum of:
        0.042235587 = weight(_text_:data in 4236) [ClassicSimilarity], result of:
          0.042235587 = score(doc=4236,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.34936053 = fieldWeight in 4236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=4236)
      0.25 = coord(1/4)
    
    Abstract
    Presents an overview of some recent developments in the clause-based content analysis of linguistic data. Introduces network analysis of evaluative texts, for the analysis of cognitive maps, and linguistic content analysis. Focuses on the types of substantive inferences afforded by the three approaches
  11. Griffith, C.: FREESTYLE: LEXIS-NEXIS goes natural (1994) 0.01
    0.010558897 = product of:
      0.042235587 = sum of:
        0.042235587 = weight(_text_:data in 2512) [ClassicSimilarity], result of:
          0.042235587 = score(doc=2512,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.34936053 = fieldWeight in 2512, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=2512)
      0.25 = coord(1/4)
    
    Abstract
    Describes FREESTYLE, the associative-language search engine developed by Mead Data Central for its LEXIS/NEXIS online service. A special feature of the associative language in FREESTYLE is that it allows users to enter search descriptions in plain English
  12. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.01
    0.009065025 = product of:
      0.0362601 = sum of:
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.0725202 = score(doc=3164,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  13. Ruge, G.: A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    0.009065025 = product of:
      0.0362601 = sum of:
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.0725202 = score(doc=4506,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    8.10.2000 11:52:22
  14. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    0.009065025 = product of:
      0.0362601 = sum of:
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.0725202 = score(doc=6672,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    31. 7.1996 9:22:19
  15. New tools for human translators (1997) 0.01
    0.009065025 = product of:
      0.0362601 = sum of:
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.0725202 = score(doc=1179,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    31. 7.1996 9:22:19
  16. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.01
    0.009065025 = product of:
      0.0362601 = sum of:
        0.0362601 = product of:
          0.0725202 = sum of:
            0.0725202 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.0725202 = score(doc=3117,freq=2.0), product of:
                0.13388468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03823278 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    28. 2.1999 10:48:22
  17. Rahmstorf, G.: Information retrieval using conceptual representations of phrases (1994) 0.01
    0.008959521 = product of:
      0.035838082 = sum of:
        0.035838082 = weight(_text_:data in 7862) [ClassicSimilarity], result of:
          0.035838082 = score(doc=7862,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.29644224 = fieldWeight in 7862, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
      0.25 = coord(1/4)
    
    Series
    Studies in classification, data analysis, and knowledge organization
    Source
    Information systems and data analysis: prospects - foundations - applications. Proc. of the 17th Annual Conference of the Gesellschaft für Klassifikation, Kaiserslautern, March 3-5, 1993. Ed.: H.-H. Bock et al
  18. Ingenerf, J.: Disambiguating lexical meaning : conceptual meta-modelling as a means of controlling semantic language analysis (1994) 0.01
    0.008959521 = product of:
      0.035838082 = sum of:
        0.035838082 = weight(_text_:data in 2572) [ClassicSimilarity], result of:
          0.035838082 = score(doc=2572,freq=4.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.29644224 = fieldWeight in 2572, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2572)
      0.25 = coord(1/4)
    
    Series
    Studies in classification, data analysis, and knowledge organization
    Source
    Information systems and data analysis: prospects - foundations - applications. Proc. of the 17th Annual Conference of the Gesellschaft für Klassifikation, Kaiserslautern, March 3-5, 1993. Ed.: H.-H. Bock et al
  19. Rahmstorf, G.: Compositional semantics and concept representation (1991) 0.01
    0.008447117 = product of:
      0.03378847 = sum of:
        0.03378847 = weight(_text_:data in 6673) [ClassicSimilarity], result of:
          0.03378847 = score(doc=6673,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.2794884 = fieldWeight in 6673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=6673)
      0.25 = coord(1/4)
    
    Source
    Classification, data analysis, and knowledge organization: models and methods with applications. Proc. of the 14th annual conf. of the Gesellschaft für Klassifikation, Univ. of Marburg, 12.-14.3.1990. Ed.: H.-H. Bock u. P. Ihm
  20. Frakes, W.B.: Stemming algorithms (1992) 0.01
    0.008447117 = product of:
      0.03378847 = sum of:
        0.03378847 = weight(_text_:data in 3503) [ClassicSimilarity], result of:
          0.03378847 = score(doc=3503,freq=2.0), product of:
            0.120893985 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03823278 = queryNorm
            0.2794884 = fieldWeight in 3503, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=3503)
      0.25 = coord(1/4)
    
    Source
    Information retrieval: data structures and algorithms. Ed.: W.B. Frakes u. R. Baeza-Yates