Search (91 results, page 1 of 5)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  • year_i:[1990 TO 2000}
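The three active facet filters above translate directly into filter queries. The *_ss/*_i field suffixes and the Lucene explain trees below suggest a Solr-backed catalog, so here is a minimal sketch of how the same result set could be requested; the endpoint URL and core name are hypothetical, as the page does not reveal them. Note the half-open range year_i:[1990 TO 2000}: '[' includes 1990, '}' excludes 2000.

```python
import requests

# Hypothetical Solr endpoint; nothing on this page names the real one.
SOLR_URL = "http://localhost:8983/solr/catalog/select"

params = {
    "q": "*:*",
    # Each active facet becomes one fq clause (requests repeats the parameter).
    "fq": [
        'language_ss:"e"',
        'theme_ss:"Computerlinguistik"',
        "year_i:[1990 TO 2000}",  # half-open range: 1990 in, 2000 out
    ],
    "rows": 20,   # 91 results on 5 pages implies about 20 rows per page
    "wt": "json",
}

response = requests.get(SOLR_URL, params=params, timeout=10)
print(response.json()["response"]["numFound"])  # expected: 91
```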
  1. Basili, R.; Pazienza, M.T.; Velardi, P.: An empirical symbolic approach to natural language processing (1996) 0.10
    0.1026023 = product of:
      0.2052046 = sum of:
        0.058525857 = weight(_text_:data in 6753) [ClassicSimilarity], result of:
          0.058525857 = score(doc=6753,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3952563 = fieldWeight in 6753, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=6753)
        0.14667875 = sum of:
          0.095923 = weight(_text_:processing in 6753) [ClassicSimilarity], result of:
            0.095923 = score(doc=6753,freq=4.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.5060184 = fieldWeight in 6753, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0625 = fieldNorm(doc=6753)
          0.050755743 = weight(_text_:22 in 6753) [ClassicSimilarity], result of:
            0.050755743 = score(doc=6753,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.30952093 = fieldWeight in 6753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=6753)
      0.5 = coord(2/4)
    
    Abstract
    Describes and evaluates the results of a large-scale lexical learning system, ARISTO-LEX, that uses a combination of probabilistic and knowledge-based methods for the acquisition of selectional restrictions of words in sublanguages. Presents experimental data obtained from different corpora in different domains and languages, and shows that the acquired lexical data not only have practical applications in natural language processing, but are also useful for a comparative analysis of sublanguages
    Date
    6. 3.1997 16:22:15
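Each entry is followed by Lucene's ClassicSimilarity (TF-IDF) explain tree for its relevance score; since every tree on this page has the same shape, one reconstruction suffices. The sketch below recomputes the 0.1026023 score of result 1 from the constants shown in its tree (docFreq, maxDocs, freq, queryNorm, fieldNorm); only the standard ClassicSimilarity formulas are supplied here.

```python
import math

# Reconstructs the explain tree of result 1 (doc 6753). All constants are
# copied from the tree above; tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)).

QUERY_NORM = 0.046827413
FIELD_NORM = 0.0625  # fieldNorm(doc=6753)
MAX_DOCS = 44218

def idf(doc_freq: int) -> float:
    return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

def term_score(doc_freq: int, freq: float) -> float:
    i = idf(doc_freq)
    query_weight = i * QUERY_NORM                    # queryWeight = idf * queryNorm
    field_weight = math.sqrt(freq) * i * FIELD_NORM  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight               # score = queryWeight * fieldWeight

data       = term_score(doc_freq=5088, freq=4.0)  # ~0.058525857
processing = term_score(doc_freq=2097, freq=4.0)  # ~0.095923
term_22    = term_score(doc_freq=3622, freq=2.0)  # ~0.050755743

# Only 2 of the 4 query clauses matched, hence the final coord(2/4) = 0.5 factor.
print(f"{(data + processing + term_22) * 0.5:.7f}")  # ~0.1026023
```

The recurring fieldNorm values (0.0625, 0.0546875, 0.046875, ...) are Lucene's byte-quantized length norms, roughly 1/sqrt(number of terms in the field), which is why only a handful of distinct values appear across all twenty trees.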
  2. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.09
    0.09170918 = product of:
      0.18341836 = sum of:
        0.036211025 = weight(_text_:data in 2345) [ClassicSimilarity], result of:
          0.036211025 = score(doc=2345,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 2345, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2345)
        0.14720733 = sum of:
          0.10279606 = weight(_text_:processing in 2345) [ClassicSimilarity], result of:
            0.10279606 = score(doc=2345,freq=6.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.54227555 = fieldWeight in 2345, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2345)
          0.044411276 = weight(_text_:22 in 2345) [ClassicSimilarity], result of:
            0.044411276 = score(doc=2345,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.2708308 = fieldWeight in 2345, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2345)
      0.5 = coord(2/4)
    
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD), which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  3. WordNet : an electronic lexical database (language, speech and communication) (1998) 0.08
    0.08069316 = product of:
      0.16138633 = sum of:
        0.088698536 = weight(_text_:data in 2434) [ClassicSimilarity], result of:
          0.088698536 = score(doc=2434,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.59902847 = fieldWeight in 2434, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2434)
        0.07268779 = product of:
          0.14537558 = sum of:
            0.14537558 = weight(_text_:processing in 2434) [ClassicSimilarity], result of:
              0.14537558 = score(doc=2434,freq=12.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7668934 = fieldWeight in 2434, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2434)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    LCSH
    Semantics / Data processing
    Lexicology / Data processing
    English language / Data processing
    Subject
    Semantics / Data processing
    Lexicology / Data processing
    English language / Data processing
  4. McKelvie, D.; Brew, C.; Thompson, H.S.: Using SGML as a basis for data-intensive natural language processing (1998) 0.06
    0.05647345 = product of:
      0.1129469 = sum of:
        0.062076043 = weight(_text_:data in 3147) [ClassicSimilarity], result of:
          0.062076043 = score(doc=3147,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 3147, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=3147)
        0.05087085 = product of:
          0.1017417 = sum of:
            0.1017417 = weight(_text_:processing in 3147) [ClassicSimilarity], result of:
              0.1017417 = score(doc=3147,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.53671354 = fieldWeight in 3147, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3147)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
  5. Ruge, G.: Experiments on linguistically-based term associations (1992) 0.05
    0.052796576 = product of:
      0.10559315 = sum of:
        0.07167925 = weight(_text_:data in 1810) [ClassicSimilarity], result of:
          0.07167925 = score(doc=1810,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.48408815 = fieldWeight in 1810, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=1810)
        0.033913903 = product of:
          0.067827806 = sum of:
            0.067827806 = weight(_text_:processing in 1810) [ClassicSimilarity], result of:
              0.067827806 = score(doc=1810,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.35780904 = fieldWeight in 1810, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1810)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Describes the hyperterm system REALIST (REtrieval Aids by LInguistic and STatistics) and its semantic component. The semantic component of REALIST generates semantic term relations such as synonyms. It takes as input a free-text data base and generates as output term pairs that are semantically related with respect to their meanings in the data base. In the 1st step, an automatic syntactic analysis provides linguistic knowledge about the terms of the data base. In the 2nd step, this knowledge is compared by statistical similarity computation. Various experiments with different similarity measures are described
    Source
    Information processing and management. 28(1992) no.3, p.317-332
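The two-step pipeline in entry 5's abstract (syntactic analysis collects per-term context data, then a statistical similarity computation compares it) can be illustrated compactly. The toy context counts and the cosine measure below are assumptions for illustration, not REALIST's actual data or similarity measure:

```python
from collections import Counter
from math import sqrt

# Step 1 would come from a parser; here we fake its output: syntactic
# co-occurrence counts per term (invented toy data).
contexts = {
    "retrieval": Counter({"information": 5, "document": 3, "system": 2}),
    "search":    Counter({"information": 4, "document": 2, "engine": 3}),
    "cat":       Counter({"fur": 4, "tail": 2}),
}

def cosine(a: Counter, b: Counter) -> float:
    # Step 2: statistical similarity over the context vectors.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

terms = sorted(contexts)
pairs = [(s, t, cosine(contexts[s], contexts[t]))
         for i, s in enumerate(terms) for t in terms[i + 1:]]
for s, t, sim in sorted(pairs, key=lambda p: -p[2]):
    print(f"{s} ~ {t}: {sim:.2f}")  # 'retrieval ~ search' scores highest
```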
  6. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.05
    0.050605834 = product of:
      0.20242333 = sum of:
        0.20242333 = sum of:
          0.1516676 = weight(_text_:processing in 7415) [ClassicSimilarity], result of:
            0.1516676 = score(doc=7415,freq=10.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.80008537 = fieldWeight in 7415, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0625 = fieldNorm(doc=7415)
          0.050755743 = weight(_text_:22 in 7415) [ClassicSimilarity], result of:
            0.050755743 = score(doc=7415,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.30952093 = fieldWeight in 7415, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=7415)
      0.25 = coord(1/4)
    
    Abstract
    State-of-the-art review of natural language processing, updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge-based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly
  7. Fox, C.: Lexical analysis and stoplists (1992) 0.05
    0.046219878 = product of:
      0.092439756 = sum of:
        0.058525857 = weight(_text_:data in 3502) [ClassicSimilarity], result of:
          0.058525857 = score(doc=3502,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3952563 = fieldWeight in 3502, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=3502)
        0.033913903 = product of:
          0.067827806 = sum of:
            0.067827806 = weight(_text_:processing in 3502) [ClassicSimilarity], result of:
              0.067827806 = score(doc=3502,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.35780904 = fieldWeight in 3502, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3502)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Lexical analysis is a fundamental operation in both query processing and automatic indexing, and filtering stoplist words is an important step in the automatic indexing process. Presents basic algorithms and data structures for lexical analysis, and shows how stoplist word removal can be efficiently incorporated into lexical analysis
    Source
    Information retrieval: data structures and algorithms. Ed.: W.B. Frakes u. R. Baeza-Yates
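Entry 7's point, that stoplist filtering can be folded into lexical analysis itself rather than run as a separate pass, is easy to sketch. The tokenizer, the tiny stoplist, and the set-based membership test are illustrative choices, not the chapter's own algorithms or data structures:

```python
import re
from typing import Iterator

# Stoplist removal fused into the scanning loop: stopped words never
# reach the indexer. Real stoplists are far larger and often hashed.
STOPLIST = frozenset({"a", "an", "and", "in", "is", "of", "the", "to"})
TOKEN = re.compile(r"[a-z0-9]+")

def index_terms(text: str) -> Iterator[str]:
    for match in TOKEN.finditer(text.lower()):
        token = match.group()
        if token not in STOPLIST:  # O(1) membership test inside the scanner
            yield token

print(list(index_terms("Lexical analysis is a fundamental operation in query processing")))
# ['lexical', 'analysis', 'fundamental', 'operation', 'query', 'processing']
```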
  8. Mustafa el Hadi, W.; Jouis, C.: Evaluating natural language processing systems as a tool for building terminological databases (1996) 0.04
    0.043804526 = product of:
      0.08760905 = sum of:
        0.036211025 = weight(_text_:data in 5191) [ClassicSimilarity], result of:
          0.036211025 = score(doc=5191,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 5191, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5191)
        0.05139803 = product of:
          0.10279606 = sum of:
            0.10279606 = weight(_text_:processing in 5191) [ClassicSimilarity], result of:
              0.10279606 = score(doc=5191,freq=6.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.54227555 = fieldWeight in 5191, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5191)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Natural language processing systems use various modules in order to identify terms or concept names and the logico-semantic relations between them. The approaches involved in corpus analysis are based on morpho-syntactic analysis, statistical analysis, semantic analysis, recent connectionist models, or a combination of 2 or more of these approaches. This paper will examine the capacity of natural language processing systems to create databases from extensive textual data. We are endeavouring to evaluate the contribution of these systems, their advantages and their shortcomings
  9. Rahmstorf, G.: Information retrieval using conceptual representations of phrases (1994) 0.03
    0.03466491 = product of:
      0.06932982 = sum of:
        0.043894395 = weight(_text_:data in 7862) [ClassicSimilarity], result of:
          0.043894395 = score(doc=7862,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 7862, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=7862)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 7862) [ClassicSimilarity], result of:
              0.05087085 = score(doc=7862,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 7862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7862)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The information retrieval problem is described starting from an analysis of the concepts 'user's information request' and 'information offerings of texts'. It is shown that natural language phrases are a more adequate medium for expressing information requests and information offerings than character-string-based query and indexing languages complemented by Boolean operators. The phrases must be represented as concepts to reach a language-invariant level for rule-based relevance analysis. The special type of representation called advanced thesaurus is used for the semantic representation of natural language phrases and for relevance processing. The analysis of the retrieval problem leads to a symmetric system structure
    Series
    Studies in classification, data analysis, and knowledge organization
    Source
    Information systems and data analysis: prospects - foundations - applications. Proc. of the 17th Annual Conference of the Gesellschaft für Klassifikation, Kaiserslautern, March 3-5, 1993. Ed.: H.-H. Bock et al
  10. Ingenerf, J.: Disambiguating lexical meaning : conceptual meta-modelling as a means of controlling semantic language analysis (1994) 0.03
    0.03466491 = product of:
      0.06932982 = sum of:
        0.043894395 = weight(_text_:data in 2572) [ClassicSimilarity], result of:
          0.043894395 = score(doc=2572,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 2572, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2572)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 2572) [ClassicSimilarity], result of:
              0.05087085 = score(doc=2572,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 2572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2572)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A formal terminology consists of a set of conceptual definitions for the semantic reconstruction of a vocabulary on an intensional level of description. The marking of comparatively abstract concepts as semantic categories and their relational positioning on a meta-level is shown to be instrumental in adapting the conceptual design to domain-specific characteristics. Such a meta-model implies that concepts subsumed by categories may share their compositional possibilities as regards the construction of complex structures. Our approach to language processing leads to an automatic derivation of contextual semantic information about the linguistic expressions under review. This information is encoded by means of values of certain attributes defined in a feature-based grammatical framework. A standard process controlling grammatical analysis, the unification of feature structures, is used for its evaluation. One important example of the usefulness of this approach is the disambiguation of lexical meaning
    Series
    Studies in classification, data analysis, and knowledge organization
    Source
    Information systems and data analysis: prospects - foundations - applications. Proc. of the 17th Annual Conference of the Gesellschaft für Klassifikation, Kaiserslautern, March 3-5, 1993. Ed.: H.-H. Bock et al
  11. Driscoll, J.R.; Rajala, D.A.; Shaffer, W.H.: The operation and performance of an artificially intelligent keywording system (1991) 0.03
    0.032942846 = product of:
      0.06588569 = sum of:
        0.036211025 = weight(_text_:data in 6681) [ClassicSimilarity], result of:
          0.036211025 = score(doc=6681,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 6681, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6681)
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 6681) [ClassicSimilarity], result of:
              0.05934933 = score(doc=6681,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 6681, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6681)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Presents a new approach to text analysis for automating the key phrase indexing process, using artificial intelligence techniques. This mimics the behaviour of human experts by using a rule base consisting of insertion and deletion rules generated by subject-matter experts. The insertion rules are based on the idea that some phrases found in a text imply or trigger other phrases. The deletion rules apply to semantically ambiguous phrases where text presence alone does not determine appropriateness as a key phrase. The insertion and deletion rules are used to transform a list of found phrases to a list of key phrases for indexing a document. Statistical data are provided to demonstrate the performance of this expert rule based system
    Source
    Information processing and management. 27(1991) no.1, p.43-54
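The rule base outlined in entry 11's abstract (insertion rules let found phrases trigger additional key phrases; deletion rules drop semantically ambiguous ones) maps naturally onto two passes over the found-phrase list. The rules and phrases below are invented for illustration only:

```python
# Invented example rules in the spirit of the abstract above; the paper's
# actual rule base was authored by subject-matter experts.
INSERTION_RULES = {
    "neural network": ["machine learning"],
    "tf-idf": ["term weighting", "information retrieval"],
}
DELETION_RULES = {"model", "approach"}  # ambiguous: presence alone is not enough

def key_phrases(found: list[str]) -> list[str]:
    expanded = list(found)
    for phrase in found:                      # pass 1: apply insertion rules
        expanded.extend(INSERTION_RULES.get(phrase, []))
    seen: set[str] = set()
    result = []
    for phrase in expanded:                   # pass 2: deletion rules + dedup
        if phrase not in DELETION_RULES and phrase not in seen:
            seen.add(phrase)
            result.append(phrase)
    return result

print(key_phrases(["tf-idf", "model", "neural network"]))
# ['tf-idf', 'neural network', 'term weighting', 'information retrieval', 'machine learning']
```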
  12. Mock, K.J.; Vemuri, V.R.: Information filtering via hill climbing, WordNet, and index patterns (1997) 0.03
    0.032942846 = product of:
      0.06588569 = sum of:
        0.036211025 = weight(_text_:data in 1517) [ClassicSimilarity], result of:
          0.036211025 = score(doc=1517,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24455236 = fieldWeight in 1517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1517)
        0.029674664 = product of:
          0.05934933 = sum of:
            0.05934933 = weight(_text_:processing in 1517) [ClassicSimilarity], result of:
              0.05934933 = score(doc=1517,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3130829 = fieldWeight in 1517, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1517)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The INFOS (Intelligent News Filtering Organizational System) project is designed to reduce the user's search burden by automatically categorising data as relevant or irrelevant based upon user interests. These predictions are learned automatically based upon features taken from input articles and collaborative features derived from other users. The filtering is performed by a hybrid technique that combines elements of a keyword-based hill climbing method, knowledge-based conceptual representation via WordNet, and partial parsing via index patterns. The hybrid system integrating all these approaches combines the benefits of each while maintaining robustness and scalability
    Source
    Information processing and management. 33(1997) no.5, p.633-644
  13. Mustafa el Hadi, W.: Automatic term recognition & extraction tools : examining the new interfaces and their effective communication role in LSP discourse (1998) 0.03
    0.028236724 = product of:
      0.05647345 = sum of:
        0.031038022 = weight(_text_:data in 67) [ClassicSimilarity], result of:
          0.031038022 = score(doc=67,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 67, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=67)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 67) [ClassicSimilarity], result of:
              0.05087085 = score(doc=67,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 67, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=67)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this paper we will discuss the possibility of reorienting NLP (Natural Language Processing) systems towards the extraction not only of terms and their semantic relations, but also towards a variety of other uses: the storage, accessing and retrieving of Language for Special Purposes (LSP) lexical combinations, and the provision of contexts and other information on terms through the integration of more interfaces to terminological databases, term managing systems and existing NLP systems. The aim of making such interfaces available is to increase the efficiency of the systems and improve terminology-oriented text analysis. Since automatic term extraction is the backbone of many applications such as machine translation (MT), indexing, technical writing, thesaurus construction and knowledge representation, developments in this area will have a significant impact
  14. Hess, M.: ¬An incrementally extensible document retrieval system based on linguistic and logical principles (1992) 0.03
    0.028236724 = product of:
      0.05647345 = sum of:
        0.031038022 = weight(_text_:data in 2413) [ClassicSimilarity], result of:
          0.031038022 = score(doc=2413,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 2413, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=2413)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 2413) [ClassicSimilarity], result of:
              0.05087085 = score(doc=2413,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 2413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2413)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Most natural language based document retrieval systems use the syntax structures of constituent phrases of documents as index terms. Many of these systems also attempt to reduce the syntactic variability of natural language by some normalisation procedure applied to these syntax structures. However, the retrieval performance of such systems remains fairly disappointing. Some systems therefore use a meaning representation language to index and retrieve documents. In this paper, a system is presented that uses Horn Clause Logic as meaning representation language, employs advanced techniques from Natural Language Processing to achieve incremental extensibility, and uses methods from Logic Programming to achieve robustness in the face of insufficient data.
  15. Godby, J.: WordSmith research project bridges gap between tokens and indexes (1998) 0.03
    0.02594015 = product of:
      0.1037606 = sum of:
        0.1037606 = sum of:
          0.05934933 = weight(_text_:processing in 4729) [ClassicSimilarity], result of:
            0.05934933 = score(doc=4729,freq=2.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.3130829 = fieldWeight in 4729, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4729)
          0.044411276 = weight(_text_:22 in 4729) [ClassicSimilarity], result of:
            0.044411276 = score(doc=4729,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.2708308 = fieldWeight in 4729, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4729)
      0.25 = coord(1/4)
    
    Abstract
    Reports on an OCLC natural language processing research project to develop methods for identifying terminology in unstructured electronic text, especially material associated with new cultural trends and emerging subjects. Current OCLC production software can only identify single words as indexable terms in full text documents; thus a major goal of the WordSmith project is to develop software that can automatically identify and intelligently organize phrases for use in database indexes. By analyzing user terminology from local newspapers in the USA, the latest cultural trends and technical developments as well as personal and geographic names have been drawn out. Notes that this new vocabulary can also be mapped into reference works
    Source
    OCLC newsletter. 1998, no.234, Jul/Aug, p.22-24
  16. Rahmstorf, G.: Concept structures for large vocabularies (1998) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 75) [ClassicSimilarity], result of:
          0.031038022 = score(doc=75,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 75, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=75)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 75) [ClassicSimilarity], result of:
              0.038066804 = score(doc=75,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 75, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=75)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A technology is described which supports the acquisition, visualisation and manipulation of large vocabularies with associated structures. It is used for dictionary production, terminology data bases, thesauri, library classification systems, etc. Essential features of the technology are a lexicographic user interface, variable word description, an unlimited list of word readings, a concept language, automatic transformation of formulas into graphic structures, structure manipulation operations and retransformation into formulas. The concept language includes notations for undefined concepts. The structure of defined concepts can be constructed interactively. The technology supports the generation of large vocabularies with structures representing word senses. Concept structures and ordering systems for indexing and retrieval can be constructed separately and connected by associating relations.
    Date
    30.12.2001 19:01:22
  17. Pereira, C.N.; Grosz, B.J.: Natural language processing (1994) 0.02
    0.021196188 = product of:
      0.08478475 = sum of:
        0.08478475 = product of:
          0.1695695 = sum of:
            0.1695695 = weight(_text_:processing in 8602) [ClassicSimilarity], result of:
              0.1695695 = score(doc=8602,freq=8.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.8945226 = fieldWeight in 8602, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.078125 = fieldNorm(doc=8602)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Review in: Information processing and management 32(1996) no.1, p.122-123 (P.B. Heidorn)
    LCSH
    Natural language processing (Computer science)
    Subject
    Natural language processing (Computer science)
  18. Sheridan, P.; Smeaton, A.F.: The application of morpho-syntactic language processing to effective phrase matching (1992) 0.02
    0.020983158 = product of:
      0.08393263 = sum of:
        0.08393263 = product of:
          0.16786526 = sum of:
            0.16786526 = weight(_text_:processing in 6575) [ClassicSimilarity], result of:
              0.16786526 = score(doc=6575,freq=4.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.8855322 = fieldWeight in 6575, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6575)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 28(1992) no.3, p.349-369
  19. Chowdhury, G.G.: Natural language processing and information retrieval : pt.1: basic issues; pt.2: major applications (1991) 0.02
    0.018356439 = product of:
      0.073425755 = sum of:
        0.073425755 = product of:
          0.14685151 = sum of:
            0.14685151 = weight(_text_:processing in 3313) [ClassicSimilarity], result of:
              0.14685151 = score(doc=3313,freq=6.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7746793 = fieldWeight in 3313, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3313)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Reviews the basic issues and procedures involved in natural language processing of textual material for final use in information retrieval. Covers: natural language processing; natural language understanding; syntactic and semantic analysis; parsing; knowledge bases and knowledge representation
  20. Empirical natural language processing (1997) 0.02
    0.017985564 = product of:
      0.071942255 = sum of:
        0.071942255 = product of:
          0.14388451 = sum of:
            0.14388451 = weight(_text_:processing in 3328) [ClassicSimilarity], result of:
              0.14388451 = score(doc=3328,freq=4.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.7590276 = fieldWeight in 3328, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3328)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    A special section reviewing recent research in empirical methods in speech recognition, syntactic parsing, semantic processing, information extraction and machine translation

Types

  • a 74
  • m 10
  • s 8
  • el 1
  • r 1