Search (64 results, page 2 of 4)

  • theme_ss:"Computerlinguistik"
  • year_i:[2000 TO 2010}
  1. Atlam, E.S.: Similarity measurement using term negative weight and its application to word similarity (2000) 0.01
    0.014976369 = product of:
      0.044929106 = sum of:
        0.044929106 = product of:
          0.08985821 = sum of:
            0.08985821 = weight(_text_:management in 4844) [ClassicSimilarity], result of:
              0.08985821 = score(doc=4844,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.521365 = fieldWeight in 4844, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4844)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 36(2000) no.5, S.717-736
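Each score above is a Lucene ClassicSimilarity (TF-IDF) explain tree: the leaf combines tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), a query-level normalizer, and a fieldNorm (roughly 1/sqrt(field length), quantized to one byte), and coord factors then downweight documents that match only some query clauses. A small sketch that reproduces the numbers of result no. 1:

```python
import math

def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm,
                       coord_factors=()):
    """Recompute one ClassicSimilarity leaf score as in the explain output."""
    tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.3706124
    query_weight = idf * query_norm                    # 0.17235184
    field_weight = tf * idf * field_norm               # 0.521365
    score = query_weight * field_weight                # 0.08985821
    for c in coord_factors:                            # coord(1/2), coord(1/3)
        score *= c
    return score

# Values from the "management" match in document 4844 above:
score = classic_similarity(freq=2.0, doc_freq=4130, max_docs=44218,
                           query_norm=0.051133685, field_norm=0.109375,
                           coord_factors=(0.5, 1.0 / 3.0))
print(score)  # ~0.014976369, the document's displayed score
```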
  2. Perez-Carballo, J.; Strzalkowski, T.: Natural language information retrieval : progress report (2000) 0.01
    0.014976369 = product of:
      0.044929106 = sum of:
        0.044929106 = product of:
          0.08985821 = sum of:
            0.08985821 = weight(_text_:management in 6421) [ClassicSimilarity], result of:
              0.08985821 = score(doc=6421,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.521365 = fieldWeight in 6421, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6421)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 36(2000) no.1, S.155-205
  3. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.01
    0.013855817 = product of:
      0.04156745 = sum of:
        0.04156745 = product of:
          0.0831349 = sum of:
            0.0831349 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.0831349 = score(doc=4888,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 3.2013 14:56:22
  4. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.01
    0.013855817 = product of:
      0.04156745 = sum of:
        0.04156745 = product of:
          0.0831349 = sum of:
            0.0831349 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.0831349 = score(doc=5429,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    c't. 2000, H.22, S.230-231
  5. Mustafa El Hadi, W.: Evaluating human language technology : general applications to information access and management (2002) 0.01
    0.0128368875 = product of:
      0.03851066 = sum of:
        0.03851066 = product of:
          0.07702132 = sum of:
            0.07702132 = weight(_text_:management in 1840) [ClassicSimilarity], result of:
              0.07702132 = score(doc=1840,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.44688427 = fieldWeight in 1840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1840)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  6. Stede, M.: Lexicalization in natural language generation (2002) 0.01
    0.012546628 = product of:
      0.037639882 = sum of:
        0.037639882 = weight(_text_:resources in 4245) [ClassicSimilarity], result of:
          0.037639882 = score(doc=4245,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.20165458 = fieldWeight in 4245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4245)
      0.33333334 = coord(1/3)
    
    Abstract
    Natural language generation (NLG), the automatic production of text by computers, is commonly seen as a process consisting of several distinct phases. Obviously, choosing words is a central aspect of generating language. In which of these phases it should take place is not entirely clear, however. The decision depends on various factors: what exactly is seen as an individual lexical item; how the relation between word meaning and background knowledge (concepts) is defined; how one accounts for the interactions between individual lexical choices in the same sentence; what criteria are employed for choosing between similar words; and whether or not output is required in one or more languages. This article surveys these issues and the answers that have been proposed in NLG research. For many applications of natural language processing, large-scale lexical resources have become available in recent years, such as the WordNet database. In language generation, however, generic lexicons are not yet in use; rather, almost every generation project develops its own format for lexical representations. The reason is that the entries of a generation lexicon need specific interfaces to the input representations processed by the generator; lexical semantics in an NLG lexicon needs to be tailored to the input. On the other hand, the large lexicons used for language analysis typically have only very limited semantic information, if any. Yet the syntactic behavior of words remains the same regardless of the particular application; thus, it should be possible to build at least parts of generic NLG lexical entries automatically, which could then be used by different systems.
  7. Jurafsky, D.; Martin, J.H.: Speech and language processing : an introduction to natural language processing, computational linguistics and speech recognition (2009) 0.01
    0.012546628 = product of:
      0.037639882 = sum of:
        0.037639882 = weight(_text_:resources in 1081) [ClassicSimilarity], result of:
          0.037639882 = score(doc=1081,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.20165458 = fieldWeight in 1081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
      0.33333334 = coord(1/3)
    
    Abstract
    For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, the merging of distinct fields, the availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology at all levels and with all modern technologies, this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material.
  8. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.01
    0.011546515 = product of:
      0.034639545 = sum of:
        0.034639545 = product of:
          0.06927909 = sum of:
            0.06927909 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.06927909 = score(doc=5428,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    c't. 2000, H.22, S.220-229
  9. Bird, S.; Dale, R.; Dorr, B.; Gibson, B.; Joseph, M.; Kan, M.-Y.; Lee, D.; Powley, B.; Radev, D.; Tan, Y.F.: ¬The ACL Anthology Reference Corpus : a reference dataset for bibliographic research in computational linguistics (2008) 0.01
    0.010037302 = product of:
      0.030111905 = sum of:
        0.030111905 = weight(_text_:resources in 2804) [ClassicSimilarity], result of:
          0.030111905 = score(doc=2804,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.16132367 = fieldWeight in 2804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.03125 = fieldNorm(doc=2804)
      0.33333334 = coord(1/3)
    
    Source
    Proceedings of Language Resources and Evaluation Conference (LREC 08). Marrakesh, Morocco, May [http://acl-arc.comp.nus.edu.sg/lrec08.pdf]
  10. Sidhom, S.; Hassoun, M.: Morpho-syntactic parsing for a text mining environment : An NP recognition model for knowledge visualization and information retrieval (2002) 0.01
    0.009077052 = product of:
      0.027231153 = sum of:
        0.027231153 = product of:
          0.054462306 = sum of:
            0.054462306 = weight(_text_:management in 1852) [ClassicSimilarity], result of:
              0.054462306 = score(doc=1852,freq=4.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.31599492 = fieldWeight in 1852, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1852)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Sidhom and Hassoun discuss the crucial role of NLP tools in knowledge extraction and management, as well as in the design of information retrieval systems. The authors focus more specifically on the morpho-syntactic issues by describing their morpho-syntactic analysis platform, which has been implemented to cover automatic indexing and information retrieval. To this end they implemented a cascaded Augmented Transition Network (ATN). They used this formalism to analyse French text descriptions of multimedia documents. An implementation of an ATN parsing automaton is briefly described. The platform, in its logical operation, is considered an investigative tool towards the knowledge organization (based on an NP recognition model) and management of multiform e-documents (text, multimedia, audio, image) using their text descriptions.
  11. Jones, I.; Cunliffe, D.; Tudhope, D.: Natural language processing and knowledge organization systems as an aid to retrieval (2004) 0.01
    0.008782639 = product of:
      0.026347917 = sum of:
        0.026347917 = weight(_text_:resources in 2677) [ClassicSimilarity], result of:
          0.026347917 = score(doc=2677,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.14115821 = fieldWeight in 2677, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2677)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper discusses research that employs methods from Natural Language Processing (NLP) in exploiting the intellectual resources of Knowledge Organization Systems (KOS), particularly in the retrieval of information. A technique for the disambiguation of homographs and nominal compounds in free text, where these are known ambiguous terms in the KOS itself, is described. The use of Roget's Thesaurus as an intermediary in the process is also reported. A short review of the relevant literature in the field is given. Design considerations, results and conclusions are presented from the implementation of a prototype system. The linguistic techniques are applied at two complementary levels, namely on a free-text string used as an entry point to the KOS, and on the underlying controlled vocabulary itself.
  12. Witschel, H.F.: Global and local resources for peer-to-peer text retrieval (2008) 0.01
    0.008782639 = product of:
      0.026347917 = sum of:
        0.026347917 = weight(_text_:resources in 127) [ClassicSimilarity], result of:
          0.026347917 = score(doc=127,freq=2.0), product of:
            0.18665522 = queryWeight, product of:
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.051133685 = queryNorm
            0.14115821 = fieldWeight in 127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.650338 = idf(docFreq=3122, maxDocs=44218)
              0.02734375 = fieldNorm(doc=127)
      0.33333334 = coord(1/3)
    
  13. Fox, B.; Fox, C.J.: Efficient stemmer generation (2002) 0.01
    0.008557925 = product of:
      0.025673775 = sum of:
        0.025673775 = product of:
          0.05134755 = sum of:
            0.05134755 = weight(_text_:management in 2585) [ClassicSimilarity], result of:
              0.05134755 = score(doc=2585,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.29792285 = fieldWeight in 2585, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2585)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 38(2002) no.4, S.547-558
  14. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.01
    0.00808256 = product of:
      0.02424768 = sum of:
        0.02424768 = product of:
          0.04849536 = sum of:
            0.04849536 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
              0.04849536 = score(doc=5483,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.2708308 = fieldWeight in 5483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5483)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    10.12.2000 18:22:35
  15. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    0.00808256 = product of:
      0.02424768 = sum of:
        0.02424768 = product of:
          0.04849536 = sum of:
            0.04849536 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.04849536 = score(doc=156,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    8. 3.2007 19:55:22
  16. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.01
    0.00808256 = product of:
      0.02424768 = sum of:
        0.02424768 = product of:
          0.04849536 = sum of:
            0.04849536 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.04849536 = score(doc=3840,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    27. 8.2011 14:22:33
  17. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.01
    0.00808256 = product of:
      0.02424768 = sum of:
        0.02424768 = product of:
          0.04849536 = sum of:
            0.04849536 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
              0.04849536 = score(doc=4184,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.2708308 = fieldWeight in 4184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4184)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2011 10:38:28
  18. Vilares, J.; Alonso, M.A.; Vilares, M.: Extraction of complex index terms in non-English IR : a shallow parsing based approach (2008) 0.01
    0.007564209 = product of:
      0.022692626 = sum of:
        0.022692626 = product of:
          0.045385253 = sum of:
            0.045385253 = weight(_text_:management in 2107) [ClassicSimilarity], result of:
              0.045385253 = score(doc=2107,freq=4.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.2633291 = fieldWeight in 2107, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2107)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The performance of information retrieval systems is limited by the linguistic variation present in natural language texts. Word-level natural language processing techniques have been shown to be useful in reducing this variation. In this article, we summarize our work on the extension of these techniques for dealing with phrase-level variation in European languages, taking Spanish as a case in point. We propose the use of syntactic dependencies as complex index terms in an attempt to solve the problems deriving from both syntactic and morpho-syntactic variation and, in this way, to obtain more precise index terms. Such dependencies are obtained through a shallow parser based on cascades of finite-state transducers in order to reduce as far as possible the overhead due to this parsing process. The use of different sources of syntactic information (queries or documents) has also been studied, as has the restriction of the dependencies to those obtained from noun phrases. Our approaches have been tested using the CLEF corpus, obtaining consistent improvements with regard to classical word-level non-linguistic techniques. Results show, on the one hand, that syntactic information extracted from documents is more useful than that from queries. On the other hand, it has been demonstrated that by restricting dependencies to those corresponding to noun phrases, important reductions of storage and management costs can be achieved, albeit at the expense of a slight reduction in performance.
    Source
    Information processing and management. 44(2008) no.4, S.1517-1537
  19. Bacchin, M.; Ferro, N.; Melucci, M.: ¬A probabilistic model for stemmer generation (2005) 0.01
    0.0074881846 = product of:
      0.022464553 = sum of:
        0.022464553 = product of:
          0.044929106 = sum of:
            0.044929106 = weight(_text_:management in 1001) [ClassicSimilarity], result of:
              0.044929106 = score(doc=1001,freq=2.0), product of:
                0.17235184 = queryWeight, product of:
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.051133685 = queryNorm
                0.2606825 = fieldWeight in 1001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.3706124 = idf(docFreq=4130, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1001)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information processing and management. 41(2005) no.1, S.121-137
  20. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.01
    0.0069279084 = product of:
      0.020783724 = sum of:
        0.020783724 = product of:
          0.04156745 = sum of:
            0.04156745 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.04156745 = score(doc=4436,freq=2.0), product of:
                0.17906146 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051133685 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    16. 2.2000 14:22:39

Languages

  • e 49
  • d 13
  • m 1

Types

  • a 55
  • m 6
  • s 4
  • el 2
  • x 2