Search (78 results, page 2 of 4)

  • theme_ss:"Computerlinguistik"
  1. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    Date
    28. 2.1999 10:48:22
  2. ¬Der Student aus dem Computer (2023) 0.02
    Date
    27. 1.2023 16:22:55
  3. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.02
    Abstract
    The research report "Rough Set Theory Based Automatic Text Categorization and the Handling of Semantic Heterogeneity" by Xueying Zhang has been published in book form in English. In her work, Zhang developed a procedure based on rough set theory that establishes relations between subject terms of different vocabularies. She was a staff member of the IZ from 2003 to 2005 and has been an Associate Professor at the Nanjing University of Science and Technology since October 2005.
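The vocabulary-mapping idea rests on standard rough-set machinery: the lower and upper approximations of a target set under an equivalence relation. A minimal Python sketch, with invented data and function names and no claim to reproduce Zhang's actual procedure:

```python
from collections import defaultdict

def approximations(objects, attribute, target):
    """Lower/upper approximation of `target` under the equivalence
    relation 'has the same attribute value' (rough set theory)."""
    # Partition objects into equivalence classes by attribute value.
    classes = defaultdict(set)
    for obj in objects:
        classes[attribute[obj]].add(obj)
    lower, upper = set(), set()
    for eq in classes.values():
        if eq <= target:      # class lies entirely inside the target set
            lower |= eq
        if eq & target:       # class overlaps the target set
            upper |= eq
    return lower, upper

# Toy example: documents indexed with one subject term each.
docs = {"d1", "d2", "d3", "d4"}
term = {"d1": "NLP", "d2": "NLP", "d3": "IR", "d4": "ML"}
relevant = {"d1", "d3"}
low, up = approximations(docs, term, relevant)
# low == {"d3"}: certainly relevant; up == {"d1", "d2", "d3"} adds the
# borderline NLP class, whose members cannot be separated by the indexing.
```

The gap between the two approximations (the boundary region) is where rough-set-based mappings between vocabularies have to make uncertain decisions.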
  4. Mustafa el Hadi, W.: Human language technology and its role in information access and management (2003) 0.02
    Source
    Cataloging and classification quarterly. 37(2003) nos.1/2, S.131-151
    Year
    2003
  5. Rindflesch, T.C.; Fiszman, M.: The interaction of domain knowledge and linguistic structure in natural language processing : interpreting hypernymic propositions in biomedical text (2003) 0.02
    Source
    Journal of biomedical informatics. 36(2003) no.6, S.462-477
    Year
    2003
  6. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    Date
    15. 3.2000 10:22:37
  7. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    Date
    1. 3.2013 14:56:22
  8. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    Source
    c't. 2000, H.22, S.230-231
  9. Mustafa el Hadi, W.: Terminology & information retrieval : new tools for new needs. Integration of knowledge across boundaries (2003) 0.02
    Year
    2003
  10. Herrera-Viedma, E.; Cordón, O.; Herrera, J.C.; Luque, M.: ¬An IRS based on multi-granular linguistic information (2003) 0.02
    Year
    2003
  11. Martínez, F.; Martín, M.T.; Rivas, V.M.; Díaz, M.C.; Ureña, L.A.: Using neural networks for multiword recognition in IR (2003) 0.02
    Year
    2003
  12. Peis, E.; Herrera-Viedma, E.; Herrera, J.C.: On the evaluation of XML documents using Fuzzy linguistic techniques (2003) 0.02
    Year
    2003
  13. Kishida, K.: Term disambiguation techniques based on target document collection for cross-language information retrieval : an empirical comparison of performance between techniques (2007) 0.02
    Abstract
    Dictionary-based query translation for cross-language information retrieval often yields various translation candidates having different meanings for a source term in the query. This paper examines methods for solving the ambiguity of translations based on only the target document collections. First, we discuss two kinds of disambiguation technique: (1) a method using term co-occurrence statistics in the collection, and (2) a technique based on pseudo-relevance feedback (PRF). Next, these techniques are empirically compared using the CLEF 2003 test collection for German to Italian bilingual searches, which are executed by using English as a pivot language. The experiments showed that a variation of the term co-occurrence based techniques, in which the best-sequence algorithm for selecting translations is used with the Cosine coefficient, is dominant, and that the PRF method shows comparably high search performance, although statistical tests did not sufficiently support these conclusions. Furthermore, we repeat the same experiments for the case of French to Italian (pivot) and English to Italian (non-pivot) searches on the same CLEF 2003 test collection in order to verify our findings. Again, similar results were observed, except that the Dice coefficient slightly outperforms the Cosine coefficient in the case of disambiguation based on term co-occurrence for English to Italian searches.
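The term co-occurrence technique compared in this abstract can be sketched in a few lines. The following Python fragment is an illustrative reconstruction, not Kishida's implementation; the function names and toy statistics are invented. It scores each translation candidate by the Cosine coefficient between its co-occurrence vector in the target collection and that of the already-translated query context, and keeps the best-scoring candidate:

```python
import math

def cosine(u, v):
    """Cosine coefficient between two co-occurrence count vectors
    (dicts mapping co-occurring terms to counts)."""
    dot = sum(u.get(t, 0) * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def pick_translation(candidates, context_vec, cooc):
    """Choose the candidate whose target-collection co-occurrence
    profile best matches the rest of the translated query."""
    return max(candidates, key=lambda c: cosine(cooc.get(c, {}), context_vec))

# Toy co-occurrence statistics gathered from a target document collection.
cooc = {
    "banca":  {"credito": 8, "tasso": 5},   # 'bank' (finance sense)
    "argine": {"fiume": 7, "acqua": 4},     # 'bank' (river sense)
}
context = {"credito": 3, "tasso": 1}        # other translated query terms
best = pick_translation(["banca", "argine"], context, cooc)
# best == "banca": the finance sense co-occurs with the query context
```

Swapping in the Dice coefficient, as in the English to Italian result above, is a one-line change to the scoring function.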
  14. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    Date
    31. 7.1996 9:22:19
  15. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    Source
    c't. 2000, H.22, S.220-229
  16. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.02
    Date
    22. 3.2015 9:37:18
  17. Yang, C.C.; Li, K.W.: Automatic construction of English/Chinese parallel corpora (2003) 0.01
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.8, S.730-742
    Year
    2003
  18. Chowdhury, G.G.: Natural language processing (2002) 0.01
    Source
    Annual review of information science and technology. 37(2003), S.51-90
  19. Ghazzawi, N.; Robichaud, B.; Drouin, P.; Sadat, F.: Automatic extraction of specialized verbal units (2018) 0.01
    Abstract
    This paper presents a methodology for the automatic extraction of specialized Arabic, English and French verbs of the field of computing. Since nominal terms are predominant in terminology, our interest is to explore to what extent verbs can also be part of a terminological analysis. Hence, our objective is to verify how an existing extraction tool will perform when it comes to specialized verbs in a given specialized domain. Furthermore, we want to investigate any particularities that a language can represent regarding verbal terms from the automatic extraction perspective. Our choice to operate on three different languages reflects our desire to see whether the chosen tool can perform better on one language compared to the others. Moreover, given that Arabic is a morphologically rich and complex language, we consider investigating the results yielded by the extraction tool. The extractor used for our experiment is TermoStat (Drouin 2003). So far, our results show that the extraction of verbs of computing represents certain differences in terms of quality and particularities of these units in this specialized domain between the languages under question.
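Corpus-comparison termhood measures of the kind TermoStat implements can be approximated crudely by comparing relative term frequencies in a domain corpus against a general reference corpus. The Python sketch below uses invented data and thresholds and a much simpler statistic than TermoStat's; it only illustrates the underlying idea of specificity-based extraction:

```python
from collections import Counter

def specialized_terms(domain_tokens, reference_tokens,
                      min_ratio=1.5, min_count=2):
    """Rank candidate terms by how much more frequent they are in the
    domain corpus than in the reference corpus (simple termhood ratio)."""
    dom = Counter(domain_tokens)
    ref = Counter(reference_tokens)
    n_dom, n_ref = len(domain_tokens), len(reference_tokens)
    scored = []
    for term, c in dom.items():
        if c < min_count:
            continue
        p_dom = c / n_dom
        p_ref = (ref[term] + 1) / (n_ref + 1)  # add-one smoothing for unseen terms
        ratio = p_dom / p_ref
        if ratio >= min_ratio:
            scored.append((term, ratio))
    return sorted(scored, key=lambda x: -x[1])

# Toy corpora: a 'computing' domain sample vs. general language.
domain = "compile link compile deploy compile run link".split()
general = "run walk talk run eat sleep".split()
terms = specialized_terms(domain, general)
# 'compile' and 'link' surface as domain-specific; 'run' does not
```

Extending such a pipeline to verbs, as the paper investigates, mainly requires part-of-speech tagging and lemmatization before counting, which is where morphologically rich languages such as Arabic become challenging.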
  20. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.01
    Date
    31. 7.1996 9:22:19

Languages

  • e 59
  • d 19

Types

  • a 64
  • m 7
  • el 5
  • s 4
  • p 2
  • x 2
  • d 1