Search (69 results, page 2 of 4)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  1. Cruz Díaz, N.P.; Maña López, M.J.; Mata Vázquez, J.; Pachón Álvarez, V.: ¬A machine-learning approach to negation and speculation detection in clinical texts (2012) 0.02
    0.017238233 = product of:
      0.034476466 = sum of:
        0.034476466 = product of:
          0.06895293 = sum of:
            0.06895293 = weight(_text_:f in 283) [ClassicSimilarity], result of:
              0.06895293 = score(doc=283,freq=6.0), product of:
                0.18080194 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.04536168 = queryNorm
                0.38137275 = fieldWeight in 283, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=283)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
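    The nested breakdown above is Lucene's ClassicSimilarity "explain" output for the first hit. As a rough sketch (assuming Lucene's classic TF-IDF formula; the function and variable names are illustrative, not part of the search engine), the reported factors combine as follows:

    ```python
    import math

    # Reproduce the first result's score from the factors reported above.
    # tf = sqrt(termFreq); idf = 1 + ln(maxDocs / (docFreq + 1));
    # the two coord(1/2) factors combine into an overall 0.25.
    def classic_score(freq, doc_freq, max_docs, query_norm, field_norm, coord=0.25):
        tf = math.sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))
        query_weight = idf * query_norm          # 0.18080194 above
        field_weight = tf * idf * field_norm     # 0.38137275 above
        return coord * query_weight * field_weight

    score = classic_score(freq=6.0, doc_freq=2232, max_docs=44218,
                          query_norm=0.04536168, field_norm=0.0390625)
    # ≈ 0.017238, matching the top-level value reported for result 1
    ```

    The same formula, with different freq and fieldNorm values, accounts for every score on this page; only the rounded product (0.02, 0.01) is shown next to each title.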
    Abstract
    Detecting negative and speculative information is essential in most biomedical text-mining tasks where these language forms are used to express impressions, hypotheses, or explanations of experimental results. Our research is focused on developing a system based on machine-learning techniques that identifies negation and speculation signals and their scope in clinical texts. The proposed system works in two consecutive phases: first, a classifier decides whether each token in a sentence is a negation/speculation signal or not. Then another classifier determines, at sentence level, the tokens which are affected by the signals previously identified. The system was trained and evaluated on the clinical texts of the BioScope corpus, a freely available resource consisting of medical and biological texts: full-length articles, scientific abstracts, and clinical reports. The results obtained by our system were compared with those of two different systems, one based on regular expressions and the other based on machine learning. Our system's results outperformed the results obtained by these two systems. In the signal detection task, the F-score value was 97.3% in negation and 94.9% in speculation. In the scope-finding task, a token was correctly classified if it had been properly identified as being inside or outside the scope of all the negation signals present in the sentence. Our proposal showed an F score of 93.2% in negation and 80.9% in speculation. Additionally, the percentage of correct scopes (those with all their tokens correctly classified) was evaluated obtaining F scores of 90.9% in negation and 71.9% in speculation.
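    The two consecutive phases can be illustrated with a deliberately simplified sketch: a toy signal lexicon and a naive to-end-of-sentence scope heuristic stand in for the paper's trained classifiers, and all names here are hypothetical:

    ```python
    # Phase 1: decide, token by token, whether each token is a negation signal.
    # (The paper trains a classifier for this; a lexicon lookup stands in here.)
    NEGATION_SIGNALS = {"no", "not", "without", "denies"}  # toy lexicon

    def detect_signals(tokens):
        return [t.lower() in NEGATION_SIGNALS for t in tokens]

    # Phase 2: for each detected signal, mark the tokens inside its scope.
    # (The paper uses a second per-token classifier; here the scope simply
    # runs from the signal to the end of the sentence.)
    def resolve_scope(tokens, signal_flags):
        in_scope = [False] * len(tokens)
        for i, is_signal in enumerate(signal_flags):
            if is_signal:
                for j in range(i + 1, len(tokens)):
                    in_scope[j] = True
        return in_scope

    tokens = "The patient denies chest pain".split()
    flags = detect_signals(tokens)       # "denies" flagged as a signal
    scope = resolve_scope(tokens, flags) # "chest pain" marked as negated
    ```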
  2. Ahmad, F.; Yusoff, M.; Sembok, T.M.T.: Experiments with a stemming algorithm for Malay words (1996) 0.02
  3. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    Date
    31. 7.1996 9:22:19
  4. Fattah, M. Abdel; Ren, F.: English-Arabic proper-noun transliteration-pairs creation (2008) 0.01
    Abstract
    Proper nouns may be considered the most important query words in information retrieval. If the two languages use the same alphabet, the same proper nouns can be found in either language. However, if the two languages use different alphabets, the names must be transliterated. Short vowels are not usually marked on Arabic words in almost all Arabic documents (except very important documents like the Muslim and Christian holy books). Moreover, most Arabic words have a syllable consisting of a consonant-vowel combination (CV), which means that most Arabic words contain a short or long vowel between two successive consonant letters. That makes it difficult to create English-Arabic transliteration pairs, since some English letters may not be matched with any romanized Arabic letter. In the present study, we present different approaches for extraction of transliteration proper-noun pairs from parallel corpora based on different similarity measures between the English and romanized Arabic proper nouns under consideration. The strength of our new system is that it works well for low-frequency proper noun pairs. We evaluate the new approaches presented using two different English-Arabic parallel corpora. Most of our results outperform previously published results in terms of precision, recall, and F-Measure.
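    One way such similarity measures between an English name and romanized Arabic candidates can be sketched is character-bigram Dice overlap; both the measure and the example names are illustrative assumptions, not the paper's exact method:

    ```python
    # Character-bigram Dice similarity between two romanized strings.
    def bigrams(s):
        s = s.lower()
        return {s[i:i + 2] for i in range(len(s) - 1)}

    def dice(a, b):
        first, second = bigrams(a), bigrams(b)
        if not first or not second:
            return 0.0
        return 2 * len(first & second) / (len(first) + len(second))

    # Pick the romanized Arabic candidate most similar to an English name.
    candidates = ["muhammad", "qahira", "yusuf"]
    best = max(candidates, key=lambda c: dice("mohammed", c))
    ```

    Ranking every English proper noun against the romanized candidates in the aligned sentence, and keeping pairs above a similarity threshold, is the general shape of the extraction step described above.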
  5. Gomez, F.: Learning word syntactic subcategorizations interactively (1995) 0.01
  6. Szpakowicz, S.; Bond, F.; Nakov, P.; Kim, S.N.: On the semantics of noun compounds (2013) 0.01
  7. Colace, F.; Santo, M. De; Greco, L.; Napoletano, P.: Weighted word pairs for query expansion (2015) 0.01
  8. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.01
    Date
    31. 7.1996 9:22:19
  9. Riloff, E.: ¬An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.01
    Date
    6. 3.1997 16:22:15
  10. Basili, R.; Pazienza, M.T.; Velardi, P.: ¬An empirical symbolic approach to natural language processing (1996) 0.01
    Date
    6. 3.1997 16:22:15
  11. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.01
    Abstract
     State-of-the-art review of natural language processing, updating an earlier review published in ARIST 22 (1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge-based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly.
  12. Way, E.C.: Knowledge representation and metaphor (oder: meaning) (1994) 0.01
    Footnote
     First published by Kluwer in 1991 // Review in: Knowledge organization 22(1995) no.1, p.48-49 (O. Sechser)
  13. Morris, V.: Automated language identification of bibliographic resources (2020) 0.01
    Date
    2. 3.2020 19:04:22
  14. Martínez, F.; Martín, M.T.; Rivas, V.M.; Díaz, M.C.; Ureña, L.A.: Using neural networks for multiword recognition in IR (2003) 0.01
  15. Sebastiani, F.: Machine learning in automated text categorization (2002) 0.01
  16. Sebastiani, F.: ¬A tutorial on automated text categorisation (1999) 0.01
  17. Galvez, C.; Moya-Anegón, F. de; Solana, V.H.: Term conflation methods in information retrieval : non-linguistic and linguistic approaches (2005) 0.01
  18. Galvez, C.; Moya-Anegón, F. de: ¬An evaluation of conflation accuracy using finite-state transducers (2006) 0.01
  19. Ahmed, F.; Nürnberger, A.: Evaluation of n-gram conflation approaches for Arabic text retrieval (2009) 0.01
  20. Zhang, C.; Zeng, D.; Li, J.; Wang, F.-Y.; Zuo, W.: Sentiment analysis of Chinese documents : from sentence to document level (2009) 0.01

Types

  • a 59
  • s 4
  • m 3
  • el 2
  • p 2
  • x 1