Search (3 results, page 1 of 1)

  • × language_ss:"e"
  • × theme_ss:"Computerlinguistik"
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
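  The active filters above use Solr-style field:value syntax: the _ss and _i suffixes look like dynamic string and integer fields, and [2010 TO 2020} is a range that includes 2010 but excludes 2020. Assuming a standard Solr backend behind this page (an assumption; the endpoint and core name below are hypothetical), the same filtered search could be reproduced roughly as follows:

    # Hypothetical sketch: reproduce the filters above as Solr fq parameters
    # against a standard /select endpoint (URL and core name are made up).
    import requests

    params = {
        "q": "*:*",
        "fq": [
            'language_ss:"e"',
            'theme_ss:"Computerlinguistik"',
            'type_ss:"a"',
            "year_i:[2010 TO 2020}",   # 2010 inclusive, 2020 exclusive
        ],
        "rows": 10,
        "wt": "json",
    }
    response = requests.get("http://localhost:8983/solr/catalog/select", params=params)
    print(response.json()["response"]["numFound"])   # number of matching records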
  1. Al-Shawakfa, E.; Al-Badarneh, A.; Shatnawi, S.; Al-Rabab'ah, K.; Bani-Ismail, B.: A comparison study of some Arabic root finding algorithms (2010) 0.02
    0.02435904 = product of:
      0.04871808 = sum of:
        0.04871808 = product of:
          0.09743616 = sum of:
            0.09743616 = weight(_text_:90 in 3457) [ClassicSimilarity], result of:
              0.09743616 = score(doc=3457,freq=2.0), product of:
                0.2733978 = queryWeight, product of:
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.050854117 = queryNorm
                0.3563897 = fieldWeight in 3457, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.376119 = idf(docFreq=555, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3457)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
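     The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output for the term "90" in the _text_ field of document 3457. As a sanity check only (not code from the retrieval system), the leaf weight and the final score can be recomputed from the factors shown:

       # Recompute the ClassicSimilarity explain tree above (TF-IDF).
       import math

       freq = 2.0                     # termFreq of "90" in the field
       tf = math.sqrt(freq)           # 1.4142135
       idf = 5.376119                 # ln(44218 / (555 + 1)) + 1
       query_norm = 0.050854117
       field_norm = 0.046875

       query_weight = idf * query_norm           # 0.2733978
       field_weight = tf * idf * field_norm      # 0.3563897
       leaf = query_weight * field_weight        # 0.09743616 = weight(_text_:90)
       score = leaf * 0.5 * 0.5                  # two coord(1/2) factors -> 0.02435904
       print(score)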
    
    Abstract
     Arabic has a complex structure, which makes it difficult to apply natural language processing (NLP). Much research on Arabic NLP (ANLP) does exist; however, it is not as mature as that for other languages. Finding Arabic roots is an important step toward effective research on most ANLP applications. The authors studied and compared six root-finding algorithms with success rates of over 90%. These algorithms had not all been evaluated on the same testing corpus or with the same benchmarking measures, so the authors unified the testing process by implementing their own versions of the algorithms from their descriptions and by building a corpus from 3,823 triliteral roots, 73 triliteral patterns, and 18 affixes, producing around 27.6 million words. They tested the algorithms on the generated corpus, obtained interesting results, and offer to share the corpus freely for benchmarking and ANLP research.
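     The corpus described above is built combinatorially from roots, patterns, and affixes. As a rough, hypothetical illustration of that idea (a single toy root, three patterns, and two conjunction prefixes rather than the authors' 73 patterns and 18 affixes), a generator might slot root consonants into pattern templates like this:

       # Hypothetical sketch: expand one transliterated triliteral root
       # through pattern templates and prefixes (not the authors' generator).
       root = ("k", "t", "b")                               # the root k-t-b ("writing")
       patterns = ["C1aC2aC3a", "maC1C2uuC3", "C1aaC2iC3"]  # -> kataba, maktuub, kaatib
       prefixes = ["", "wa", "fa"]                          # bare form, "and-", "so-"

       def apply_pattern(root, pattern):
           # Fill the C1/C2/C3 slots of the template with the root consonants.
           word = pattern
           for i, consonant in enumerate(root, start=1):
               word = word.replace(f"C{i}", consonant)
           return word

       corpus = {p + apply_pattern(root, t) for t in patterns for p in prefixes}
       print(sorted(corpus))   # 9 transliterated surface forms from a single root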
  2. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.01
    0.010335046 = product of:
      0.020670092 = sum of:
        0.020670092 = product of:
          0.041340183 = sum of:
            0.041340183 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.041340183 = score(doc=1848,freq=2.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The goal of entity linking is to associate references to an entity found in unstructured natural language content with an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
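     As a schematic illustration of the linking task described above (not the authors' system), the sketch below resolves a person mention against a tiny made-up inventory of English Wikipedia-style titles by name similarity, returning NIL when no entry scores above a threshold:

       # Hypothetical sketch: link a person mention to a known-entity inventory,
       # or return NIL when no correct resolution is believed to exist.
       from difflib import SequenceMatcher

       inventory = {                      # made-up titles and alternative spellings
           "Ada Lovelace": ["Ada Lovelace", "Ada Byron", "Augusta Ada King"],
           "Alan Turing": ["Alan Turing", "Alan M. Turing"],
       }

       def link(mention, inventory, threshold=0.8):
           best_title, best_score = "NIL", 0.0
           for title, aliases in inventory.items():
               for alias in aliases:
                   score = SequenceMatcher(None, mention.lower(), alias.lower()).ratio()
                   if score > best_score:
                       best_title, best_score = title, score
           return best_title if best_score >= threshold else "NIL"

       print(link("Ada Bajron", inventory))    # a transliterated spelling -> Ada Lovelace
       print(link("Grace Hopper", inventory))  # absent from the inventory -> NIL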
  3. Fóris, A.: Network theory and terminology (2013) 0.01
    0.008612539 = product of:
      0.017225077 = sum of:
        0.017225077 = product of:
          0.034450155 = sum of:
            0.034450155 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.034450155 = score(doc=1365,freq=2.0), product of:
                0.17808245 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050854117 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
     2.9.2014 21:22:48