Search (23 results, page 1 of 2)

  • × language_ss:"e"
  • × theme_ss:"Computerlinguistik"
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
  1. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.02
    0.02240327 = product of:
      0.04480654 = sum of:
        0.04480654 = product of:
          0.06720981 = sum of:
            0.030349022 = weight(_text_:j in 1848) [ClassicSimilarity], result of:
              0.030349022 = score(doc=1848,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.21064025 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
            0.036860786 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.036860786 = score(doc=1848,freq=2.0), product of:
                0.1587864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04534384 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
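The tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output. As a rough sketch, assuming Lucene's classic formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), clause score = queryWeight × fieldWeight), the numbers for this hit can be reproduced from the values shown in the tree:

```python
import math

def classic_idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def clause_score(freq: float, doc_freq: int, max_docs: int,
                 query_norm: float, field_norm: float) -> float:
    tf = math.sqrt(freq)                 # tf(freq) = sqrt(termFreq)
    idf = classic_idf(doc_freq, max_docs)
    query_weight = idf * query_norm      # query-side factor (queryWeight)
    field_weight = tf * idf * field_norm # document-side factor (fieldWeight)
    return query_weight * field_weight

# The two matching clauses for doc 1848, with constants from the explain tree
s_j = clause_score(freq=2.0, doc_freq=5010, max_docs=44218,
                   query_norm=0.04534384, field_norm=0.046875)
s_22 = clause_score(freq=2.0, doc_freq=3622, max_docs=44218,
                    query_norm=0.04534384, field_norm=0.046875)

# coord(2/3): 2 of 3 optional clauses matched; outer coord(1/2) halves again
total = (s_j + s_22) * (2.0 / 3.0) * 0.5
print(round(s_j, 6), round(total, 6))  # ≈ 0.030349 and ≈ 0.022403
```

This is only a reconstruction of the arithmetic the explain tree already shows, not the catalog's actual scoring code; coord factors and norms are taken verbatim from the tree rather than recomputed.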
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  2. Wu, H.; He, J.; Pei, Y.: Scientific impact at the topic level : a case study in computational linguistics (2010) 0.02
    0.019017797 = product of:
      0.038035594 = sum of:
        0.038035594 = product of:
          0.057053387 = sum of:
            0.021646196 = weight(_text_:h in 4103) [ClassicSimilarity], result of:
              0.021646196 = score(doc=4103,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.19214681 = fieldWeight in 4103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4103)
            0.035407193 = weight(_text_:j in 4103) [ClassicSimilarity], result of:
              0.035407193 = score(doc=4103,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.24574696 = fieldWeight in 4103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4103)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
  3. Korman, D.Z.; Mack, E.; Jett, J.; Renear, A.H.: Defining textual entailment (2018) 0.02
    0.018862724 = product of:
      0.03772545 = sum of:
        0.03772545 = product of:
          0.056588173 = sum of:
            0.026239151 = weight(_text_:h in 4284) [ClassicSimilarity], result of:
              0.026239151 = score(doc=4284,freq=4.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.2329171 = fieldWeight in 4284, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4284)
            0.030349022 = weight(_text_:j in 4284) [ClassicSimilarity], result of:
              0.030349022 = score(doc=4284,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.21064025 = fieldWeight in 4284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4284)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Textual entailment is a relationship that obtains between fragments of text when one fragment in some sense implies the other fragment. The automation of textual entailment recognition supports a wide variety of text-based tasks, including information retrieval, information extraction, question answering, text summarization, and machine translation. Much ingenuity has been devoted to developing algorithms for identifying textual entailments, but relatively little to saying what textual entailment actually is. This article is a review of the logical and philosophical issues involved in providing an adequate definition of textual entailment. We show that many natural definitions of textual entailment are refuted by counterexamples, including the most widely cited definition of Dagan et al. We then articulate and defend the following revised definition: T textually entails H =df typically, a human reading T would be justified in inferring the proposition expressed by H from the proposition expressed by T. We also show that textual entailment is context-sensitive, nontransitive, and nonmonotonic.
  4. Soo, J.; Frieder, O.: On searching misspelled collections (2015) 0.01
    0.0067442274 = product of:
      0.013488455 = sum of:
        0.013488455 = product of:
          0.040465362 = sum of:
            0.040465362 = weight(_text_:j in 1862) [ClassicSimilarity], result of:
              0.040465362 = score(doc=1862,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.28085366 = fieldWeight in 1862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  5. Fóris, A.: Network theory and terminology (2013) 0.01
    0.0051195538 = product of:
      0.0102391075 = sum of:
        0.0102391075 = product of:
          0.030717323 = sum of:
            0.030717323 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.030717323 = score(doc=1365,freq=2.0), product of:
                0.1587864 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04534384 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Date
    2. 9.2014 21:22:48
  6. Dolamic, L.; Savoy, J.: Retrieval effectiveness of machine translated queries (2010) 0.01
    0.0050581703 = product of:
      0.010116341 = sum of:
        0.010116341 = product of:
          0.030349022 = sum of:
            0.030349022 = weight(_text_:j in 4102) [ClassicSimilarity], result of:
              0.030349022 = score(doc=4102,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.21064025 = fieldWeight in 4102, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4102)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  7. Panicheva, P.; Cardiff, J.; Rosso, P.: Identifying subjective statements in news titles using a personal sense annotation framework (2013) 0.01
    0.0050581703 = product of:
      0.010116341 = sum of:
        0.010116341 = product of:
          0.030349022 = sum of:
            0.030349022 = weight(_text_:j in 968) [ClassicSimilarity], result of:
              0.030349022 = score(doc=968,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.21064025 = fieldWeight in 968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=968)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  8. Perovšek, M.; Kranjc, J.; Erjavec, T.; Cestnik, B.; Lavrač, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.01
    0.0050581703 = product of:
      0.010116341 = sum of:
        0.010116341 = product of:
          0.030349022 = sum of:
            0.030349022 = weight(_text_:j in 2697) [ClassicSimilarity], result of:
              0.030349022 = score(doc=2697,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.21064025 = fieldWeight in 2697, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2697)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  9. Li, N.; Sun, J.: Improving Chinese term association from the linguistic perspective (2017) 0.01
    0.0050581703 = product of:
      0.010116341 = sum of:
        0.010116341 = product of:
          0.030349022 = sum of:
            0.030349022 = weight(_text_:j in 3381) [ClassicSimilarity], result of:
              0.030349022 = score(doc=3381,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.21064025 = fieldWeight in 3381, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3381)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  10. Lu, C.; Bu, Y.; Wang, J.; Ding, Y.; Torvik, V.; Schnaars, M.; Zhang, C.: Examining scientific writing styles from the perspective of linguistic complexity : a cross-level moderation model (2019) 0.01
    0.0050581703 = product of:
      0.010116341 = sum of:
        0.010116341 = product of:
          0.030349022 = sum of:
            0.030349022 = weight(_text_:j in 5219) [ClassicSimilarity], result of:
              0.030349022 = score(doc=5219,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.21064025 = fieldWeight in 5219, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5219)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  11. Anizi, M.; Dichy, J.: Improving information retrieval in Arabic through a multi-agent approach and a rich lexical resource (2011) 0.00
    0.0042151418 = product of:
      0.0084302835 = sum of:
        0.0084302835 = product of:
          0.02529085 = sum of:
            0.02529085 = weight(_text_:j in 4738) [ClassicSimilarity], result of:
              0.02529085 = score(doc=4738,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.17553353 = fieldWeight in 4738, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4738)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  12. Cruz Díaz, N.P.; Maña López, M.J.; Mata Vázquez, J.; Pachón Álvarez, V.: ¬A machine-learning approach to negation and speculation detection in clinical texts (2012) 0.00
    0.0042151418 = product of:
      0.0084302835 = sum of:
        0.0084302835 = product of:
          0.02529085 = sum of:
            0.02529085 = weight(_text_:j in 283) [ClassicSimilarity], result of:
              0.02529085 = score(doc=283,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.17553353 = fieldWeight in 283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=283)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  13. Carrillo-de-Albornoz, J.; Plaza, L.: ¬An emotion-based model of negation, intensifiers, and modality for polarity and intensity classification (2013) 0.00
    0.0042151418 = product of:
      0.0084302835 = sum of:
        0.0084302835 = product of:
          0.02529085 = sum of:
            0.02529085 = weight(_text_:j in 1005) [ClassicSimilarity], result of:
              0.02529085 = score(doc=1005,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.17553353 = fieldWeight in 1005, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1005)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  14. Malo, P.; Sinha, A.; Korhonen, P.; Wallenius, J.; Takala, P.: Good debt or bad debt : detecting semantic orientations in economic texts (2014) 0.00
    0.0042151418 = product of:
      0.0084302835 = sum of:
        0.0084302835 = product of:
          0.02529085 = sum of:
            0.02529085 = weight(_text_:j in 1226) [ClassicSimilarity], result of:
              0.02529085 = score(doc=1226,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.17553353 = fieldWeight in 1226, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1226)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  15. Savoy, J.: Text representation strategies : an example with the State of the union addresses (2016) 0.00
    0.0042151418 = product of:
      0.0084302835 = sum of:
        0.0084302835 = product of:
          0.02529085 = sum of:
            0.02529085 = weight(_text_:j in 3042) [ClassicSimilarity], result of:
              0.02529085 = score(doc=3042,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.17553353 = fieldWeight in 3042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3042)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  16. Gill, A.J.; Hinrichs-Krapels, S.; Blanke, T.; Grant, J.; Hedges, M.; Tanner, S.: Insight workflow : systematically combining human and computational methods to explore textual data (2017) 0.00
    0.0042151418 = product of:
      0.0084302835 = sum of:
        0.0084302835 = product of:
          0.02529085 = sum of:
            0.02529085 = weight(_text_:j in 3682) [ClassicSimilarity], result of:
              0.02529085 = score(doc=3682,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.17553353 = fieldWeight in 3682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3682)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  17. Reyes Ayala, B.; Knudson, R.; Chen, J.; Cao, G.; Wang, X.: Metadata records machine translation combining multi-engine outputs with limited parallel data (2018) 0.00
    0.0042151418 = product of:
      0.0084302835 = sum of:
        0.0084302835 = product of:
          0.02529085 = sum of:
            0.02529085 = weight(_text_:j in 4010) [ClassicSimilarity], result of:
              0.02529085 = score(doc=4010,freq=2.0), product of:
                0.14407988 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.04534384 = queryNorm
                0.17553353 = fieldWeight in 4010, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4010)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  18. Keselman, A.; Rosemblat, G.; Kilicoglu, H.; Fiszman, M.; Jin, H.; Shin, D.; Rindflesch, T.C.: Adapting semantic natural language processing technology to address information overload in influenza epidemic management (2010) 0.00
    0.0036443267 = product of:
      0.0072886534 = sum of:
        0.0072886534 = product of:
          0.02186596 = sum of:
            0.02186596 = weight(_text_:h in 1312) [ClassicSimilarity], result of:
              0.02186596 = score(doc=1312,freq=4.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.1940976 = fieldWeight in 1312, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1312)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  19. Agarwal, B.; Ramampiaro, H.; Langseth, H.; Ruocco, M.: ¬A deep network model for paraphrase detection in short text messages (2018) 0.00
    0.0036443267 = product of:
      0.0072886534 = sum of:
        0.0072886534 = product of:
          0.02186596 = sum of:
            0.02186596 = weight(_text_:h in 5043) [ClassicSimilarity], result of:
              0.02186596 = score(doc=5043,freq=4.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.1940976 = fieldWeight in 5043, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5043)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
  20. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: ¬A bibliometric and network analysis of the field of computational linguistics (2016) 0.00
    0.0036076994 = product of:
      0.007215399 = sum of:
        0.007215399 = product of:
          0.021646196 = sum of:
            0.021646196 = weight(_text_:h in 2764) [ClassicSimilarity], result of:
              0.021646196 = score(doc=2764,freq=2.0), product of:
                0.11265446 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.04534384 = queryNorm
                0.19214681 = fieldWeight in 2764, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2764)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The ACL Anthology is a large collection of research papers in computational linguistics. Citation data were obtained using text extraction from a collection of PDF files with significant manual postprocessing performed to clean up the results. Manual annotation of the references was then performed to complete the citation network. We analyzed the networks of paper citations, author citations, and author collaborations in an attempt to identify the most central papers and authors. The analysis includes general network statistics, PageRank, metrics across publication years and venues, the impact factor and h-index, as well as other measures.