Search (42 results, page 1 of 3)

  • × year_i:[2010 TO 2020}
  • × theme_ss:"Computerlinguistik"
  1. Kocijan, K.: Visualizing natural language resources (2015) 0.01
    0.014024019 = product of:
      0.028048038 = sum of:
        0.028048038 = product of:
          0.084144115 = sum of:
            0.084144115 = weight(_text_:c in 2995) [ClassicSimilarity], result of:
              0.084144115 = score(doc=2995,freq=4.0), product of:
                0.15612034 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.045260075 = queryNorm
                0.5389696 = fieldWeight in 2995, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2995)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl u. C. Wolff
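The score breakdown shown for this result follows Lucene's ClassicSimilarity (TF-IDF) formula: tf is the square root of the term frequency, idf is 1 + ln(maxDocs / (docFreq + 1)), and the final score multiplies queryWeight (idf x queryNorm) by fieldWeight (tf x idf x fieldNorm) and the coord factors. A minimal sketch that reproduces the arithmetic — values are computed here in float64, so the last digits can differ slightly from Lucene's float32 explain output:

```python
import math

def classic_idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def explain_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
    # Rebuild the explain tree bottom-up:
    #   fieldWeight = tf * idf * fieldNorm
    #   queryWeight = idf * queryNorm
    #   score = queryWeight * fieldWeight * coord(...) * coord(...)
    idf = classic_idf(doc_freq, max_docs)
    tf = math.sqrt(freq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    score = query_weight * field_weight
    for c in coords:
        score *= c
    return score

# Numbers taken from the first result's explain tree (_text_:c in doc 2995)
score = explain_score(freq=4.0, doc_freq=3817, max_docs=44218,
                      query_norm=0.045260075, field_norm=0.078125,
                      coords=[1.0 / 3.0, 0.5])
print(score)  # close to the displayed 0.014024019
```

The coord factors (1/3 and 1/2) down-weight documents that match only one of several query clauses, which is why the displayed score is a quarter of the raw term weight.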
  2. Lu, C.; Bu, Y.; Wang, J.; Ding, Y.; Torvik, V.; Schnaars, M.; Zhang, C.: Examining scientific writing styles from the perspective of linguistic complexity : a cross-level moderation model (2019) 0.01
  3. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.01
    Date
    22. 3.2015 9:30:24
  4. Becks, D.; Schulz, J.M.: Domänenübergreifende Phrasenextraktion mithilfe einer lexikonunabhängigen Analysekomponente (2010) 0.01
    Source
    Information und Wissen: global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011. Hrsg.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
  5. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.01
    Date
    10. 1.2013 19:22:47
  6. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.01
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  7. Vasalou, A.; Gill, A.J.; Mazanderani, F.; Papoutsi, C.; Joinson, A.: Privacy dictionary : a new resource for the automated content analysis of privacy (2011) 0.01
  8. Ramisch, C.; Villavicencio, A.; Kordoni, V.: Introduction to the special issue on multiword expressions : from theory to practice and use (2013) 0.01
  9. Rosemblat, G.; Resnick, M.P.; Auston, I.; Shin, D.; Sneiderman, C.; Fiszman, M.; Rindflesch, T.C.: Extending SemRep to the public health domain (2013) 0.01
  10. Anguiano Peña, G.; Naumis Peña, C.: Method for selecting specialized terms from a general language corpus (2015) 0.01
  11. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2015) 0.01
  12. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2013) 0.01
  13. Snajder, J.: Distributional semantics of multi-word expressions (2013) 0.01
    Content
    Slides from a presentation at the COST Action IC1207 PARSEME meeting, Warsaw, September 16, 2013. See also: Snajder, J., P. Almic: Modeling semantic compositionality of Croatian multiword expressions. In: Informatica. 39(2015) H.3, S.301-309.
  14. Engerer, V.: Informationswissenschaft und Linguistik : kurze Geschichte eines fruchtbaren interdisziplinären Verhältnisses in drei Akten (2012) 0.01
    Source
    SDV - Sprache und Datenverarbeitung. International journal for language data processing. 36(2012) H.2, S.71-91
  15. Fóris, A.: Network theory and terminology (2013) 0.01
    Date
    2. 9.2014 21:22:48
  16. Malo, P.; Sinha, A.; Korhonen, P.; Wallenius, J.; Takala, P.: Good debt or bad debt : detecting semantic orientations in economic texts (2014) 0.00
    Abstract
    The use of robo-readers to analyze news texts is an emerging technology trend in computational finance. Recent research has developed sophisticated financial polarity lexicons for investigating how financial sentiments relate to future company performance. However, based on experience from fields that commonly analyze sentiment, it is well known that the overall semantic orientation of a sentence may differ from that of individual words. This article investigates how semantic orientations can be better detected in financial and economic news by accommodating the overall phrase-structure information and domain-specific use of language. Our three main contributions are the following: (a) a human-annotated finance phrase bank that can be used for training and evaluating alternative models; (b) a technique to enhance financial lexicons with attributes that help to identify expected direction of events that affect sentiment; and (c) a linearized phrase-structure model for detecting contextual semantic orientations in economic texts. The relevance of the newly added lexicon features and the benefit of using the proposed learning algorithm are demonstrated in a comparative study against general sentiment models as well as the popular word frequency models used in recent financial studies. The proposed framework is parsimonious and avoids the explosion in feature space caused by the use of conventional n-gram features.
  17. Lian, T.; Yu, C.; Wang, W.; Yuan, Q.; Hou, Z.: Doctoral dissertations on tourism in China : a co-word analysis (2016) 0.00
  18. Doval, Y.; Gómez-Rodríguez, C.: Comparing neural- and N-gram-based language models for word segmentation (2019) 0.00
  19. Korman, D.Z.; Mack, E.; Jett, J.; Renear, A.H.: Defining textual entailment (2018) 0.00
    Abstract
    Textual entailment is a relationship that obtains between fragments of text when one fragment in some sense implies the other fragment. The automation of textual entailment recognition supports a wide variety of text-based tasks, including information retrieval, information extraction, question answering, text summarization, and machine translation. Much ingenuity has been devoted to developing algorithms for identifying textual entailments, but relatively little to saying what textual entailment actually is. This article is a review of the logical and philosophical issues involved in providing an adequate definition of textual entailment. We show that many natural definitions of textual entailment are refuted by counterexamples, including the most widely cited definition of Dagan et al. We then articulate and defend the following revised definition: T textually entails H?=?df typically, a human reading T would be justified in inferring the proposition expressed by H from the proposition expressed by T. We also show that textual entailment is context-sensitive, nontransitive, and nonmonotonic.
  20. Heid, U.: Computerlinguistik zwischen Informationswissenschaft und multilingualer Kommunikation (2010) 0.00
    Source
    Information - Wissenschaft und Praxis. 61(2010) H.6/7, S.361-366

Languages

  • e 28
  • d 14

Types

  • a 31
  • el 6
  • m 3
  • x 3
  • s 1