Search (17 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • year_i:[2010 TO 2020}
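
The two active facets above are Solr/Lucene filter queries (theme_ss is a string facet field, year_i an integer field; [2010 TO 2020} is range syntax with an inclusive lower and exclusive upper bound), and the per-result score breakdowns below are Lucene "explain" output. A minimal sketch of how such a filtered search could be reproduced against a standard Solr select handler follows; the endpoint URL, core name, and main query q are placeholders, not the catalogue's actual configuration.

    # Minimal sketch; endpoint and main query are assumed, only the fq values
    # are taken from the facet filters shown above.
    import requests

    SOLR_URL = "http://localhost:8983/solr/catalogue/select"  # hypothetical endpoint

    params = {
        "q": "*:*",                               # placeholder main query
        "fq": [
            'theme_ss:"Computerlinguistik"',      # facet filter from above
            "year_i:[2010 TO 2020}",              # 2010 inclusive, 2020 exclusive
        ],
        "rows": 20,
        "debugQuery": "true",                     # asks Solr for per-document score explanations
        "wt": "json",
    }

    docs = requests.get(SOLR_URL, params=params).json()["response"]["docs"]
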
  1. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.00
    0.0041475925 = product of:
      0.0103689805 = sum of:
        0.0065671084 = product of:
          0.019701324 = sum of:
            0.019701324 = weight(_text_:f in 4217) [ClassicSimilarity], result of:
              0.019701324 = score(doc=4217,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.17614852 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.33333334 = coord(1/3)
        0.0038018718 = product of:
          0.015207487 = sum of:
            0.015207487 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.015207487 = score(doc=4217,freq=2.0), product of:
                0.09826468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028060954 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.25 = coord(1/4)
      0.4 = coord(2/5)
    
    Date
    22. 1.2018 11:32:44
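
The score breakdown attached to each entry is a Lucene ClassicSimilarity (TF-IDF) explain tree. Reading entry 1's tree from the inside out: tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, each matched term contributes queryWeight * fieldWeight, and the coord factors down-weight documents that match only some query clauses. The worked check below recomputes entry 1's total from the constants shown above; it is plain arithmetic, not library code.

    from math import sqrt

    # Recomputes the ClassicSimilarity breakdown shown for entry 1; all
    # constants (freq, idf, queryNorm, fieldNorm, coord) are copied from it.
    def term_score(freq, idf, query_norm, field_norm):
        tf = sqrt(freq)                        # 1.4142135 for freq=2.0
        query_weight = idf * query_norm        # e.g. 3.985786 * 0.028060954 = 0.11184496
        field_weight = tf * idf * field_norm   # e.g. 1.4142135 * 3.985786 * 0.03125 = 0.17614852
        return query_weight * field_weight     # weight(_text_:f in 4217) = 0.019701324

    QUERY_NORM = 0.028060954

    s_f  = term_score(2.0, 3.985786,  QUERY_NORM, 0.03125) * (1 / 3)  # coord(1/3)
    s_22 = term_score(2.0, 3.5018296, QUERY_NORM, 0.03125) * (1 / 4)  # coord(1/4)

    total = (s_f + s_22) * (2 / 5)  # coord(2/5): 2 of 5 query clauses matched
    print(total)                    # ~0.0041475925, the value at the top of entry 1's tree
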
  2. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.00
    0.0032835545 = product of:
      0.016417772 = sum of:
        0.016417772 = product of:
          0.04925331 = sum of:
            0.04925331 = weight(_text_:f in 2027) [ClassicSimilarity], result of:
              0.04925331 = score(doc=2027,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.4403713 = fieldWeight in 2027, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2027)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    The Semantic Web: latest advances and new domains. 12th European Semantic Web Conference, ESWC 2015, Portoroz, Slovenia, May 31 - June 4, 2015. Proceedings. Eds.: F. Gandon et al.
  3. Kocijan, K.: Visualizing natural language resources (2015) 0.00
    0.0032835545 = product of:
      0.016417772 = sum of:
        0.016417772 = product of:
          0.04925331 = sum of:
            0.04925331 = weight(_text_:f in 2995) [ClassicSimilarity], result of:
              0.04925331 = score(doc=2995,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.4403713 = fieldWeight in 2995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2995)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl, and C. Wolff
  4. Cruz Díaz, N.P.; Maña López, M.J.; Mata Vázquez, J.; Pachón Álvarez, V.: A machine-learning approach to negation and speculation detection in clinical texts (2012) 0.00
    0.0028436414 = product of:
      0.0142182065 = sum of:
        0.0142182065 = product of:
          0.04265462 = sum of:
            0.04265462 = weight(_text_:f in 283) [ClassicSimilarity], result of:
              0.04265462 = score(doc=283,freq=6.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.38137275 = fieldWeight in 283, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=283)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Abstract
    Detecting negative and speculative information is essential in most biomedical text-mining tasks, where these language forms are used to express impressions, hypotheses, or explanations of experimental results. Our research is focused on developing a system based on machine-learning techniques that identifies negation and speculation signals and their scope in clinical texts. The proposed system works in two consecutive phases: first, a classifier decides whether each token in a sentence is a negation/speculation signal or not; then another classifier determines, at sentence level, which tokens are affected by the signals previously identified. The system was trained and evaluated on the clinical texts of the BioScope corpus, a freely available resource consisting of medical and biological texts: full-length articles, scientific abstracts, and clinical reports. Our system's results were compared with those of two other systems, one based on regular expressions and the other on machine learning, and outperformed both. In the signal detection task, the F-score was 97.3% for negation and 94.9% for speculation. In the scope-finding task, a token was correctly classified if it had been properly identified as being inside or outside the scope of all the negation signals present in the sentence. Our proposal achieved an F-score of 93.2% for negation and 80.9% for speculation. Additionally, the percentage of correct scopes (those with all their tokens correctly classified) was evaluated, yielding F-scores of 90.9% for negation and 71.9% for speculation.
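
The abstract above describes a two-phase architecture: a token-level classifier first flags negation/speculation signals, then a second classifier decides which tokens fall inside their scope. The sketch below shows that pipeline shape with scikit-learn; the features, classifier choice, and toy training sentence are illustrative assumptions and do not reproduce the authors' system or the BioScope setup.

    # Two-phase sketch: phase 1 tags signal tokens, phase 2 tags in-scope tokens.
    # Features, model choice, and the toy data are assumptions for illustration.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def features(tokens, i, signal_tags=None):
        f = {
            "word": tokens[i].lower(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
            "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
        }
        if signal_tags is not None:  # phase 2 also sees the phase-1 decisions
            f["dist_to_signal"] = min(
                (abs(i - j) for j, t in enumerate(signal_tags) if t == 1), default=99
            )
        return f

    sent        = ["no", "evidence", "of", "metastasis", "was", "found", "."]
    signal_gold = [1, 0, 0, 0, 0, 0, 0]   # "no" is a negation signal
    scope_gold  = [1, 1, 1, 1, 0, 0, 0]   # its scope: "no evidence of metastasis"

    signal_clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=200))
    signal_clf.fit([features(sent, i) for i in range(len(sent))], signal_gold)

    scope_clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=200))
    scope_clf.fit([features(sent, i, signal_gold) for i in range(len(sent))], scope_gold)

    signals = signal_clf.predict([features(sent, i) for i in range(len(sent))])
    scopes  = scope_clf.predict([features(sent, i, signals) for i in range(len(sent))])
    print(list(zip(sent, signals, scopes)))
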
  5. Szpakowicz, S.; Bond, F.; Nakov, P.; Kim, S.N.: On the semantics of noun compounds (2013) 0.00
    0.0022984878 = product of:
      0.011492439 = sum of:
        0.011492439 = product of:
          0.034477316 = sum of:
            0.034477316 = weight(_text_:f in 120) [ClassicSimilarity], result of:
              0.034477316 = score(doc=120,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.3082599 = fieldWeight in 120, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=120)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  6. Colace, F.; Santo, M. De; Greco, L.; Napoletano, P.: Weighted word pairs for query expansion (2015) 0.00
    0.0022984878 = product of:
      0.011492439 = sum of:
        0.011492439 = product of:
          0.034477316 = sum of:
            0.034477316 = weight(_text_:f in 2687) [ClassicSimilarity], result of:
              0.034477316 = score(doc=2687,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.3082599 = fieldWeight in 2687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2687)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  7. Rötzer, F.: Kann KI mit KI generierte Texte erkennen? (2019) 0.00
    0.0022984878 = product of:
      0.011492439 = sum of:
        0.011492439 = product of:
          0.034477316 = sum of:
            0.034477316 = weight(_text_:f in 3977) [ClassicSimilarity], result of:
              0.034477316 = score(doc=3977,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.3082599 = fieldWeight in 3977, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3977)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  8. Sünkler, S.; Kerkmann, F.; Schultheiß, S.: Ok Google . the end of search as we know it : sprachgesteuerte Websuche im Test (2018) 0.00
    0.0022984878 = product of:
      0.011492439 = sum of:
        0.011492439 = product of:
          0.034477316 = sum of:
            0.034477316 = weight(_text_:f in 5626) [ClassicSimilarity], result of:
              0.034477316 = score(doc=5626,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.3082599 = fieldWeight in 5626, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5626)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  9. Vasalou, A.; Gill, A.J.; Mazanderani, F.; Papoutsi, C.; Joinson, A.: Privacy dictionary : a new resource for the automated content analysis of privacy (2011) 0.00
    0.0019701326 = product of:
      0.009850662 = sum of:
        0.009850662 = product of:
          0.029551985 = sum of:
            0.029551985 = weight(_text_:f in 4915) [ClassicSimilarity], result of:
              0.029551985 = score(doc=4915,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.26422277 = fieldWeight in 4915, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4915)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  10. Ghazzawi, N.; Robichaud, B.; Drouin, P.; Sadat, F.: Automatic extraction of specialized verbal units (2018) 0.00
    0.0019701326 = product of:
      0.009850662 = sum of:
        0.009850662 = product of:
          0.029551985 = sum of:
            0.029551985 = weight(_text_:f in 4094) [ClassicSimilarity], result of:
              0.029551985 = score(doc=4094,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.26422277 = fieldWeight in 4094, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4094)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  11. Collovini de Abreu, S.; Vieira, R.: RelP: Portuguese open relation extraction (2017) 0.00
    0.0016417772 = product of:
      0.008208886 = sum of:
        0.008208886 = product of:
          0.024626656 = sum of:
            0.024626656 = weight(_text_:f in 3621) [ClassicSimilarity], result of:
              0.024626656 = score(doc=3621,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.22018565 = fieldWeight in 3621, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3621)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Abstract
    Natural language texts are valuable data sources in many human activities. NLP techniques are widely used to help match the right information to specific needs. In this paper, we present one such technique: relation extraction from texts. This task aims at identifying and classifying semantic relations that occur between entities in a text. For example, the sentence "Roberto Marinho is the founder of Rede Globo" expresses a relation between "Roberto Marinho" and "Rede Globo." This work presents a system for Portuguese Open Relation Extraction, named RelP, which extracts any relation descriptor expressing an explicit relation between named entities in the organisation domain by applying Conditional Random Fields. For implementing RelP, we define the representation scheme, features based on previous work, and a reference corpus. RelP achieved state-of-the-art results for open relation extraction; the F-measure was around 60% for the named entity types person, organisation, and place. To aid interpretation of the output, we present a way of organizing the extracted relation descriptors. This organization can be useful to classify relation types, to cluster the entities involved in a common relation, and to populate datasets.
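
The abstract above treats relation extraction as sequence labelling of the relation descriptor between two named entities ("is the founder of" linking "Roberto Marinho" and "Rede Globo") with Conditional Random Fields. The sketch below shows that setup using the sklearn-crfsuite package; the BIO tag scheme, feature set, part-of-speech tags, and single toy sentence are illustrative assumptions and do not reproduce RelP's features, parameters, or reference corpus.

    # CRF sequence labelling of a relation descriptor between two named entities.
    # Tags, features, and the toy example are assumptions for illustration only.
    import sklearn_crfsuite

    def word_features(sent, i):
        word, pos = sent[i]
        return {
            "word.lower": word.lower(),
            "pos": pos,
            "prev_pos": sent[i - 1][1] if i > 0 else "BOS",
            "next_pos": sent[i + 1][1] if i + 1 < len(sent) else "EOS",
        }

    # The example sentence from the abstract, with assumed part-of-speech tags.
    sent = [("Roberto", "PROPN"), ("Marinho", "PROPN"), ("is", "VERB"),
            ("the", "DET"), ("founder", "NOUN"), ("of", "ADP"),
            ("Rede", "PROPN"), ("Globo", "PROPN")]
    tags = ["O", "O", "B-REL", "I-REL", "I-REL", "I-REL", "O", "O"]  # descriptor span

    X = [[word_features(sent, i) for i in range(len(sent))]]
    y = [tags]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X)[0])  # recovers the B-REL/I-REL descriptor span on the toy data
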
  12. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.00
    0.0015207487 = product of:
      0.0076037436 = sum of:
        0.0076037436 = product of:
          0.030414974 = sum of:
            0.030414974 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.030414974 = score(doc=1490,freq=2.0), product of:
                0.09826468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028060954 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.25 = coord(1/4)
      0.2 = coord(1/5)
    
    Date
    22. 3.2015 9:30:24
  13. Belbachir, F.; Boughanem, M.: Using language models to improve opinion detection (2018) 0.00
    0.0013134218 = product of:
      0.0065671084 = sum of:
        0.0065671084 = product of:
          0.019701324 = sum of:
            0.019701324 = weight(_text_:f in 5044) [ClassicSimilarity], result of:
              0.019701324 = score(doc=5044,freq=2.0), product of:
                0.11184496 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.028060954 = queryNorm
                0.17614852 = fieldWeight in 5044, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5044)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  14. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.00
    0.0011405615 = product of:
      0.0057028076 = sum of:
        0.0057028076 = product of:
          0.02281123 = sum of:
            0.02281123 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.02281123 = score(doc=563,freq=2.0), product of:
                0.09826468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028060954 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.25 = coord(1/4)
      0.2 = coord(1/5)
    
    Date
    10. 1.2013 19:22:47
  15. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.00
    0.0011405615 = product of:
      0.0057028076 = sum of:
        0.0057028076 = product of:
          0.02281123 = sum of:
            0.02281123 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.02281123 = score(doc=1848,freq=2.0), product of:
                0.09826468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028060954 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.25 = coord(1/4)
      0.2 = coord(1/5)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  16. Fóris, A.: Network theory and terminology (2013) 0.00
    9.5046795E-4 = product of:
      0.00475234 = sum of:
        0.00475234 = product of:
          0.01900936 = sum of:
            0.01900936 = weight(_text_:22 in 1365) [ClassicSimilarity], result of:
              0.01900936 = score(doc=1365,freq=2.0), product of:
                0.09826468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028060954 = queryNorm
                0.19345059 = fieldWeight in 1365, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1365)
          0.25 = coord(1/4)
      0.2 = coord(1/5)
    
    Date
    2. 9.2014 21:22:48
  17. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.00
    6.653276E-4 = product of:
      0.003326638 = sum of:
        0.003326638 = product of:
          0.013306552 = sum of:
            0.013306552 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
              0.013306552 = score(doc=3807,freq=2.0), product of:
                0.09826468 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028060954 = queryNorm
                0.1354154 = fieldWeight in 3807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3807)
          0.25 = coord(1/4)
      0.2 = coord(1/5)
    
    Date
    20. 1.2015 18:30:22