Search (199 results, page 1 of 10)

  • × theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.09784776 = sum of:
      0.05348208 = product of:
        0.21392833 = sum of:
          0.21392833 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.21392833 = score(doc=562,freq=2.0), product of:
              0.38064316 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.044897694 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.04436568 = product of:
        0.06654852 = sum of:
          0.030050414 = weight(_text_:j in 562) [ClassicSimilarity], result of:
            0.030050414 = score(doc=562,freq=2.0), product of:
              0.14266226 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.044897694 = queryNorm
              0.21064025 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
          0.036498103 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.036498103 = score(doc=562,freq=2.0), product of:
              0.15722407 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044897694 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.6666667 = coord(2/3)
    
    Content
Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
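The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = sqrt(termFreq) x idf x fieldNorm, and a coord factor scales the sum by the fraction of query clauses that matched. A minimal Python sketch reproducing the figures of entry 1 (the helper names are illustrative, not Lucene API):

    import math

    def classic_idf(doc_freq, max_docs):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
        tf = math.sqrt(freq)                   # tf(freq=2.0) = 1.4142135
        idf = classic_idf(doc_freq, max_docs)  # idf(docFreq=24) = 8.478011
        query_weight = idf * query_norm        # 0.38064316
        field_weight = tf * idf * field_norm   # 0.56201804
        return query_weight * field_weight     # 0.21392833

    # Term "3a" in document 562 (entry 1 above):
    score = term_score(freq=2.0, doc_freq=24, max_docs=44218,
                       field_norm=0.046875, query_norm=0.044897694)
    print(score)         # ~0.2139
    print(score * 0.25)  # coord(1/4) applied -> ~0.0535, the first summand above

queryNorm is a global normalisation factor shared by every term of the query, which is why the same 0.044897694 appears in all of the breakdowns below.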
  2. Vichot, F.; Wolinski, F.; Tomeh, J.; Guennou, S.; Dillet, B.; Aydjian, S.: High precision hypertext navigation based on NLP automatic extractions (1997) 0.06
    0.06461278 = product of:
      0.12922557 = sum of:
        0.12922557 = product of:
          0.19383834 = sum of:
            0.060100827 = weight(_text_:j in 733) [ClassicSimilarity], result of:
              0.060100827 = score(doc=733,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.4212805 = fieldWeight in 733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.09375 = fieldNorm(doc=733)
            0.13373752 = weight(_text_:f in 733) [ClassicSimilarity], result of:
              0.13373752 = score(doc=733,freq=4.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.74733484 = fieldWeight in 733, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.09375 = fieldNorm(doc=733)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
  3. Gonzalo, J.; Verdejo, F.; Peters, C.; Calzolari, N.: Applying EuroWordNet to cross-language text retrieval (1998) 0.06
    0.06014849 = product of:
      0.12029698 = sum of:
        0.12029698 = product of:
          0.18044546 = sum of:
            0.07011763 = weight(_text_:j in 6445) [ClassicSimilarity], result of:
              0.07011763 = score(doc=6445,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.4914939 = fieldWeight in 6445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6445)
            0.110327825 = weight(_text_:f in 6445) [ClassicSimilarity], result of:
              0.110327825 = score(doc=6445,freq=2.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.6165198 = fieldWeight in 6445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6445)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
  4. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.05
    0.04742995 = product of:
      0.0948599 = sum of:
        0.0948599 = sum of:
          0.02504201 = weight(_text_:j in 1171) [ClassicSimilarity], result of:
            0.02504201 = score(doc=1171,freq=2.0), product of:
              0.14266226 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.044897694 = queryNorm
              0.17553353 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
          0.039402798 = weight(_text_:f in 1171) [ClassicSimilarity], result of:
            0.039402798 = score(doc=1171,freq=2.0), product of:
              0.1789526 = queryWeight, product of:
                3.985786 = idf(docFreq=2232, maxDocs=44218)
                0.044897694 = queryNorm
              0.22018565 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.985786 = idf(docFreq=2232, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
          0.03041509 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
            0.03041509 = score(doc=1171,freq=2.0), product of:
              0.15722407 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044897694 = queryNorm
              0.19345059 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
      0.5 = coord(1/2)
    
    Date
    23.11.2023 19:07:22
  5. Chibout, K.; Vilnat, A.: Primitive sémantiques, classification des verbes et polysémie (1999) 0.04
    0.042963207 = product of:
      0.08592641 = sum of:
        0.08592641 = product of:
          0.12888962 = sum of:
            0.05008402 = weight(_text_:j in 6229) [ClassicSimilarity], result of:
              0.05008402 = score(doc=6229,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.35106707 = fieldWeight in 6229, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6229)
            0.078805596 = weight(_text_:f in 6229) [ClassicSimilarity], result of:
              0.078805596 = score(doc=6229,freq=2.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.4403713 = fieldWeight in 6229, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6229)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Language
    f
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
  6. Rieger, F.: Lügende Computer (2023) 0.04
    0.037236206 = product of:
      0.07447241 = sum of:
        0.07447241 = product of:
          0.11170861 = sum of:
            0.06304447 = weight(_text_:f in 912) [ClassicSimilarity], result of:
              0.06304447 = score(doc=912,freq=2.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.35229704 = fieldWeight in 912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0625 = fieldNorm(doc=912)
            0.04866414 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
              0.04866414 = score(doc=912,freq=2.0), product of:
                0.15722407 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044897694 = queryNorm
                0.30952093 = fieldWeight in 912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=912)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    16. 3.2023 19:22:55
  7. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.04
    0.0369714 = product of:
      0.0739428 = sum of:
        0.0739428 = product of:
          0.1109142 = sum of:
            0.05008402 = weight(_text_:j in 1463) [ClassicSimilarity], result of:
              0.05008402 = score(doc=1463,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.35106707 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
            0.06083018 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.06083018 = score(doc=1463,freq=2.0), product of:
                0.15722407 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044897694 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  8. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.04
    0.036406897 = product of:
      0.072813794 = sum of:
        0.072813794 = product of:
          0.10922068 = sum of:
            0.078805596 = weight(_text_:f in 190) [ClassicSimilarity], result of:
              0.078805596 = score(doc=190,freq=8.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.4403713 = fieldWeight in 190, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=190)
            0.03041509 = weight(_text_:22 in 190) [ClassicSimilarity], result of:
              0.03041509 = score(doc=190,freq=2.0), product of:
                0.15722407 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044897694 = queryNorm
                0.19345059 = fieldWeight in 190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=190)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Classification
    Spr F 510
    Spr F 87 / Lexikographie
    Date
    14. 4.2007 10:04:22
    SBB
    Spr F 510
    Spr F 87 / Lexikographie
  9. Vazov, N.: Identification des différentes structures temporelles dans des textes et leur rôle dans le raisonnement temporel (1999) 0.03
    0.034370564 = product of:
      0.06874113 = sum of:
        0.06874113 = product of:
          0.10311169 = sum of:
            0.04006722 = weight(_text_:j in 6203) [ClassicSimilarity], result of:
              0.04006722 = score(doc=6203,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.28085366 = fieldWeight in 6203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6203)
            0.06304447 = weight(_text_:f in 6203) [ClassicSimilarity], result of:
              0.06304447 = score(doc=6203,freq=2.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.35229704 = fieldWeight in 6203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6203)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Language
    f
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
  10. Ferret, O.; Grau, B.; Masson, N.: Utilisation d'un réseau de cooccurrences lexicales pour améliorer une analyse thématique fondée sur la distribution des mots (1999) 0.03
    0.034370564 = product of:
      0.06874113 = sum of:
        0.06874113 = product of:
          0.10311169 = sum of:
            0.04006722 = weight(_text_:j in 6295) [ClassicSimilarity], result of:
              0.04006722 = score(doc=6295,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.28085366 = fieldWeight in 6295, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6295)
            0.06304447 = weight(_text_:f in 6295) [ClassicSimilarity], result of:
              0.06304447 = score(doc=6295,freq=2.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.35229704 = fieldWeight in 6295, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6295)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Language
    f
    Source
    Organisation des connaissances en vue de leur intégration dans les systèmes de représentation et de recherche d'information. Ed.: J. Maniez, et al
  11. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.03
    0.03258168 = product of:
      0.06516336 = sum of:
        0.06516336 = product of:
          0.09774503 = sum of:
            0.055163912 = weight(_text_:f in 156) [ClassicSimilarity], result of:
              0.055163912 = score(doc=156,freq=2.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.3082599 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
            0.042581122 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.042581122 = score(doc=156,freq=2.0), product of:
                0.15722407 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044897694 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    8. 3.2007 19:55:22
    Source
    Context: nature, impact and role. 5th International Conference on Conceptions of Library and Information Sciences, CoLIS 2005 Glasgow, UK, June 2005. Ed. by F. Crestani and I. Ruthven
  12. Cruz Díaz, N.P.; Maña López, M.J.; Mata Vázquez, J.; Pachón Álvarez, V.: ¬A machine-learning approach to negation and speculation detection in clinical texts (2012) 0.03
    0.031096553 = product of:
      0.062193107 = sum of:
        0.062193107 = product of:
          0.09328966 = sum of:
            0.02504201 = weight(_text_:j in 283) [ClassicSimilarity], result of:
              0.02504201 = score(doc=283,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.17553353 = fieldWeight in 283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=283)
            0.068247646 = weight(_text_:f in 283) [ClassicSimilarity], result of:
              0.068247646 = score(doc=283,freq=6.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.38137275 = fieldWeight in 283, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=283)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Detecting negative and speculative information is essential in most biomedical text-mining tasks where these language forms are used to express impressions, hypotheses, or explanations of experimental results. Our research is focused on developing a system based on machine-learning techniques that identifies negation and speculation signals and their scope in clinical texts. The proposed system works in two consecutive phases: first, a classifier decides whether each token in a sentence is a negation/speculation signal or not. Then another classifier determines, at sentence level, the tokens which are affected by the signals previously identified. The system was trained and evaluated on the clinical texts of the BioScope corpus, a freely available resource consisting of medical and biological texts: full-length articles, scientific abstracts, and clinical reports. The results obtained by our system were compared with those of two different systems, one based on regular expressions and the other based on machine learning. Our system's results outperformed the results obtained by these two systems. In the signal detection task, the F-score value was 97.3% in negation and 94.9% in speculation. In the scope-finding task, a token was correctly classified if it had been properly identified as being inside or outside the scope of all the negation signals present in the sentence. Our proposal showed an F score of 93.2% in negation and 80.9% in speculation. Additionally, the percentage of correct scopes (those with all their tokens correctly classified) was evaluated obtaining F scores of 90.9% in negation and 71.9% in speculation.
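The two consecutive phases described here (a token-level classifier that flags negation/speculation cues, followed by a classifier that marks which tokens lie inside each cue's scope) can be sketched as follows. This is a generic illustration with assumed feature and model choices (scikit-learn logistic regression over simple lexical features and a toy training sentence), not the authors' actual system or the BioScope corpus:

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def token_features(tokens, i):
        # Simple lexical features for token i (illustrative only).
        return {
            "word": tokens[i].lower(),
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
            "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
        }

    # Phase 1: is a token a negation/speculation cue?
    # Phase 2: given a cue, is a token inside that cue's scope?
    cue_clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
    scope_clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))

    # Toy training data standing in for an annotated corpus.
    sent = "no evidence of tumor was found".split()
    cue_labels = [1, 0, 0, 0, 0, 0]    # "no" is the cue
    scope_labels = [0, 1, 1, 1, 0, 0]  # tokens governed by the cue

    X = [token_features(sent, i) for i in range(len(sent))]
    cue_clf.fit(X, cue_labels)
    scope_clf.fit([dict(f, cue="no") for f in X], scope_labels)

    test = "no sign of infection was observed".split()
    Xt = [token_features(test, i) for i in range(len(test))]
    cues = cue_clf.predict(Xt)
    if cues.any():
        cue_word = test[list(cues).index(1)]
        scope = scope_clf.predict([dict(f, cue=cue_word) for f in Xt])
        print(list(zip(test, scope)))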
  13. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.03
    0.030720592 = product of:
      0.061441183 = sum of:
        0.061441183 = product of:
          0.092161775 = sum of:
            0.049580652 = weight(_text_:j in 1361) [ClassicSimilarity], result of:
              0.049580652 = score(doc=1361,freq=4.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.34753868 = fieldWeight in 1361, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1361)
            0.042581122 = weight(_text_:22 in 1361) [ClassicSimilarity], result of:
              0.042581122 = score(doc=1361,freq=2.0), product of:
                0.15722407 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044897694 = queryNorm
                0.2708308 = fieldWeight in 1361, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1361)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    6. 1.1999 10:22:07
    Source
    Wissensorganisation im Wandel: Dezimalklassifikation - Thesaurusfragen - Warenklassifikation. Proc. 11. Jahrestagung der Gesellschaft für Klassifikation, Aachen, 29.6.-1.7.1987. Hrsg.: H.-J. Hermes u. J. Hölzl
  14. Bager, J.: ¬Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.03
    0.029577121 = product of:
      0.059154242 = sum of:
        0.059154242 = product of:
          0.08873136 = sum of:
            0.04006722 = weight(_text_:j in 835) [ClassicSimilarity], result of:
              0.04006722 = score(doc=835,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.28085366 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
            0.04866414 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
              0.04866414 = score(doc=835,freq=2.0), product of:
                0.15722407 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044897694 = queryNorm
                0.30952093 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    29.12.2022 18:22:55
  15. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.02674104 = product of:
      0.05348208 = sum of:
        0.05348208 = product of:
          0.21392833 = sum of:
            0.21392833 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.21392833 = score(doc=862,freq=2.0), product of:
                0.38064316 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044897694 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  16. Godby, J.: WordSmith research project bridges gap between tokens and indexes (1998) 0.03
    0.02587998 = product of:
      0.05175996 = sum of:
        0.05175996 = product of:
          0.07763994 = sum of:
            0.035058815 = weight(_text_:j in 4729) [ClassicSimilarity], result of:
              0.035058815 = score(doc=4729,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.24574696 = fieldWeight in 4729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4729)
            0.042581122 = weight(_text_:22 in 4729) [ClassicSimilarity], result of:
              0.042581122 = score(doc=4729,freq=2.0), product of:
                0.15722407 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044897694 = queryNorm
                0.2708308 = fieldWeight in 4729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4729)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    OCLC newsletter. 1998, no.234, Jul/Aug, S.22-24
  17. Natural language processing and speech technology : Results of the 3rd KONVENS Conference, Bielefeld, October 1996 (1996) 0.03
    0.025777921 = product of:
      0.051555842 = sum of:
        0.051555842 = product of:
          0.07733376 = sum of:
            0.030050414 = weight(_text_:j in 7291) [ClassicSimilarity], result of:
              0.030050414 = score(doc=7291,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.21064025 = fieldWeight in 7291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7291)
            0.04728335 = weight(_text_:f in 7291) [ClassicSimilarity], result of:
              0.04728335 = score(doc=7291,freq=2.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.26422277 = fieldWeight in 7291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7291)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    Contains, among others, the following contributions: HILDEBRANDT, B. et al.: Kognitive Modellierung von Sprach- und Bildverstehen; KELLER, F.: How do humans deal with ungrammatical input? Experimental evidence and computational modelling; MARX, J.: Die 'Computer-Talk-These' in der Sprachgenerierung: Hinweise zur Gestaltung natürlichsprachlicher Zustandsanzeigen in multimodalen Informationssystemen; SCHULTZ, T. and H. SOLTAU: Automatische Identifizierung spontan gesprochener Sprachen mit neuronalen Netzen; WAUSCHKUHN, O.: Ein Werkzeug zur partiellen syntaktischen Analyse deutscher Textkorpora; LEZIUS, W., R. RAPP and M. WETTLER: A morphology-system and part-of-speech tagger for German; KONRAD, K. et al.: CLEARS: ein Werkzeug für Ausbildung und Forschung in der Computerlinguistik
  18. Zhang, C.; Zeng, D.; Li, J.; Wang, F.-Y.; Zuo, W.: Sentiment analysis of Chinese documents : from sentence to document level (2009) 0.03
    0.025777921 = product of:
      0.051555842 = sum of:
        0.051555842 = product of:
          0.07733376 = sum of:
            0.030050414 = weight(_text_:j in 3296) [ClassicSimilarity], result of:
              0.030050414 = score(doc=3296,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.21064025 = fieldWeight in 3296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3296)
            0.04728335 = weight(_text_:f in 3296) [ClassicSimilarity], result of:
              0.04728335 = score(doc=3296,freq=2.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.26422277 = fieldWeight in 3296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3296)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
  19. Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia : Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000 Fachhochschule Köln (2000) 0.02
    0.024939185 = product of:
      0.04987837 = sum of:
        0.04987837 = product of:
          0.07481755 = sum of:
            0.03541475 = weight(_text_:j in 5527) [ClassicSimilarity], result of:
              0.03541475 = score(doc=5527,freq=4.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.2482419 = fieldWeight in 5527, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5527)
            0.039402798 = weight(_text_:f in 5527) [ClassicSimilarity], result of:
              0.039402798 = score(doc=5527,freq=2.0), product of:
                0.1789526 = queryWeight, product of:
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.044897694 = queryNorm
                0.22018565 = fieldWeight in 5527, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.985786 = idf(docFreq=2232, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5527)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Content
    Contains the following contributions: WRIGHT, S.E.: Leveraging terminology resources across application boundaries: accessing resources in future integrated environments; PALME, K.: E-Commerce: Verhindert Sprache Business-to-business?; RÜEGGER, R.: Die Qualität der virtuellen Information als Wettbewerbsvorteil: Information im Internet ist Sprache - noch; SCHIRMER, K. and J. HALLER: Zugang zu mehrsprachigen Nachrichten im Internet; WEISS, A. and W. WIEDEN: Die Herstellung mehrsprachiger Informations- und Wissensressourcen in Unternehmen; FULFORD, H.: Monolingual or multilingual web sites? An exploratory study of UK SMEs; SCHMIDTKE-NIKELLA, M.: Effiziente Hypermediaentwicklung: Die Autorenentlastung durch eine Engine; SCHMIDT, R.: Maschinelle Text-Ton-Synchronisation in Wissenschaft und Wirtschaft; HELBIG, H. et al.: Natürlichsprachlicher Zugang zu Informationsanbietern im Internet und zu lokalen Datenbanken; SIENEL, J. et al.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts; ERBACH, G.: Sprachdialogsysteme für Telefondienste: Stand der Technik und zukünftige Entwicklungen; SUSEN, A.: Spracherkennung: Aktuelle Einsatzmöglichkeiten im Bereich der Telekommunikation; BENZMÜLLER, R.: Logox WebSpeech: die neue Technologie für sprechende Internetseiten; JAARANEN, K. et al.: Webtran tools for in-company language support; SCHMITZ, K.-D.: Projektforschung und Infrastrukturen im Bereich der Terminologie: Wie kann die Wirtschaft davon profitieren?; SCHRÖTER, F. and U. MEYER: Entwicklung sprachlicher Handlungskompetenz in Englisch mit Hilfe eines Multimedia-Sprachlernsystems; KLEIN, A.: Der Einsatz von Sprachverarbeitungstools beim Sprachenlernen im Intranet; HAUER, M.: Knowledge Management braucht Terminologie Management; HEYER, G. et al.: Texttechnologische Anwendungen am Beispiel Text Mining
  20. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.02
    0.02218284 = product of:
      0.04436568 = sum of:
        0.04436568 = product of:
          0.06654852 = sum of:
            0.030050414 = weight(_text_:j in 1848) [ClassicSimilarity], result of:
              0.030050414 = score(doc=1848,freq=2.0), product of:
                0.14266226 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.044897694 = queryNorm
                0.21064025 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
            0.036498103 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.036498103 = score(doc=1848,freq=2.0), product of:
                0.15722407 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044897694 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.

Years

Languages

  • e 135
  • d 51
  • f 6
  • m 4
  • slv 1

Types

  • a 158
  • m 21
  • el 18
  • s 15
  • x 4
  • d 2
  • p 2
  • b 1

Classifications