Search (54 results, page 1 of 3)

  • Filter: theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.09759211 = sum of:
      0.07770607 = product of:
        0.2331182 = sum of:
          0.2331182 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2331182 = score(doc=562,freq=2.0), product of:
              0.41478777 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.048925128 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.019886037 = product of:
        0.039772075 = sum of:
          0.039772075 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.039772075 = score(doc=562,freq=2.0), product of:
              0.17132746 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.048925128 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
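  The indented breakdown under each hit is Lucene explain() output for ClassicSimilarity (tf-idf) ranking. As a rough illustration, the Python sketch below recomputes the 0.10 score of result 1 above from the leaf values it reports; the queryNorm constant is simply copied from that output, and tf and idf follow the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = log(numDocs/(docFreq+1)) + 1) rather than anything specific to this database.

    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity idf: log(numDocs / (docFreq + 1)) + 1
        return math.log(max_docs / (doc_freq + 1)) + 1

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
        tf = math.sqrt(freq)
        w = idf(doc_freq, max_docs)
        return (w * query_norm) * (tf * w * field_norm)

    QUERY_NORM = 0.048925128  # copied from the explain output; depends on the full query

    s_3a = term_score(2.0, 24, 44218, QUERY_NORM, 0.046875)    # ~0.2331182
    s_22 = term_score(2.0, 3622, 44218, QUERY_NORM, 0.046875)  # ~0.039772075

    # apply the reported coord factors 1/3 and 1/2, then sum the two clauses
    total = s_3a * (1 / 3) + s_22 * (1 / 2)
    print(round(total, 8))  # ~0.09759211, shown as 0.10 for result 1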
  2. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.10
    0.097291134 = product of:
      0.19458227 = sum of:
        0.19458227 = product of:
          0.38916454 = sum of:
            0.38916454 = weight(_text_:autonomy in 6386) [ClassicSimilarity], result of:
              0.38916454 = score(doc=6386,freq=8.0), product of:
                0.37895662 = queryWeight, product of:
                  7.7456436 = idf(docFreq=51, maxDocs=44218)
                  0.048925128 = queryNorm
                1.0269369 = fieldWeight in 6386, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  7.7456436 = idf(docFreq=51, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6386)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new indexing approaches against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). The study compared natural-language retrieval with Boolean retrieval. The two systems were Autonomy, by Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually compiled training set. Methodologically, the evaluation starts from the real-world working context of G+J's text documentation department. The tests were assessed from both statistical and qualitative points of view. One result is that DocCat shows some deficiencies compared with intellectual subject indexing that still need to be remedied, while Autonomy's natural-language retrieval, in this setting and for the specific requirements of G+J's text documentation, cannot be used in its present form.
    Object
    Autonomy
  3. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.08
    0.081075944 = product of:
      0.16215189 = sum of:
        0.16215189 = product of:
          0.32430378 = sum of:
            0.32430378 = weight(_text_:autonomy in 5863) [ClassicSimilarity], result of:
              0.32430378 = score(doc=5863,freq=8.0), product of:
                0.37895662 = queryWeight, product of:
                  7.7456436 = idf(docFreq=51, maxDocs=44218)
                  0.048925128 = queryNorm
                0.8557808 = fieldWeight in 5863, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  7.7456436 = idf(docFreq=51, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5863)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new indexing approaches against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). The study compared natural-language retrieval with Boolean retrieval. The two systems were Autonomy, by Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually compiled training set. Methodologically, the evaluation starts from the real-world working context of G+J's text documentation department. The tests were assessed from both statistical and qualitative points of view. One result is that DocCat shows some deficiencies compared with intellectual subject indexing that still need to be remedied, while Autonomy's natural-language retrieval, in this setting and for the specific requirements of G+J's text documentation, cannot be used in its present form.
    Object
    Autonomy
  4. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.038853034 = product of:
      0.07770607 = sum of:
        0.07770607 = product of:
          0.2331182 = sum of:
            0.2331182 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.2331182 = score(doc=862,freq=2.0), product of:
                0.41478777 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.048925128 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  5. Warner, A.J.: Natural language processing (1987) 0.03
    0.026514716 = product of:
      0.053029433 = sum of:
        0.053029433 = product of:
          0.106058866 = sum of:
            0.106058866 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.106058866 = score(doc=337,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  6. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.02
    0.023200378 = product of:
      0.046400756 = sum of:
        0.046400756 = product of:
          0.09280151 = sum of:
            0.09280151 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.09280151 = score(doc=3164,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  7. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    0.023200378 = product of:
      0.046400756 = sum of:
        0.046400756 = product of:
          0.09280151 = sum of:
            0.09280151 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.09280151 = score(doc=4506,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  8. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    0.023200378 = product of:
      0.046400756 = sum of:
        0.046400756 = product of:
          0.09280151 = sum of:
            0.09280151 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.09280151 = score(doc=6672,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  9. New tools for human translators (1997) 0.02
    0.023200378 = product of:
      0.046400756 = sum of:
        0.046400756 = product of:
          0.09280151 = sum of:
            0.09280151 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.09280151 = score(doc=1179,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  10. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.02
    0.023200378 = product of:
      0.046400756 = sum of:
        0.046400756 = product of:
          0.09280151 = sum of:
            0.09280151 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.09280151 = score(doc=3117,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  11. ¬Der Student aus dem Computer (2023) 0.02
    0.023200378 = product of:
      0.046400756 = sum of:
        0.046400756 = product of:
          0.09280151 = sum of:
            0.09280151 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.09280151 = score(doc=1079,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  12. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.019886037 = product of:
      0.039772075 = sum of:
        0.039772075 = product of:
          0.07954415 = sum of:
            0.07954415 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.07954415 = score(doc=4483,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  13. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.019886037 = product of:
      0.039772075 = sum of:
        0.039772075 = product of:
          0.07954415 = sum of:
            0.07954415 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.07954415 = score(doc=4888,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  14. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.019886037 = product of:
      0.039772075 = sum of:
        0.039772075 = product of:
          0.07954415 = sum of:
            0.07954415 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.07954415 = score(doc=5429,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  15. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    0.016571699 = product of:
      0.033143397 = sum of:
        0.033143397 = product of:
          0.066286795 = sum of:
            0.066286795 = weight(_text_:22 in 1463) [ClassicSimilarity], result of:
              0.066286795 = score(doc=1463,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.38690117 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1463)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  16. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    0.016571699 = product of:
      0.033143397 = sum of:
        0.033143397 = product of:
          0.066286795 = sum of:
            0.066286795 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.066286795 = score(doc=5428,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  17. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.02
    0.016571699 = product of:
      0.033143397 = sum of:
        0.033143397 = product of:
          0.066286795 = sum of:
            0.066286795 = weight(_text_:22 in 1693) [ClassicSimilarity], result of:
              0.066286795 = score(doc=1693,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.38690117 = fieldWeight in 1693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1693)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:37:18
  18. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.01
    0.013257358 = product of:
      0.026514716 = sum of:
        0.026514716 = product of:
          0.053029433 = sum of:
            0.053029433 = weight(_text_:22 in 8521) [ClassicSimilarity], result of:
              0.053029433 = score(doc=8521,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.30952093 = fieldWeight in 8521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=8521)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  19. Riloff, E.: ¬An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.01
    0.013257358 = product of:
      0.026514716 = sum of:
        0.026514716 = product of:
          0.053029433 = sum of:
            0.053029433 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.053029433 = score(doc=6752,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    6. 3.1997 16:22:15
  20. Basili, R.; Pazienza, M.T.; Velardi, P.: ¬An empirical symbolic approach to natural language processing (1996) 0.01
    0.013257358 = product of:
      0.026514716 = sum of:
        0.026514716 = product of:
          0.053029433 = sum of:
            0.053029433 = weight(_text_:22 in 6753) [ClassicSimilarity], result of:
              0.053029433 = score(doc=6753,freq=2.0), product of:
                0.17132746 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.048925128 = queryNorm
                0.30952093 = fieldWeight in 6753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6753)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    6. 3.1997 16:22:15

Languages

  • e 36
  • d 18

Types

  • a 42
  • el 5
  • m 5
  • s 3
  • p 2
  • x 2
  • d 1