Search (738 results, page 1 of 37)

  • Active filter: theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.07925022 = sum of the per-term tf-idf contributions below (Lucene ClassicSimilarity, doc 562; queryNorm = 0.045719936 and fieldNorm = 0.046875 for all three terms; queryWeight = idf × queryNorm, fieldWeight = √freq × idf × fieldNorm, contribution = coord × queryWeight × fieldWeight):
      term        freq  idf        queryWeight   fieldWeight   coord  contribution
      _text_:3a   2.0   8.478011   0.38761413    0.56201804    1/4    0.054461535
      _text_:s    6.0   1.0872376  0.049708433   0.124836445   1/1    0.006205424
      _text_:22   2.0   3.5018296  0.16010343    0.23214069    1/2    0.01858326
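    The breakdown above is standard Lucene ClassicSimilarity: each matching query term contributes √freq · idf² · queryNorm · fieldNorm, scaled by a coordination factor. A minimal sketch reproducing this hit's score from the factors above (a toy re-computation only; variable and function names are ours, not part of the search engine):

      import math

      QUERY_NORM = 0.045719936   # queryNorm shared by all terms of this query
      FIELD_NORM = 0.046875      # length norm stored for this field of doc 562

      def term_contribution(freq, idf, coord=1.0):
          # queryWeight = idf * queryNorm; fieldWeight = sqrt(tf) * idf * fieldNorm;
          # a term's score is coord * queryWeight * fieldWeight
          query_weight = idf * QUERY_NORM
          field_weight = math.sqrt(freq) * idf * FIELD_NORM
          return coord * query_weight * field_weight

      score = (term_contribution(2.0, 8.478011, coord=1/4)      # _text_:3a
               + term_contribution(6.0, 1.0872376)              # _text_:s
               + term_contribution(2.0, 3.5018296, coord=1/2))  # _text_:22
      print(f"{score:.6f}")  # 0.079250 - matches the reported 0.07925022 up to float rounding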
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
    Pages
    S.331-334
  2. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.06
    Abstract
    The Internet is changing as a medium, and its conditions of publication and reception are changing with it. What opportunities do the two currently and concurrently debated visions of its future, the Social Web and the Semantic Web, offer? To answer this question, the article examines the foundations of both models under the aspects of application and technology, but also sheds light on their shortcomings as well as on the added value of a combination appropriate to the medium. Using the grammatical online information system grammis as an example, it outlines a strategy for the integrative use of their respective strengths.
    Date
    22. 1.2011 10:38:28
    Imprint
    Köln : Herbert von Halem Verlag
    Pages
    S.112-128
  3. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.05
    Abstract
    This thesis develops an approach that overcomes the fixation on the word and the weaknesses bound up with it. The approach permits information to be extracted on the basis of the concepts a text represents and thus forms the foundation of a content-oriented indexing. A subsequent prototype implementation serves to test the design and to gauge and assess its possibilities and limits. Work on information extraction is devoted almost exclusively to English, with very good results achieved in particular for named entities. The results are markedly worse for less regular languages such as German. For this reason, and for practical considerations, above all the author's familiarity with it, German is the primary language under investigation. Loosening the tight orientation on terms while emphasising the concepts they represent suggests that not only the particular words used become secondary but also the particular language. To keep the thesis within bounds, the examination of this point concentrates on the difficulties and peculiarities bound up with different languages.
    Date
    22. 3.2015 9:17:30
    Pages
    XV, 147 S
  4. Warner, A.J.: Natural language processing (1987) 0.04
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  5. New tools for human translators (1997) 0.04
    Date
    31. 7.1996 9:22:19
    Source
    Machine translation. 12(1997) nos.1/2, S.1-194
    Type
    s
  6. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.04
    Date
    19. 7.2002 14:22:31
    Footnote
    Review in: Frankfurter Rundschau Nr.43 vom 20.2.2001, S.23 (A. Barthelmy)
    Issue
    Translated from the English by Martina Wiese.
    Pages
    389 S
  7. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.03
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  8. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.03
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
  9. Somers, H.: Example-based machine translation : Review article (1999) 0.03
    Date
    31. 7.1996 9:22:19
    Source
    Machine translation. 14(1999) no.2, S.113-157
  10. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.03
    Date
    28. 2.1999 10:48:22
    Source
    Computers and the humanities. 31(1997) no.4, S.281-291
  11. ¬Der Student aus dem Computer (2023) 0.03
    Date
    27. 1.2023 16:22:55
    Source
    Pirmasenser Zeitung. 2023 vom 27.01.2023, S.1
  12. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.03
    Date
    15. 3.2000 10:22:37
    Source
    Journal of information science. 25(1999) no.2, S.113-131
  13. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.03
    Date
    1. 3.2013 14:56:22
  14. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.03
    Source
    c't. 2000, H.22, S.230-231
  15. Computerunterstützte Textanalyse (2001) 0.03
    Abstract
    Introduces Capito, a tool for analysing the comprehensibility of texts.
    Source
    iX. 2001, H.6, S.40
  16. Natürlichsprachlicher Entwurf von Informationssystemen (1996) 0.03
    Pages
    296 S
    Type
    s
  17. Schürmann, H.: Software scannt Radio- und Fernsehsendungen : Recherche in Nachrichtenarchiven erleichtert (2001) 0.03
    Content
    To make media monitoring easier for companies and agencies, researchers at the University of Duisburg are currently developing a system for automatic topic detection in radio and television broadcasts. The so-called Alert system is meant to help users filter the speech information relevant to them out of news programmes and process it further. Because the analysis is carried out automatically by computer, several channels can be monitored around the clock. Today, information from TV and radio broadcasts is still gathered the classical way: a person watches, listens, reads and evaluates. That is enormously time-consuming and, for a company that wants, say, to watch its competitors or have its media presence documented, also very expensive. This work could be automated with a speech recogniser, the Duisburg researchers reasoned. Together with partners from Germany, France and Portugal, they are now developing the corresponding technology in a Europe-wide project (http://alert.uni-duisburg.de). Two media monitoring companies are also involved, Observer Argus Media GmbH from Baden-Baden and the French company Secodip. "Our work would already be easier if the information that appears about our customers in the media were pre-selected," says Simone Holderbach, head of product development at Observer, describing her interest in the technology.
    And how does Alert work? The speech recognition system is trained to monitor news broadcasts on radio and television: everything that is said - whether by the newsreader, a reporter or an interviewee - is converted into text by automatic speech recognition. Topics and keywords are detected and stored in the process, then compared with the user's search terms. Matches are displayed and automatically reported to the user. Conventional speech recognition technology cannot be used for media monitoring, stresses Prof. Gerhard Rigoll, head of the Technical Computer Science group at the University of Duisburg, because it was developed for a different purpose. For the conversion of speech into text, the Alert software has been trained thoroughly: around 350 million words from newspaper texts and audio and video material have been processed so far, and the system works in three languages. Still, the automatically produced text is not entirely error-free, Rigoll concedes: at present the recognition rate is 40 to 70 percent, "and that will not change in the foreseeable future." Overlaid music and strong background noise in reports lead to inaccuracies in the conversion. The Duisburg scientists have therefore developed methods that go beyond conventional keyword search and allow a content-oriented assignment. "The user then also receives news items that fit the topic but in which the search term itself never occurs," says Rigoll, summing up the advantage of the technique; a toy sketch of this keyword-plus-topic matching follows this record. If, for example, "Ölpreis" (oil price) is entered as a search term, news items in which oil companies and energy agencies play a role are also shown. Rigoll: "The Alert system reads between the lines, so to speak!" The research project was started a year ago and runs until mid-2002. Anyone who wants to learn about the state of the technology can do so this week at the Hanover industrial trade fair, where the Alert system is presented at the joint stand "Forschungsland NRW" in Hall 18, Stand M12.
    Source
    Handelsblatt. Nr.79 vom 24.4.2001, S.22
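    The matching step described in the record above - recognised broadcast text compared with a user's search terms, plus topically related terms so that on-topic items without the literal keyword still match - can be sketched in a few lines. This is a toy illustration only, with a hand-built lexicon standing in for Alert's actual content-oriented assignment; all names are ours:

      # Hypothetical topic lexicon: terms that should also trigger a query's alert.
      RELATED = {"ölpreis": {"ölkonzern", "energieagentur", "opec"}}

      def matches(transcript: str, query: str) -> bool:
          # Compare the recognised transcript against the query term and
          # its related terms; any overlap counts as a hit.
          words = set(transcript.lower().split())
          terms = {query.lower()} | RELATED.get(query.lower(), set())
          return bool(words & terms)

      # An item about energy agencies triggers the "Ölpreis" alert even though
      # the keyword itself never occurs in the transcript:
      if matches("Die Energieagentur rechnet mit steigenden Preisen", "Ölpreis"):
          print("notify user")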
  18. Ontologie und Axiomatik der Wissensbasis von LILOG (1992) 0.03
    Footnote
    Review in: Computational linguistics 19(1993) no.3, S.539-543 (J. Bateman)
    Pages
    X,253 S
    Type
    s
  19. Rieger, F.: Lügende Computer (2023) 0.03
    Abstract
    We are currently living through a critical transitional era between computers that can be relied on reasonably well and the new "AI" systems, which can drift, hallucinate, lie and confabulate. The complexity of modern software systems is already so high that it would be bold to speak of strict determinism, yet even complex algorithms are designed to produce the same results for the same input data. Exceptions, already today, are algorithms that take random numbers as part of their input parameters, and neural networks.
    Date
    16. 3.2023 19:22:55
  20. Wettler, M.; Rapp, R.; Ferber, R.: Freie Assoziationen und Kontiguitäten von Wörtern in Texten (1993) 0.03
    Source
    Zeitschrift für Psychologie. 201(1993), S.99-108

Types

  • a 593
  • m 90
  • el 45
  • s 36
  • x 15
  • d 3
  • b 2
  • p 2
  • pat 2
  • n 1
  • r 1