Search (70 results, page 1 of 4)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    0.12189087 = sum of:
      0.09705355 = product of:
        0.29116064 = sum of:
          0.29116064 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.29116064 = score(doc=562,freq=2.0), product of:
              0.51806283 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.06110665 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.02483732 = product of:
        0.04967464 = sum of:
          0.04967464 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04967464 = score(doc=562,freq=2.0), product of:
              0.21398507 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06110665 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
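    The indented score breakdown above is Lucene's ClassicSimilarity "explain" output. A minimal sketch of how one clause's contribution is assembled from the listed factors, assuming the standard ClassicSimilarity definitions idf = 1 + ln(maxDocs/(docFreq+1)) and tf = sqrt(freq), which match the numbers shown:

    ```python
    import math

    def classic_clause_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
        """Recompute one clause of a ClassicSimilarity explain tree."""
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
        tf = math.sqrt(freq)                             # tf(freq)
        query_weight = idf * query_norm                  # queryWeight
        field_weight = tf * idf * field_norm             # fieldWeight in doc
        return query_weight * field_weight * coord

    # First clause of result 1: freq=2, docFreq=24, maxDocs=44218,
    # queryNorm=0.06110665, fieldNorm=0.046875, coord(1/3)
    score = classic_clause_score(2.0, 24, 44218, 0.06110665, 0.046875, 1.0 / 3.0)
    ```

    Run on the first clause's inputs this agrees with the 0.09705355 shown above to about six decimal places; the second clause (docFreq=3622, coord(1/2)) likewise reproduces 0.02483732.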
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.05
    0.048526775 = product of:
      0.09705355 = sum of:
        0.09705355 = product of:
          0.29116064 = sum of:
            0.29116064 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.29116064 = score(doc=862,freq=2.0), product of:
                0.51806283 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.06110665 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.04
    0.041674867 = product of:
      0.083349735 = sum of:
        0.083349735 = sum of:
          0.05023331 = weight(_text_:wissen in 4217) [ClassicSimilarity], result of:
            0.05023331 = score(doc=4217,freq=2.0), product of:
              0.26354674 = queryWeight, product of:
                4.3128977 = idf(docFreq=1609, maxDocs=44218)
                0.06110665 = queryNorm
              0.19060494 = fieldWeight in 4217, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3128977 = idf(docFreq=1609, maxDocs=44218)
                0.03125 = fieldNorm(doc=4217)
          0.03311643 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
            0.03311643 = score(doc=4217,freq=2.0), product of:
              0.21398507 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06110665 = queryNorm
              0.15476047 = fieldWeight in 4217, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=4217)
      0.5 = coord(1/2)
    
    Abstract
    Now things seem to be getting serious. An AI program developed by the Chinese Alibaba Group has, for the first time, beaten humans at answering questions and understanding text. The Chinese government wants to make the country the leader in artificial intelligence and has drawn up a national strategy to that end. As part of it, the Ministry of Science and Technology named the Internet corporations Baidu, Alibaba and Tencent, together with iFlyTek, as the first national team for developing the next generation of AI technology. Baidu is responsible for developing autonomous vehicles, Alibaba for developing clouds for "city brains" (smart cities that adapt to their inhabitants and environment), Tencent for developing computer vision for medical applications, and iFlyTek for "voice intelligence". The four corporations are to build open platforms that other companies and start-ups can use as well. In addition, a technology park for AI development is being built near Beijing at a cost of one billion US dollars. This is, of course, not only about civilian applications but also about military ones. The USA still has more AI companies, but China already ranks second, and the Pentagon is worried: China is evidently advancing rapidly. At the end of 2017, the AI company iFlyTek, which initially specialized in speech recognition and digital assistants, presented a robot that had passed the written test of the national medical examination. The robot had not only been fed immense knowledge from 53 medical textbooks, 2 million medical records, and 400,000 medical texts and reports; it is also said to have acquired clinical experience and case diagnoses from medical experts. It is to be deployed as an assistant that produces a first diagnosis by automatically evaluating patient data and otherwise supports doctors with suggestions; China suffers from a shortage of doctors, above all in rural areas.
    Date
    22. 1.2018 11:32:44
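    In multi-clause explain trees like the one above, Lucene sums the matching clause weights and scales the sum by the coordination factor coord(matched/total) — here coord(1/2) = 0.5, since one of two top-level query clauses matched. A minimal sketch of that combination step, using the two term weights from result 3:

    ```python
    def combine_clauses(clause_scores, matched, total):
        """Sum clause weights and apply Lucene's coord(matched/total) factor."""
        return sum(clause_scores) * (matched / total)

    # Result 3: weights for "wissen" (0.05023331) and "22" (0.03311643), coord(1/2)
    total_score = combine_clauses([0.05023331, 0.03311643], 1, 2)
    ```

    This reproduces the 0.041674867 total shown for result 3.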
  4. Zimmermann, H.H.: Wortrelationierung in der Sprachtechnik : Stilhilfen, Retrievalhilfen, Übersetzungshilfen (1992) 0.04
    0.037674982 = product of:
      0.075349964 = sum of:
        0.075349964 = product of:
          0.15069993 = sum of:
            0.15069993 = weight(_text_:wissen in 1372) [ClassicSimilarity], result of:
              0.15069993 = score(doc=1372,freq=2.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.57181484 = fieldWeight in 1372, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1372)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Kognitive Ansätze zum Ordnen und Darstellen von Wissen. 2. Tagung der Deutschen ISKO Sektion einschl. der Vorträge des Workshops "Thesauri als Werkzeuge der Sprachtechnologie", Weilburg, 15.-18.10.1991
  5. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.04
    0.036739893 = sum of:
      0.022251455 = product of:
        0.06675436 = sum of:
          0.06675436 = weight(_text_:objects in 1616) [ClassicSimilarity], result of:
            0.06675436 = score(doc=1616,freq=2.0), product of:
              0.3247862 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.06110665 = queryNorm
              0.20553327 = fieldWeight in 1616, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
        0.33333334 = coord(1/3)
      0.014488437 = product of:
        0.028976874 = sum of:
          0.028976874 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
            0.028976874 = score(doc=1616,freq=2.0), product of:
              0.21398507 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06110665 = queryNorm
              0.1354154 = fieldWeight in 1616, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=1616)
        0.5 = coord(1/2)
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China increased from 8.9 million to 16.9 million between January and June 2000 ("Report: China Internet users double to 17 million," CNN.com, July 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002, and China became the second-largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has in the past focused on structural and semantic interoperability. Searching and retrieving objects across variations in protocols, formats and disciplines are widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49).
    However, research in crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabulary. For the problem of searching across language boundaries, a cross-lingual thesaurus, generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language other than that of the input term. The direct translation of the input term can also be retrieved in most cases.
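    The pipeline described in this abstract starts from co-occurrence analysis over aligned sentence pairs; the Hopfield-network refinement is omitted here. A toy sketch of that first stage, in which the sentence pairs, the terms, and the `top_translations` helper are all invented for illustration and are not from the paper:

    ```python
    from collections import Counter
    from itertools import product

    # Hypothetical aligned English/Chinese sentence pairs (tokenized).
    pairs = [
        (["court", "justice"], ["法院", "司法"]),
        (["court", "law"], ["法院", "法律"]),
        (["law", "justice"], ["法律", "司法"]),
    ]

    # Count how often each English term co-occurs with each Chinese term
    # within the same aligned sentence pair.
    cooc = Counter()
    for en_tokens, zh_tokens in pairs:
        for e, z in product(en_tokens, zh_tokens):
            cooc[(e, z)] += 1

    def top_translations(term, k=1):
        """Rank Chinese terms by co-occurrence frequency with an English term."""
        cands = [(z, c) for (e, z), c in cooc.items() if e == term]
        return [z for z, c in sorted(cands, key=lambda x: -x[1])[:k]]
    ```

    On this toy corpus, "court" co-occurs most often with "法院"; a real system would weight the counts and feed them into the Hopfield network for refinement.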
  6. Warner, A.J.: Natural language processing (1987) 0.03
    0.03311643 = product of:
      0.06623286 = sum of:
        0.06623286 = product of:
          0.13246572 = sum of:
            0.13246572 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
              0.13246572 = score(doc=337,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.61904186 = fieldWeight in 337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=337)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  7. Sokirko, A.V.: Programnaya realizatsiya Russkogo abshchesemanticheskogo slovarya (1997) 0.03
    0.031787798 = product of:
      0.063575596 = sum of:
        0.063575596 = product of:
          0.19072677 = sum of:
            0.19072677 = weight(_text_:objects in 2258) [ClassicSimilarity], result of:
              0.19072677 = score(doc=2258,freq=2.0), product of:
                0.3247862 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.06110665 = queryNorm
                0.58723795 = fieldWeight in 2258, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2258)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Discusses the Delphi 2 for Windows software, which was used to develop the Russian semantic dictionary ROSS. Although not a relational database as such, Delphi actively uses standard objects of relational databases.
  8. Luckhardt, H.-D.: Klassifikationen und Thesauri für automatische Terminologie-Unterstützung, maschinelle Übersetzung und computergestützte Übersetzung (1992) 0.03
    0.03139582 = product of:
      0.06279164 = sum of:
        0.06279164 = product of:
          0.12558328 = sum of:
            0.12558328 = weight(_text_:wissen in 1371) [ClassicSimilarity], result of:
              0.12558328 = score(doc=1371,freq=2.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.47651234 = fieldWeight in 1371, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1371)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Kognitive Ansätze zum Ordnen und Darstellen von Wissen. 2. Tagung der Deutschen ISKO Sektion einschl. der Vorträge des Workshops "Thesauri als Werkzeuge der Sprachtechnologie", Weilburg, 15.-18.10.1991
  9. Wolfangel, E.: Ich verstehe (2017) 0.03
    0.03139582 = product of:
      0.06279164 = sum of:
        0.06279164 = product of:
          0.12558328 = sum of:
            0.12558328 = weight(_text_:wissen in 3976) [ClassicSimilarity], result of:
              0.12558328 = score(doc=3976,freq=2.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.47651234 = fieldWeight in 3976, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3976)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    Wissen: Technik, Forschung, Umwelt, Mensch
  10. Barthel, J.; Ciesielski, R.: Regeln zu ChatGPT an Unis oft unklar : KI in der Bildung (2023) 0.03
    0.03139582 = product of:
      0.06279164 = sum of:
        0.06279164 = product of:
          0.12558328 = sum of:
            0.12558328 = weight(_text_:wissen in 925) [ClassicSimilarity], result of:
              0.12558328 = score(doc=925,freq=2.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.47651234 = fieldWeight in 925, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.078125 = fieldNorm(doc=925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    https://www.tagesschau.de/wissen/technologie/ki-chatgpt-uni-wissenschaft-101.html
  11. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.03
    0.028976874 = product of:
      0.05795375 = sum of:
        0.05795375 = product of:
          0.1159075 = sum of:
            0.1159075 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
              0.1159075 = score(doc=3164,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.5416616 = fieldWeight in 3164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  12. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.03
    0.028976874 = product of:
      0.05795375 = sum of:
        0.05795375 = product of:
          0.1159075 = sum of:
            0.1159075 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
              0.1159075 = score(doc=4506,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.5416616 = fieldWeight in 4506, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4506)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
  13. Somers, H.: Example-based machine translation : Review article (1999) 0.03
    0.028976874 = product of:
      0.05795375 = sum of:
        0.05795375 = product of:
          0.1159075 = sum of:
            0.1159075 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.1159075 = score(doc=6672,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  14. New tools for human translators (1997) 0.03
    0.028976874 = product of:
      0.05795375 = sum of:
        0.05795375 = product of:
          0.1159075 = sum of:
            0.1159075 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
              0.1159075 = score(doc=1179,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.5416616 = fieldWeight in 1179, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  15. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.03
    0.028976874 = product of:
      0.05795375 = sum of:
        0.05795375 = product of:
          0.1159075 = sum of:
            0.1159075 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.1159075 = score(doc=3117,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  16. ¬Der Student aus dem Computer (2023) 0.03
    0.028976874 = product of:
      0.05795375 = sum of:
        0.05795375 = product of:
          0.1159075 = sum of:
            0.1159075 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.1159075 = score(doc=1079,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  17. Heyer, G.; Quasthoff, U.; Wittig, T.: Text Mining : Wissensrohstoff Text. Konzepte, Algorithmen, Ergebnisse (2006) 0.03
    0.025116654 = product of:
      0.05023331 = sum of:
        0.05023331 = product of:
          0.10046662 = sum of:
            0.10046662 = weight(_text_:wissen in 5218) [ClassicSimilarity], result of:
              0.10046662 = score(doc=5218,freq=8.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.38120988 = fieldWeight in 5218, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5218)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A large part of the world's knowledge exists in the form of digital texts on the Internet or in intranets. Today's search engines exploit this raw material only rudimentarily: they can recognize semantic relationships only to a limited extent. Everyone is waiting for the Semantic Web, in which the creators of text add the semantics themselves, but that will still take a long time. There is, however, a technology that already makes it possible to analyze semantic relationships in raw text and prepare them for use. The research field of text mining uses statistical and pattern-based methods to extract, process, and exploit knowledge from texts, laying the foundation for the search engines of the future. This is the first German textbook on this groundbreaking technology. What comes to mind for the word "Stich"? Some think of tennis, others of the card game skat. Text mining can determine such different contexts automatically and display them as word nets. Which terms most often appear to the left and right of the word "Festplatte" (hard disk)? Which word forms and proper names have newly entered the German language since 2001?
    Text mining answers these and many other questions. This textbook invites the reader into a new, fascinating scientific discipline and reveals previously unknown relationships and perspectives. See how the raw material of text becomes knowledge! The book addresses students as well as practitioners with a background in computer science, business informatics, and/or linguistics who want to learn about the foundations, methods, and applications of text mining and are looking for ideas for implementing their own applications. It is based on work carried out in recent years in the Natural Language Processing group at the Institute of Computer Science of the University of Leipzig under the direction of Prof. Dr. Heyer. A wealth of practical examples of text mining concepts and algorithms gives the reader a comprehensive yet detailed understanding of the foundations and applications of text mining. Topics covered: knowledge and text; foundations of semantic analysis; text databases; language statistics; clustering; pattern analysis; hybrid methods; example applications; appendices on statistics and linguistic foundations. 360 pages, 54 figures, 58 tables, and 95 glossary entries, with the free e-learning course "Schnelleinstieg: Sprachstatistik". In addition to the book, an online certificate course with mentor and tutor support will shortly be available.
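    One of the blurb's example questions (which terms most often appear to the left and right of the word "Festplatte"?) reduces to simple neighbor-frequency counting. A minimal sketch with invented toy data, not taken from the book:

    ```python
    from collections import Counter

    def neighbor_counts(tokens, target):
        """Count the words immediately left and right of each occurrence of target."""
        left, right = Counter(), Counter()
        for i, tok in enumerate(tokens):
            if tok == target:
                if i > 0:
                    left[tokens[i - 1]] += 1
                if i + 1 < len(tokens):
                    right[tokens[i + 1]] += 1
        return left, right

    tokens = "die Festplatte ist voll , eine neue Festplatte ist bestellt".split()
    left, right = neighbor_counts(tokens, "Festplatte")
    ```

    On this toy input, "ist" is the most frequent right neighbor; aggregated over a large corpus, such counts are the raw material for the word nets the book describes.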
  18. Becks, D.; Schulz, J.M.: Domänenübergreifende Phrasenextraktion mithilfe einer lexikonunabhängigen Analysekomponente (2010) 0.03
    0.025116654 = product of:
      0.05023331 = sum of:
        0.05023331 = product of:
          0.10046662 = sum of:
            0.10046662 = weight(_text_:wissen in 4661) [ClassicSimilarity], result of:
              0.10046662 = score(doc=4661,freq=2.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.38120988 = fieldWeight in 4661, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4661)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information und Wissen: global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011. Hrsg.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
  19. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.02483732 = product of:
      0.04967464 = sum of:
        0.04967464 = product of:
          0.09934928 = sum of:
            0.09934928 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.09934928 = score(doc=4483,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  20. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.02483732 = product of:
      0.04967464 = sum of:
        0.04967464 = product of:
          0.09934928 = sum of:
            0.09934928 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.09934928 = score(doc=4888,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22

Languages

  • e 40
  • d 28
  • m 1
  • ru 1

Types

  • a 50
  • m 10
  • el 9
  • s 5
  • x 3
  • p 2
  • d 1
