Search (437 results, page 1 of 22)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.22
     (Score breakdown: Lucene ClassicSimilarity. Each matching term contributes tf · idf² · queryNorm · fieldNorm; the contributions are summed and scaled by a coordination factor, here coord(7/17). The full per-term traces, repeated for every result below, are omitted.)
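     The numbers in these traces recombine exactly. As a minimal sketch, using only values printed in the original breakdown for doc 562 and the term "3a" (variable names are our own), the per-term weight can be reproduced in Python:

        import math

        # Values from the ClassicSimilarity trace above (doc 562, term "3a").
        freq = 2.0
        tf = math.sqrt(freq)                    # 1.4142135 = tf(freq=2.0)
        idf = 1 + math.log(44218 / (24 + 1))    # 8.478011 = idf(docFreq=24, maxDocs=44218)
        query_norm = 0.024967048
        field_norm = 0.046875                   # fieldNorm(doc=562)

        query_weight = idf * query_norm         # 0.21167092
        field_weight = tf * idf * field_norm    # 0.56201804
        print(query_weight * field_weight)      # 0.11896288, the per-term weight

     The top-level score then sums such weights over the matching terms and multiplies by the coordination factor, e.g. coord(7/17) = 7/17 ≈ 0.4117647 in the trace above.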
    Abstract
     Document representations for text classification are typically based on the classical bag-of-words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for the actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
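     The abstract summarizes the approach only; as a hedged sketch of the general idea (bag-of-words features enriched with concept pseudo-tokens, classified by boosting decision stumps), assuming scikit-learn >= 1.2 and a toy concept table, and not the authors' code:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.tree import DecisionTreeClassifier

        # Toy concept table standing in for real background knowledge (e.g. an ontology).
        CONCEPTS = {"boat": "vehicle", "car": "vehicle", "dog": "animal"}

        def enrich(text):
            # Append one pseudo-token per matched concept, so terms and
            # concepts share a single bag-of-words feature space.
            tokens = text.lower().split()
            return " ".join(tokens + ["concept_" + CONCEPTS[t] for t in tokens if t in CONCEPTS])

        docs = ["the dog barks", "a car and a boat", "the boat sails fast"]
        labels = [0, 1, 1]

        X = CountVectorizer().fit_transform(enrich(d) for d in docs)
        # Boosting of weak learners: AdaBoost over decision stumps.
        clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                                 n_estimators=50).fit(X, labels)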
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.18
    Abstract
     This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
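     For the detector step, a rough equivalent (assuming the Hugging Face transformers library and the publicly mirrored 2019 RoBERTa-based GPT-2 output detector; the checkpoint name and the output shown are assumptions, not taken from the paper) would be:

        from transformers import pipeline

        # RoBERTa-based GPT-2 output detector (OpenAI, 2019), as mirrored on
        # the Hugging Face hub; the checkpoint name may move.
        detector = pipeline("text-classification",
                            model="openai-community/roberta-base-openai-detector")

        result = detector("The quick brown fox jumps over the lazy dog.")
        print(result)  # e.g. [{'label': 'Real', 'score': 0.97}] (illustrative)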
    Source
     https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.17
    Abstract
     In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with the LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language- and domain-independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human-written summaries in a large collection of web pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high-quality multi-word terms from human-written summaries to generate suitable results for web-page summarization.
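     As a greatly simplified sketch of the LocalMaxs idea (an n-gram is kept as a term candidate when its "glue" is not beaten by any contained or containing n-gram; the SCP-style glue and the toy corpus below are illustrative, not the thesis's measures):

        from collections import Counter

        def ngrams(tokens, n):
            return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

        tokens = "multi word term extraction finds multi word term candidates".split()
        counts = Counter()
        for n in range(1, 4):                      # count 1- to 3-grams
            counts.update(ngrams(tokens, n))
        total = len(tokens)

        def glue(g):
            # SCP-style glue: p(g)^2 over the mean probability of its binary splits.
            p = counts[g] / total
            splits = [(counts[g[:i]] / total) * (counts[g[i:]] / total)
                      for i in range(1, len(g))]
            return p * p / (sum(splits) / len(splits))

        def is_local_max(g):
            subs = [g[:-1], g[1:]] if len(g) > 2 else []
            supers = [s for s in counts
                      if len(s) == len(g) + 1 and (s[:-1] == g or s[1:] == g)]
            return (all(glue(g) >= glue(x) for x in subs) and
                    all(glue(g) > glue(s) for s in supers))

        candidates = [g for g in counts
                      if 2 <= len(g) <= 3 and counts[g] > 1 and is_local_max(g)]
        print(candidates)   # [('multi', 'word', 'term')]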
    Content
     A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.01
    Abstract
     This paper gives an outline of the final results of the TransRouter project. Within the scope of this project, a decision support system for translation managers has been developed which supports the selection of appropriate routes for translation projects. In this paper emphasis is put on the decision model, which is based on a stepwise refined assessment of translation routes. The workflow of using this system is considered as well.
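     The abstract does not spell the decision model out; purely as a loose illustration of a weighted assessment over candidate routes (all route names, criteria, and weights below are hypothetical, not taken from TransRouter):

        # Hypothetical route assessments: criterion -> score in [0, 1].
        routes = {
            "human-only":       {"quality": 0.95, "cost": 0.30, "speed": 0.40},
            "mt-plus-postedit": {"quality": 0.75, "cost": 0.70, "speed": 0.80},
            "mt-only":          {"quality": 0.40, "cost": 0.95, "speed": 0.95},
        }
        weights = {"quality": 0.5, "cost": 0.2, "speed": 0.3}   # project-specific

        def assess(route):
            # One assessment step: a weighted sum over the criteria.
            return sum(weights[c] * s for c, s in routes[route].items())

        best = max(routes, key=assess)
        print(best, round(assess(best), 3))   # mt-plus-postedit 0.755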
    Date
    10.12.2000 18:22:35
    Series
    Schriften zur Informationswissenschaft; Bd.38
    Source
     Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000). Eds.: G. Knorz and R. Kuhlen
  5. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.01
    Series
    Fortschritte in der Wissensorganisation; Bd.13
    Source
     Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Ed. by W. Babik, H.P. Ohly and K. Weber
  6. Heyer, G.; Quasthoff, U.; Wittig, T.: Text Mining : Wissensrohstoff Text. Konzepte, Algorithmen, Ergebnisse (2006) 0.01
    Abstract
     A large part of the world's knowledge exists in the form of digital texts on the Internet or in intranets. Today's search engines exploit this raw material only rudimentarily: they can recognize semantic relationships only to a limited extent. Everyone is waiting for the Semantic Web, in which the authors of texts will add the semantics themselves, but that will take a long time yet. There is, however, a technology that already makes it possible to analyze and prepare semantic relationships in raw text. The research field of text mining uses statistical and pattern-based methods to extract, process, and exploit knowledge from texts, laying the groundwork for the search engines of the future. This is the first German textbook on this technology. What comes to mind for the word "Stich"? Some think of tennis, others of the card game Skat. Text mining can determine such different contexts automatically and display them as word nets. Which terms most often stand immediately to the left and right of the word "Festplatte"? Which word forms and proper names have newly entered the German language since 2001? Text mining answers these and many other questions. The textbook addresses both students and practitioners with a background in computer science, business informatics, and/or linguistics who want to learn about the foundations, methods, and applications of text mining and are looking for ideas for implementing their own applications. It is based on work carried out in recent years in the Natural Language Processing group at the Institute of Computer Science of the University of Leipzig under the direction of Prof. Dr. Heyer. A wealth of practical examples of text-mining concepts and algorithms gives the reader a comprehensive yet detailed understanding of the foundations and applications of text mining. Topics covered: knowledge and text; foundations of meaning analysis; text databases; language statistics; clustering; pattern analysis; hybrid methods; example applications; appendices on statistics and linguistic foundations. 360 pages, 54 figures, 58 tables, and 95 glossary entries, with a free e-learning course, "Schnelleinstieg: Sprachstatistik"; an online certificate course with mentor and tutor support is announced to follow.
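     The neighbour question above (which words appear most often immediately to the left and right of "Festplatte"?) is the kind of co-occurrence statistic the book builds word nets from. A toy sketch, with an illustrative two-clause corpus of our own:

        from collections import Counter

        def neighbours(tokens, target, top=5):
            # Count the immediate left and right neighbours of a target word.
            left, right = Counter(), Counter()
            for i, tok in enumerate(tokens):
                if tok == target:
                    if i > 0:
                        left[tokens[i - 1]] += 1
                    if i + 1 < len(tokens):
                        right[tokens[i + 1]] += 1
            return left.most_common(top), right.most_common(top)

        tokens = "die neue Festplatte ist voll weil die alte Festplatte klein ist".split()
        print(neighbours(tokens, "Festplatte"))
        # ([('neue', 1), ('alte', 1)], [('ist', 1), ('klein', 1)])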
  7. Vichot, F.; Wolinski, F.; Tomeh, J.; Guennou, S.; Dillet, B.; Aydjian, S.: High precision hypertext navigation based on NLP automatic extractions (1997) 0.01
    Series
    Schriften zur Informationswissenschaft; Bd.30
    Source
     Hypertext - Information Retrieval - Multimedia '97: Theorien, Modelle und Implementierungen integrierter elektronischer Informationssysteme. Proceedings HIM '97. Eds.: N. Fuhr et al.
  8. Kocijan, K.: Visualizing natural language resources (2015) 0.01
    Series
    Schriften zur Informationswissenschaft; Bd.66
    Source
     Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl and C. Wolff
  9. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000) 0.01
    Abstract
     (Automatic) document classification is generally defined as the content-based assignment of one or more predefined categories to documents. Usually, machine learning, statistical pattern recognition, or neural network approaches are used to construct classifiers automatically. In this paper we thoroughly evaluate a wide variety of these methods on a document classification task for German text. We evaluate different feature construction and selection methods and various classifiers. Our main results are: (1) feature selection is necessary not only to reduce learning and classification time, but also to avoid overfitting (even for Support Vector Machines); (2) surprisingly, our morphological analysis does not improve classification quality compared to a letter 5-gram approach; (3) Support Vector Machines are significantly better than all other classification methods.
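     Result (2), that letter 5-grams rival morphological analysis, maps onto a very small illustrative pipeline (a sketch assuming scikit-learn, with toy data; not the paper's corpus or code):

        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC

        # Letter 5-grams as features, a linear SVM as classifier -- the
        # combination the evaluation found hardest to beat.
        clf = make_pipeline(
            TfidfVectorizer(analyzer="char", ngram_range=(5, 5)),
            LinearSVC(),
        )
        docs = ["Der Hund bellt laut", "Aktienkurse fallen weiter", "Die Katze schläft tief"]
        labels = ["tiere", "wirtschaft", "tiere"]
        clf.fit(docs, labels)
        print(clf.predict(["Der Hund schläft"]))   # expected: ['tiere']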
    Series
    Schriften zur Informationswissenschaft; Bd.38
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  10. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.00
    Date
    15. 3.2000 10:22:37
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  11. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.00
    Date
    1. 3.2013 14:56:22
  12. Ontologie und Axiomatik der Wissensbasis von LILOG (1992) 0.00
    Footnote
     Review in: Computational linguistics 19(1993) no.3, pp. 539-543 (J. Bateman)
  13. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.00
    Abstract
     Chronicles the early history of applying electronic computers to the task of translating natural languages, from the first suggestions by Warren Weaver in March 1947 to the first demonstration of a working, if limited, program in January 1954.
    Date
    31. 7.1996 9:22:19
  14. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.00
    Abstract
     Presents the state of the art in lexical choice research in text generation and machine translation. Discusses the existing implementations with respect to: the place of lexical choice in the overall generation process; the information flow within the generation process and the consequences thereof for lexical choice; the internal organization of the lexical choice process; and the phenomena covered by lexical choice. Identifies possible future directions in lexical choice research.
    Date
    31. 7.1996 9:22:19
  15. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.00
    Date
    8. 3.2007 19:55:22
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  16. Basili, R.; Pazienza, M.T.; Velardi, P.: An empirical symbolic approach to natural language processing (1996) 0.00
    Abstract
     Describes and evaluates the results of a large-scale lexical learning system, ARISTO-LEX, that uses a combination of probabilistic and knowledge-based methods for the acquisition of selectional restrictions of words in sublanguages. Presents experimental data obtained from different corpora in different domains and languages, and shows that the acquired lexical data not only have practical applications in natural language processing, but are also useful for a comparative analysis of sublanguages.
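     A toy illustration of the probabilistic half of such acquisition (the (verb, role, noun-class) observations below are hypothetical stand-ins for ARISTO-LEX's corpus-derived data):

        from collections import Counter, defaultdict

        # Hypothetical observations of which noun classes fill which verb slots.
        triples = [("drink", "obj", "beverage"), ("drink", "obj", "beverage"),
                   ("drink", "obj", "chemical"), ("read", "obj", "document")]

        by_slot = defaultdict(Counter)
        for verb, role, cls in triples:
            by_slot[(verb, role)][cls] += 1

        def selectional_preference(verb, role):
            # Relative frequency of each noun class in the given slot.
            counts = by_slot[(verb, role)]
            total = sum(counts.values())
            return {cls: n / total for cls, n in counts.items()}

        print(selectional_preference("drink", "obj"))
        # {'beverage': 0.666..., 'chemical': 0.333...}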
    Date
    6. 3.1997 16:22:15
  17. Morris, V.: Automated language identification of bibliographic resources (2020) 0.00
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
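     The article does not publish its code; one hedged approximation of the task, assuming the langdetect package and an illustrative two-entry mapping table of our own:

        from langdetect import detect  # pip install langdetect (assumed available)

        # Map detected ISO 639-1 codes to the MARC language codes catalog
        # records use; the two-entry table here is purely illustrative.
        ISO_TO_MARC = {"en": "eng", "de": "ger"}

        def marc_language_code(text):
            # 'und' is the MARC code for an undetermined language.
            return ISO_TO_MARC.get(detect(text), "und")

        print(marc_language_code("Automatic multi-word term extraction and its application"))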
    Date
    2. 3.2020 19:04:22
  18. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.00
    Abstract
     Linguistics is the scientific study of language, with an emphasis on language as spoken in everyday settings by human beings. It has a long history of interdisciplinarity, both internally and in its contributions to other fields, including information science. A linguistic perspective is beneficial in many ways in information science, since it examines the relationship between the forms of meaningful expressions and their social, cognitive, institutional, and communicative context; form and context are two perspectives on information that are actively studied, to different degrees, in information science. Examples of issues relevant to information science are presented for which the approach taken under a linguistic perspective is illustrated.
    Date
    27. 8.2011 14:22:33
  19. Riloff, E.: An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.00
    Abstract
     AutoSlog is a system that addresses the knowledge-engineering bottleneck for information extraction. AutoSlog automatically creates domain-specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in the terrorism, joint-venture, and microelectronics domains. Compares the performance of AutoSlog across the three domains, discusses the lessons learned, and presents results from two experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog.
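     A greatly simplified sketch of AutoSlog-style rule proposal (the trigger patterns and slot labels below are illustrative inventions, not the system's actual heuristics):

        import re

        # Each pattern, when it fires on a training sentence, proposes an
        # extraction rule for the noun phrase in the marked slot.
        PATTERNS = [
            (re.compile(r"(\w+(?: \w+)?) was (bombed|attacked)"), "target-of-{verb}"),
            (re.compile(r"(\w+(?: \w+)?) announced a joint venture"), "joint-venture-partner"),
        ]

        def propose_rules(sentence):
            rules = []
            for pattern, label in PATTERNS:
                m = pattern.search(sentence)
                if m:
                    name = label.format(verb=m.group(2)) if "{verb}" in label else label
                    rules.append((name, m.group(1)))
            return rules

        print(propose_rules("The embassy was bombed yesterday"))
        # [('target-of-bombed', 'The embassy')]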
    Date
    6. 3.1997 16:22:15
  20. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.00
    Abstract
     State-of-the-art review of natural language processing, updating an earlier review published in ARIST 22 (1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge-based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly.

Types

  • a 357
  • m 46
  • el 41
  • s 19
  • p 7
  • x 4
  • b 1
  • n 1
  • r 1