Search (363 results, page 1 of 19)

  • × language_ss:"e"
  • × theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.24
    0.24404049 = product of:
      0.6101012 = sum of:
        0.046024837 = product of:
          0.1380745 = sum of:
            0.1380745 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.1380745 = score(doc=562,freq=2.0), product of:
                0.24567628 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.028978055 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.1380745 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.1380745 = score(doc=562,freq=2.0), product of:
            0.24567628 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028978055 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.1380745 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.1380745 = score(doc=562,freq=2.0), product of:
            0.24567628 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028978055 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.1380745 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.1380745 = score(doc=562,freq=2.0), product of:
            0.24567628 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028978055 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.1380745 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.1380745 = score(doc=562,freq=2.0), product of:
            0.24567628 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.028978055 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.011778379 = product of:
          0.023556758 = sum of:
            0.023556758 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.023556758 = score(doc=562,freq=2.0), product of:
                0.101476215 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028978055 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.4 = coord(6/15)
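The breakdown above is Lucene's ClassicSimilarity explain output. As a rough sketch (assuming Lucene's classic TF-IDF formulas, tf = √freq and idf = 1 + ln(maxDocs/(docFreq+1)), with queryNorm and fieldNorm read off the explain tree rather than derived), the leaf weight for the `_text_:3a` term can be reproduced:

```python
import math

# Values taken from the explain tree above (doc 562, term "_text_:3a")
freq = 2.0
doc_freq, max_docs = 24, 44218
query_norm = 0.028978055   # queryNorm, given in the explain output
field_norm = 0.046875      # fieldNorm, given in the explain output

tf = math.sqrt(freq)                              # 1.4142135 = tf(freq=2.0)
idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 8.478011 = idf(docFreq=24, maxDocs=44218)
query_weight = idf * query_norm                   # 0.24567628 = queryWeight
field_weight = tf * idf * field_norm              # 0.56201804 = fieldWeight
weight = query_weight * field_weight              # 0.1380745  = weight(_text_:3a)
```

The document score then multiplies the summed term weights by the coordination factor: 0.6101012 × coord(6/15) = 0.6101012 × 0.4 ≈ 0.244, matching the 0.24 shown in the result header.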
    
    Content
    Vgl.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
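The "Vgl." link above is a Google redirect wrapper: the actual target sits percent-encoded in its `url` query parameter. A minimal sketch of recovering it with Python's standard library (the trailing tracking parameters are irrelevant to the decode and are shortened here):

```python
from urllib.parse import urlparse, parse_qs

redirect = ("http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja"
            "&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc"
            "%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf"
            "&ei=dOXrUMeIDYHDtQahsIGACg")

# parse_qs splits the query string on "&" and percent-decodes each value,
# so the "url" parameter comes back as a plain URL.
target = parse_qs(urlparse(redirect).query)["url"][0]
# target == "http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf"
```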
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.20
    Source
https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.19
    Abstract
    In this thesis we propose three new word association measures for multi-word term extraction. We combine these association measures with LocalMaxs algorithm in our extraction model and compare the results of different multi-word term extraction methods. Our approach is language and domain independent and requires no training data. It can be applied to such tasks as text summarization, information retrieval, and document classification. We further explore the potential of using multi-word terms as an effective representation for general web-page summarization. We extract multi-word terms from human written summaries in a large collection of web-pages, and generate the summaries by aligning document words with these multi-word terms. Our system applies machine translation technology to learn the aligning process from a training set and focuses on selecting high quality multi-word terms from human written summaries to generate suitable results for web-page summarization.
    Content
A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Vgl. unter: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Vichot, F.; Wolinski, F.; Tomeh, J.; Guennou, S.; Dillet, B.; Aydjian, S.: High precision hypertext navigation based on NLP automatic extractions (1997) 0.02
    Series
    Schriften zur Informationswissenschaft; Bd.30
    Source
    Hypertext - Information Retrieval - Multimedia '97: Theorien, Modelle und Implementierungen integrierter elektronischer Informationssysteme. Proceedings HIM '97. Hrsg.: N. Fuhr u.a
  5. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.01
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  6. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.01
    Date
    15. 3.2000 10:22:37
    Source
    Journal of information science. 25(1999) no.2, S.113-131
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.01
    Date
    10.12.2000 18:22:35
    Series
    Schriften zur Informationswissenschaft; Bd.38
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  8. Kocijan, K.: Visualizing natural language resources (2015) 0.01
    Series
    Schriften zur Informationswissenschaft; Bd.66
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl u. C. Wolff
  9. Heyer, G.; Quasthoff, U.; Wittig, T.: Text Mining : Wissensrohstoff Text. Konzepte, Algorithmen, Ergebnisse (2006) 0.01
    Abstract
A large part of the world's knowledge exists in the form of digital texts on the Internet or in intranets. Today's search engines exploit this raw material of knowledge only rudimentarily: they can recognize semantic relationships only to a limited extent. Everyone is waiting for the Semantic Web, in which the authors of texts add the semantics themselves, but that will still take a long time. There is, however, a technology that already makes it possible to analyze and prepare semantic relationships in raw text. The research field of "text mining" uses statistical and pattern-based methods to extract, process, and exploit knowledge from texts, laying the groundwork for the search engines of the future. This is the first German textbook on this groundbreaking technology. What comes to mind at the word "Stich"? Some think of tennis, others of the card game Skat. Text mining can determine such differing contexts automatically and display them as word graphs. Which terms appear most frequently to the left and right of the word "Festplatte" (hard disk)? Which word forms and proper names have newly entered the German language since 2001?
Text mining answers these and many other questions. With this textbook, readers can dive into a new, fascinating scientific discipline and discover previously unknown relationships and perspectives, seeing how the raw material of text becomes knowledge. The book addresses both students and practitioners with a background in computer science, business informatics, and/or linguistics who want to learn about the foundations, methods, and applications of text mining and seek inspiration for implementing their own applications. It is based on work carried out in recent years in the Automatic Language Processing group at the Institute of Computer Science of the University of Leipzig under the direction of Prof. Dr. Heyer. A wealth of practical examples of text-mining concepts and algorithms gives the reader a comprehensive yet detailed understanding of the field's foundations and applications. Topics covered: knowledge and text; foundations of meaning analysis; text databases; language statistics; clustering; pattern analysis; hybrid methods; example applications; appendices on statistics and linguistic foundations. 360 pages, 54 figures, 58 tables, and 95 glossary entries, with a free e-learning course "Schnelleinstieg: Sprachstatistik"; in addition to the book, an online certificate course with mentor and tutor support is forthcoming.
  10. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    Date
    8. 3.2007 19:55:22
    Source
    Context: nature, impact and role. 5th International Conference an Conceptions of Library and Information Sciences, CoLIS 2005 Glasgow, UK, June 2005. Ed. by F. Crestani u. I. Ruthven
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  11. Warner, A.J.: Natural language processing (1987) 0.01
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  12. Schwarz, C.: Natural language and information retrieval : Kommentierte Literaturliste zu Systemen, Verfahren und Tools (1986) 0.00
  13. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000)
    
    Series
    Schriften zur Informationswissenschaft; Bd.38
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  14. Hutchins, W.J.; Somers, H.L.: An introduction to machine translation (1992)
    
    Abstract
    The translation of foreign language texts by computers was one of the first tasks that the pioneers of Computing and Artificial Intelligence set themselves. Machine translation is again becoming an important field of research and development as the need for translations of technical and commercial documentation is growing well beyond the capacity of the translation profession. This is the first textbook of machine translation, providing a full course on both general machine translation system characteristics and the computational linguistic foundations of the field. The book assumes no previous knowledge of machine translation and provides the basic background information in linguistics, computational linguistics, artificial intelligence, natural language processing and information science.
    Classification
    ES 960 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Spezialbereiche der allgemeinen Sprachwissenschaft / Datenverarbeitung und Sprachwissenschaft. Computerlinguistik / Maschinelle Übersetzung
  15. Paolillo, J.C.: Linguistics and the information sciences (2009)
    
    Abstract
    Linguistics is the scientific study of language which emphasizes language spoken in everyday settings by human beings. It has a long history of interdisciplinarity, both internally and in contribution to other fields, including information science. A linguistic perspective is beneficial in many ways in information science, since it examines the relationship between the forms of meaningful expressions and their social, cognitive, institutional, and communicative context, these being two perspectives on information that are actively studied, to different degrees, in information science. Examples of issues relevant to information science are presented for which the approach taken under a linguistic perspective is illustrated.
    Date
    27. 8.2011 14:22:33
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  16. Riloff, E.: An empirical study of automated dictionary construction for information extraction in three domains (1996)
    
    Abstract
    AutoSlog is a system that addresses the knowledge engineering bottleneck for information extraction. AutoSlog automatically creates domain specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in terrorism, joint ventures and microelectronics domains. Compares the performance of AutoSlog across the 3 domains, discusses the lessons learned and presents results from 2 experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog
    Date
    6. 3.1997 16:22:15
  17. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998)
    
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD) which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  18. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996)
    
    Abstract
    State of the art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly
    Source
    Annual review of information science and technology. 31(1996), S.83-119
  19. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2013)
    
    Content
    Winner of the VFI Dissertation Prize 2014: "A convincing and thorough linguistic and quantitative analysis of a text element that has so far received little attention in information retrieval, based on a large, purpose-built hypertext corpus, including the evaluation of self-developed resolution rules for use in future IR systems."
  20. Wanner, L.: Lexical choice in text generation and machine translation (1996)
    
    Abstract
    Presents the state of the art in lexical choice research in text generation and machine translation. Discusses the existing implementations with respect to: the place of lexical choice in the overall generation process; the information flow within the generation process and the consequences thereof for lexical choice; the internal organization of the lexical choice process; and the phenomena covered by lexical choice. Identifies possible future directions in lexical choice research
    Date
    31. 7.1996 9:22:19
