Search (153 results, page 1 of 8)

  • theme_ss:"Computerlinguistik"
  • language_ss:"e"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.17
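    The relevance figure beside each hit is a Lucene ClassicSimilarity (TF-IDF) score. As a minimal sketch, not part of the bibliographic record: one term's contribution is queryWeight x fieldWeight, with tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)). The constants below are the ones reported for this hit (doc 562); the function name and standalone script are illustrative.

        import math

        def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
            """One term's ClassicSimilarity contribution: queryWeight * fieldWeight."""
            tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
            idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 for docFreq=24, maxDocs=44218
            query_weight = idf * query_norm                  # 0.18622838
            field_weight = tf * idf * field_norm             # 0.56201804
            return query_weight * field_weight               # 0.10466371

        # constants from this hit's score breakdown
        print(classic_term_score(2.0, 24, 44218, 0.021966046, 0.046875))

    The per-document total is the coordination-weighted sum of such leaves; here coord(7/24) = 0.29166666 times the leaf sum 0.56713474 gives 0.16541429, shown rounded as 0.17.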
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.16
    
    Source
    https://arxiv.org/abs/2212.06721
    Type
    p
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.13
    
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Heyer, G.; Quasthoff, U.; Wittig, T.: Text Mining : Wissensrohstoff Text. Konzepte, Algorithmen, Ergebnisse (2006) 0.02
    
    Abstract
    A large part of the world's knowledge exists as digital text on the Internet or in intranets. Today's search engines tap this raw material only rudimentarily: they can recognize semantic relationships only to a limited extent. Everyone is waiting for the Semantic Web, in which the creators of texts add the semantics themselves, but that will take a long time yet. There is, however, a technology that already makes it possible to analyze and exploit semantic relationships in raw text: the research field of "text mining" uses statistical and pattern-based methods to extract, process, and use knowledge from texts, laying the groundwork for the search engines of the future. This is the first German textbook on this groundbreaking technology. What comes to mind at the word "Stich"? Some think of tennis, others of the card game Skat. Text mining can identify such different contexts automatically and present them as word networks. Which terms appear most frequently to the left and right of the word "Festplatte" (hard disk)? Which word forms and proper names have newly entered the German language since 2001? Text mining answers these and many other questions. Dive into a fascinating new discipline and discover previously unknown connections and perspectives; see how the raw material of text becomes knowledge. The book addresses students as well as practitioners with a focus on computer science, business informatics, and/or linguistics who want to learn about the foundations, methods, and applications of text mining and who are looking for ideas for implementing their own applications. It is based on work carried out in recent years in the Natural Language Processing group at the Institute of Computer Science of the University of Leipzig under the direction of Prof. Dr. Heyer. A wealth of practical examples of text mining concepts and algorithms gives the reader a comprehensive yet detailed understanding of the foundations and applications of text mining. Topics covered: knowledge and text; foundations of meaning analysis; text databases; language statistics; clustering; pattern analysis; hybrid methods; example applications; appendices on statistics and linguistic fundamentals. 360 pages, 54 figures, 58 tables, and 95 glossary terms, with a free e-learning course "Schnelleinstieg: Sprachstatistik"; an online certificate course with mentor and tutor support is planned to accompany the book.
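    The neighbour question above (which terms stand most often to the left and right of a word?) corresponds to a simple corpus statistic. A minimal sketch, with an illustrative function name and toy data not taken from the book:

        from collections import Counter

        def neighbor_profile(tokens, target, k=5):
            """Count the words that appear directly before and after `target`."""
            left, right = Counter(), Counter()
            for i, tok in enumerate(tokens):
                if tok == target:
                    if i > 0:
                        left[tokens[i - 1]] += 1
                    if i + 1 < len(tokens):
                        right[tokens[i + 1]] += 1
            return left.most_common(k), right.most_common(k)

        tokens = "die Festplatte ist voll , die Festplatte läuft".split()
        print(neighbor_profile(tokens, "Festplatte"))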
  5. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.02
    
    Series
    Fortschritte in der Wissensorganisation; Bd.13
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  6. Sprachtechnologie, mobile Kommunikation und linguistische Ressourcen : Beiträge zur GLDV Tagung 2005 in Bonn (2005) 0.01
    
    Abstract
    Language technology is going mobile. We increasingly encounter language-technology applications outside the office or our own four walls: users control their mobile phones, query databases, and carry out business transactions by spoken language. These areas eclectically apply models from linguistics, above all models that must be trained on linguistic resources such as wordnets or ontologies, but also models of dialogue representation and structure, such as turn taking. This volume collects the contributions to the main programme of the 2005 annual conference of the Gesellschaft für Linguistische Datenverarbeitung (GLDV), to the workshops GermaNet II and Turn Taking, and the submissions for the GLDV Prize 2005 for the best thesis.
    Content
    CONTENTS: Chris Biemann/Rainer Osswald: Automatische Erweiterung eines semantikbasierten Lexikons durch Bootstrapping auf großen Korpora - Ernesto William De Luca/Andreas Nürnberger: Supporting Mobile Web Search by Ontology-based Categorization - Rüdiger Gleim: HyGraph - Ein Framework zur Extraktion, Repräsentation und Analyse webbasierter Hypertextstrukturen - Felicitas Haas/Bernhard Schröder: Freges Grundgesetze der Arithmetik: Dokumentbaum und Formelwald - Ulrich Held/Andre Blessing/Bettina Säuberlich/Jürgen Sienel/Horst Rößler/Dieter Kopp: A personalized multimodal news service - Jürgen Hermes/Christoph Benden: Fusion von Annotation und Präprozessierung als Vorschlag zur Behebung des Rohtextproblems - Sonja Hüwel/Britta Wrede/Gerhard Sagerer: Semantisches Parsing mit Frames für robuste multimodale Mensch-Maschine-Kommunikation - Brigitte Krenn/Stefan Evert: Separating the wheat from the chaff - Corpus-driven evaluation of statistical association measures for collocation extraction - Jörn Kreutel: An application-centered Perspective on Multimodal Dialogue Systems - Jonas Kuhn: An Architecture for Parallel Corpus-based Grammar Learning - Thomas Mandl/Rene Schneider/Pia Schnetzler/Christa Womser-Hacker: Evaluierung von Systemen für die Eigennamenerkennung im crosslingualen Information Retrieval - Alexander Mehler/Matthias Dehmer/Rüdiger Gleim: Zur Automatischen Klassifikation von Webgenres - Charlotte Merz/Martin Volk: Requirements for a Parallel Treebank Search Tool - Sally Y.K. Mok: Multilingual Text Retrieval on the Web: The Case of a Cantonese-Dagaare-English Trilingual e-Lexicon -
    Darja Mönke: Ein Parser für natürlichsprachlich formulierte mathematische Beweise - Martin Müller: Ontologien für mathematische Beweistexte - Moritz Neugebauer: The status of functional phonological classification in statistical speech recognition - Uwe Quasthoff: Kookkurrenzanalyse und korpusbasierte Sachgruppenlexikographie - Reinhard Rapp: On the Relationship between Word Frequency and Word Familiarity - Ulrich Schade/Miloslaw Frey/Sebastian Becker: Computerlinguistische Anwendungen zur Verbesserung der Kommunikation zwischen militärischen Einheiten und deren Führungsinformationssystemen - David Schlangen/Thomas Hanneforth/Manfred Stede: Weaving the Semantic Web: Extracting and Representing the Content of Pathology Reports - Thomas Schmidt: Modellbildung und Modellierungsparadigmen in der computergestützten Korpuslinguistik - Sabine Schröder/Martina Ziefle: Semantic transparency of cellular phone menus - Thorsten Trippel/Thierry Declerck/Ulrich Held: Standardisierung von Sprachressourcen: Der aktuelle Stand - Charlotte Wollermann: Evaluation der audiovisuellen Kongruenz bei der multimodalen Sprachsynthese - Claudia Kunze/Lothar Lemnitzer: Anwendungen des GermaNet II: Einleitung - Claudia Kunze/Lothar Lemnitzer: Die Zukunft der Wortnetze oder die Wortnetze der Zukunft - ein Roadmap-Beitrag -
    Karel Pala: The Balkanet Experience - Peter M. Kruse/Andre Nauloks/Dietmar Rösner/Manuela Kunze: Clever Search: A WordNet Based Wrapper for Internet Search Engines - Rosmary Stegmann/Wolfgang Woerndl: Using GermaNet to Generate Individual Customer Profiles - Ingo Glöckner/Sven Hartrumpf/Rainer Osswald: From GermaNet Glosses to Formal Meaning Postulates - Aljoscha Burchardt/Katrin Erk/Anette Frank: A WordNet Detour to FrameNet - Daniel Naber: OpenThesaurus: ein offenes deutsches Wortnetz - Anke Holler/Wolfgang Grund/Heinrich Petith: Maschinelle Generierung assoziativer Termnetze für die Dokumentensuche - Stefan Bordag/Hans Friedrich Witschel/Thomas Wittig: Evaluation of Lexical Acquisition Algorithms - Iryna Gurevych/Hendrik Niederlich: Computing Semantic Relatedness of GermaNet Concepts - Roland Hausser: Turn-taking als kognitive Grundmechanik der Datenbanksemantik - Rodolfo Delmonte: Parsing Overlaps - Melanie Twiggs: Behandlung des Passivs im Rahmen der Datenbanksemantik - Sandra Hohmann: Intention und Interaktion - Anmerkungen zur Relevanz der Benutzerabsicht - Doris Helfenbein: Verwendung von Pronomina im Sprecher- und Hörmodus - Bayan Abu Shawar/Eric Atwell: Modelling turn-taking in a corpus-trained chatbot - Barbara März: Die Koordination in der Datenbanksemantik - Jens Edlund/Mattias Heldner/Joakim Gustafsson: Utterance segmentation and turn-taking in spoken dialogue systems - Ekaterina Buyko: Numerische Repräsentation von Textkorpora für Wissensextraktion - Bernhard Fisseni: ProofML - eine Annotationssprache für natürlichsprachliche mathematische Beweise - Iryna Schenk: Auflösung der Pronomen mit Nicht-NP-Antezedenten in spontansprachlichen Dialogen - Stephan Schwiebert: Entwurf eines agentengestützten Systems zur Paradigmenbildung - Ingmar Steiner: On the analysis of speech rhythm through acoustic parameters - Hans Friedrich Witschel: Text, Wörter, Morpheme - Möglichkeiten einer automatischen Terminologie-Extraktion.
    Series
    Sprache, Sprechen und Computer. Bd. 8
  7. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.01
    
    Date
    15. 3.2000 10:22:37
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    
    Date
    8. 3.2007 19:55:22
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  9. Feldman, S.: Find what I mean, not what I say : meaning-based search tools (2000) 0.01
    
    Abstract
    Report on computational-linguistic methods employed by various Internet search services
    Content
    With a compilation of addresses and a tabular overview of the linguistic tools in use
  10. Schmolz, H.: Anaphora resolution and text retrieval : a linguistic analysis of hypertexts (2013) 0.01
    
    Content
    Winner of the VFI dissertation prize 2014: "A convincing and thorough linguistic and quantitative analysis of a text element that has so far received little attention in information retrieval, carried out on a large, purpose-built hypertext corpus, and including the evaluation of the author's own resolution rules for use in future IR systems."
  11. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.01
    
    Abstract
    The research report "Rough Set Theory Based Automatic Text Categorization and the Handling of Semantic Heterogeneity" by Xueying Zhang has been published in English as a book. In her work, Zhang developed a method based on rough set theory that establishes relations between subject headings from different vocabularies. She was a member of staff at the IZ from 2003 to 2005 and has been an associate professor at the Nanjing University of Science and Technology since October 2005.
  12. Ontologie und Axiomatik der Wissensbasis von LILOG (1992) 0.01
    
  13. Pepper, S.; Arnaud, P.J.L.: Absolutely PHAB : toward a general model of associative relations (2020) 0.01
    
    Abstract
    There have been many attempts at classifying the semantic modification relations (R) of N + N compounds, but this work has not led to the acceptance of a definitive scheme, so that devising a reusable classification is a worthwhile aim. The scope of this undertaking is extended to other binominal lexemes, i.e. units that contain two thing-morphemes without explicitly stating R, like prepositional units, N + relational adjective units, etc. The 25-relation taxonomy of Bourque (2014) was tested against over 15,000 binominal lexemes from 106 languages and extended to a 29-relation scheme ("Bourque2") through the introduction of two new reversible relations. Bourque2 is then mapped onto Hatcher's (1960) four-relation scheme (extended by the addition of a fifth relation, similarity, as "Hatcher2"). This results in a two-tier system usable at different degrees of granularity. On account of its semantic proximity to compounding, metonymy is then taken into account, following Janda's (2011) suggestion that it plays a role in word formation; Peirsman and Geeraerts' (2006) inventory of 23 metonymic patterns is mapped onto Bourque2, confirming the identity of metonymic and binominal modification relations. Finally, Blank's (2003) and Koch's (2001) work on lexical semantics justifies the addition to the scheme of a third, superordinate level which comprises the three Aristotelean principles of similarity, contiguity and contrast.
  14. Rau, L.F.: Conceptual information extraction and retrieval from natural language input (198) 0.01
    
    Date
    16. 8.1998 13:29:20
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. pp. 527-533
    Imprint
    Paris : Centre des Hautes Études Internationales d'Informatique Documentaire
  15. Hutchins, W.J.; Somers, H.L.: ¬An introduction to machine translation (1992) 0.01
    
    Classification
    ES 960 Allgemeine und vergleichende Sprach- und Literaturwissenschaft. Indogermanistik. Außereuropäische Sprachen und Literaturen / Spezialbereiche der allgemeinen Sprachwissenschaft / Datenverarbeitung und Sprachwissenschaft. Computerlinguistik / Maschinelle Übersetzung (RVK)
  16. Tseng, Y.-H.: Automatic thesaurus generation for Chinese documents (2002) 0.00
    
    Abstract
    Tseng constructs a word co-occurrence based thesaurus by means of the automatic analysis of Chinese text. Words are identified by a longest dictionary match, supplemented by a keyword extraction algorithm that merges back nearby tokens and accepts shorter character strings if they occur more often than the longest string. Single-character auxiliary words are a major source of error, but this can be greatly reduced with the use of a stop list of 70 characters and 2,680 words. Extracted terms with their associated document weights are sorted by decreasing frequency, and the top of this list is associated using a Dice coefficient modified to account for the effect of longer documents on the weights of term pairs. Co-occurrence is measured not in the document as a whole but in paragraph- or sentence-sized sections, in order to reduce computation time; a window of 29 characters or 11 words was found to be sufficient. A thesaurus was produced from 25,230 Chinese news articles, and judges were asked to review the top 50 terms associated with each of 30 single-word query terms. They determined 69% to be relevant. (A minimal sketch of the windowed Dice association follows this record.)
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
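    The association step abstracted above reduces to computing a Dice coefficient over small co-occurrence windows rather than over whole documents. The following Python sketch illustrates that idea under stated assumptions: it uses plain sentence-sized windows and the unmodified Dice formula, omitting Tseng's document-length correction and the Chinese segmentation step; all names in it (dice_associations, docs) are illustrative, not from the paper.

    from collections import Counter
    from itertools import combinations

    def dice_associations(docs, top_k=50):
        # docs: list of documents; each document is a list of sentences,
        # each sentence a list of already-segmented terms.
        term_windows = Counter()   # n(a): windows containing term a
        pair_windows = Counter()   # n(a,b): windows containing both a and b
        for doc in docs:
            for sentence in doc:
                terms = set(sentence)              # count each term once per window
                term_windows.update(terms)
                for a, b in combinations(sorted(terms), 2):
                    pair_windows[(a, b)] += 1
        # Dice coefficient: 2 * n(a,b) / (n(a) + n(b))
        scores = {
            pair: 2.0 * n / (term_windows[pair[0]] + term_windows[pair[1]])
            for pair, n in pair_windows.items()
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # Toy example with two pre-segmented "documents":
    docs = [
        [["trade", "agreement", "china"], ["trade", "tariff"]],
        [["china", "tariff", "agreement"], ["export", "trade", "china"]],
    ]
    for pair, score in dice_associations(docs, top_k=3):
        print(pair, round(score, 3))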
  17. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.00
    
    Date
    10.12.2000 18:22:35
    Source
    Informationskompetenz - Basiskompetenz in der Informationsgesellschaft: Proceedings des 7. Internationalen Symposiums für Informationswissenschaft (ISI 2000), Hrsg.: G. Knorz u. R. Kuhlen
  18. Rahmstorf, G.: Information retrieval using conceptual representations of phrases (1994) 0.00
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  19. Jaaranen, K.; Lehtola, A.; Tenni, J.; Bounsaythip, C.: Webtran tools for in-company language support (2000) 0.00
    
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
  20. Sebastiani, F.: ¬A tutorial on automated text categorisation (1999) 0.00
    
    Abstract
    The automated categorisation (or classification) of texts into topical categories has a long history, dating back at least to 1960. Until the late '80s, the dominant approach to the problem involved knowledge-engineering the categoriser, i.e. manually building a set of rules encoding expert knowledge on how to classify documents. In the '90s, with the booming production and availability of on-line documents, automated text categorisation witnessed increased and renewed interest. A newer paradigm based on machine learning has superseded the previous approach. Within this paradigm, a general inductive process automatically builds a classifier by "learning", from a set of previously classified documents, the characteristics of one or more categories; the advantages are very good effectiveness, considerable savings in expert manpower, and domain independence. In this tutorial we look at the main approaches that have been taken towards automatic text categorisation within the general machine learning paradigm. Issues of document indexing, classifier construction, and classifier evaluation are touched upon. (A minimal sketch of such a learned classifier follows this record.)
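    The inductive paradigm the tutorial describes can be made concrete in a few lines. The Python sketch below learns a categoriser from previously classified documents instead of hand-written rules, combining document indexing (a bag-of-words representation) with classifier construction (naive Bayes). The use of scikit-learn and the toy documents and category names are assumptions for illustration; any inductive learner would serve equally well.

    # Learn a text categoriser from labelled examples, then apply it
    # to an unseen document -- one simple instance of the machine-learning
    # approach, not the specific systems surveyed in the tutorial.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy training set: (document, category) pairs -- illustrative only.
    train_docs = [
        "wheat prices rose on grain markets",
        "corn and wheat harvest estimates",
        "central bank raises interest rates",
        "inflation and monetary policy outlook",
    ]
    train_labels = ["agriculture", "agriculture", "finance", "finance"]

    # Document indexing (bag of words) + classifier construction (learning).
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(train_docs, train_labels)

    # Classify a new, previously unseen document.
    print(model.predict(["grain exports and wheat futures"])[0])  # agriculture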

Types

  • a 118
  • el 18
  • m 16
  • s 9
  • p 7
  • x 3