Search (109 results, page 1 of 6)

  • Filter: theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.16
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.10
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.02
    
    Abstract
    How do children learn to speak? What do their errors during language acquisition reveal about the course of the learning process, true to the motto "kids say the darndest things"? And how do computers help, or why have they so far failed, in simulating the neural networks involved in the intricate fabric of human language? In his new book Wörter und Regeln (Words and Rules), the well-known American cognitive scientist Steven Pinker (Der Sprachinstinkt) once again undertakes an exploration of the realm of language that is as informative as it is entertaining. What makes it particularly exciting and worth reading is that the professor at the Massachusetts Institute of Technology confidently illuminates aspects of both the natural sciences and the humanities. On the one hand he conveys linguistic foundations in the footsteps of Ferdinand de Saussure, such as those of a generative grammar, offers an excursion through the history of language, and devotes a chapter of its own to the "horrors of the German language". On the other hand he does not leave out the latest imaging techniques, which show what happens in the brain during language processing. Pinker's theory, which runs through this puzzle of diverse aspects, is that language essentially consists of two components: a mental lexicon of memorized words and a mental grammar of combinatorial rules. Concretely, this means that we memorize familiar items with their graded, cross-cutting features, but we also generate new mental products by applying rules. It is precisely from this, Pinker concludes, that the richness and enormous expressive power of our language arises.
    Date
    19. 7.2002 14:22:31
  5. Basili, R.; Pazienza, M.T.; Velardi, P.: ¬An empirical symbolic approach to natural language processing (1996) 0.02
    
    Abstract
    Describes and evaluates the results of a large-scale lexical learning system, ARISTO-LEX, that uses a combination of probabilistic and knowledge-based methods for the acquisition of selectional restrictions of words in sublanguages. Presents experimental data obtained from different corpora in different domains and languages, and shows that the acquired lexical data not only have practical applications in natural language processing but are also useful for a comparative analysis of sublanguages.
    Date
    6. 3.1997 16:22:15
  6. Morris, V.: Automated language identification of bibliographic resources (2020) 0.02
    
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
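    The record above reports that machine learning was used to assign language codes to catalogue records, but the method itself is not spelled out. Purely as an illustration, and not the British Library system described in the abstract, the minimal sketch below guesses a language code from character-trigram overlap; the training snippets, the profile size and the three codes (eng, ger, fre) are assumptions made for the example.

      # Hypothetical illustration only: a tiny character-trigram language guesser.
      # The training snippets, profile size and codes below are invented for the example.
      from collections import Counter

      def trigram_profile(text, top=300):
          """Return the most frequent character trigrams of a text."""
          text = " " + " ".join(text.lower().split()) + " "
          grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
          return {g for g, _ in grams.most_common(top)}

      # Toy "training" material; a real system would use far larger samples.
      SAMPLES = {
          "eng": "the catalogue record describes the language of the resource",
          "ger": "der katalogeintrag beschreibt die sprache der ressource",
          "fre": "la notice decrit la langue de la ressource cataloguee",
      }
      PROFILES = {code: trigram_profile(text) for code, text in SAMPLES.items()}

      def guess_language(title):
          """Pick the code whose trigram profile overlaps most with the title."""
          scores = {code: len(trigram_profile(title) & prof)
                    for code, prof in PROFILES.items()}
          return max(scores, key=scores.get)

      print(guess_language("Die Natur der Sprache"))   # expected: ger

    A production system would of course train on far larger samples and attach a confidence value to each assignment, as the 99.7% figure quoted in the abstract suggests.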
  7. Heyer, G.; Quasthoff, U.; Wittig, T.: Text Mining : Wissensrohstoff Text. Konzepte, Algorithmen, Ergebnisse (2006) 0.02
    
    Abstract
    A large part of the world's knowledge exists in the form of digital texts on the Internet or in intranets. Today's search engines exploit this raw material of knowledge only rudimentarily: they can recognize semantic relationships only to a limited extent. Everyone is waiting for the Semantic Web, in which the creators of text insert the semantics themselves, but that will still take a long time. There is, however, a technology that already makes it possible to analyze and prepare semantic relationships in raw text: the research field of "text mining" uses statistical and pattern-based methods to extract, process and exploit knowledge from texts, laying the groundwork for the search engines of the future. This is the first German textbook on this groundbreaking technology: Text Mining: Wissensrohstoff Text - Konzepte, Algorithmen, Ergebnisse. What comes to mind when you hear the word "Stich"? Some think of tennis, others of the card game Skat. The different contexts can be determined automatically by text mining and displayed as word networks. Which terms occur most frequently to the left and right of the word "Festplatte"? Which word forms and proper names have newly entered the German language since 2001? Text mining answers these and many other questions. With this textbook you can dive into a new and fascinating scientific discipline, discover previously unknown relationships and perspectives, and see how the raw material text becomes knowledge. The textbook is aimed at students as well as practitioners with a background in computer science, business informatics and/or linguistics who want to learn about the foundations, methods and applications of text mining and are looking for ideas for implementing their own applications. It is based on work carried out in recent years in the Automatic Language Processing group at the Institute of Computer Science of the University of Leipzig under the direction of Prof. Dr. Heyer. A wealth of practical examples of text mining concepts and algorithms gives the reader a comprehensive yet detailed understanding of the foundations and applications of text mining. Topics covered: knowledge and text; foundations of semantic analysis; text databases; language statistics; clustering; pattern analysis; hybrid methods; example applications; appendices on statistics and linguistic foundations. 360 pages, 54 figures, 58 tables and 95 glossary entries. Includes a free e-learning course, "Schnelleinstieg: Sprachstatistik". In addition to the book, an online certificate course with mentor and tutor support will shortly be available.
    Series
    IT lernen
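    The abstract above asks which terms occur most frequently to the left and right of a word such as "Festplatte". As a rough illustration of that kind of neighbour analysis, and not code taken from the book, the sketch below counts the immediate left and right neighbours of a target word in a toy corpus; the tokenization and the example sentences are invented.

      # Hypothetical sketch: count words directly left and right of a target word.
      from collections import Counter
      import re

      def neighbours(corpus, target):
          """Return (left, right) Counters of words adjacent to `target`."""
          tokens = re.findall(r"\w+", corpus.lower())
          left, right = Counter(), Counter()
          for i, tok in enumerate(tokens):
              if tok == target:
                  if i > 0:
                      left[tokens[i - 1]] += 1
                  if i + 1 < len(tokens):
                      right[tokens[i + 1]] += 1
          return left, right

      corpus = ("Die neue Festplatte ist schnell. Eine externe Festplatte speichert "
                "Daten. Die alte Festplatte ist voll.")
      left, right = neighbours(corpus, "festplatte")
      print(left.most_common(3), right.most_common(3))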
  8. Schürmann, H.: Software scannt Radio- und Fernsehsendungen : Recherche in Nachrichtenarchiven erleichtert (2001) 0.01
    
    Abstract
    Computers must learn to understand human language. Researchers at the University of Duisburg have developed a method with which a computer can filter information out of radio broadcasts.
    Source
    Handelsblatt. Nr.79 vom 24.4.2001, S.22
  9. Maschinelle Sprachsynthese (1996) 0.01
    
    Abstract
    Children learn it in primary school, but for a computer it is still one of the hardest tasks of all: reading aloud. Not only is the production of sounds by the human speech organs an extremely complex process that is difficult to capture in simplified models; to pronounce a sentence correctly, one must also have grasped a considerable part of its meaning. Nevertheless, the machine reproduction of this process is making progress, from linguistic preprocessing of the written text all the way to a marketable product: the talking car radio.
  10. Warner, A.J.: Natural language processing (1987) 0.01
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  11. Geißler, S.: Maschinelles Lernen und NLP : Reif für die industrielle Anwendung! (2019) 0.01
    
  12. Kuo, J.-S.; Li, H.; Yang, Y.-K.: Active learning for constructing transliteration lexicons from the Web (2008) 0.01
    
    Abstract
    This article presents an adaptive learning framework for Phonetic Similarity Modeling (PSM) that supports the automatic construction of transliteration lexicons. The learning algorithm starts with minimum prior knowledge about machine transliteration and acquires knowledge iteratively from the Web. We study the unsupervised learning and the active learning strategies that minimize human supervision in terms of data labeling. The learning process refines the PSM and constructs a transliteration lexicon at the same time. We evaluate the proposed PSM and its learning algorithm through a series of systematic experiments, which show that the proposed framework is reliably effective on two independent databases.
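    The abstract describes an adaptive process that acquires labels iteratively while refining a Phonetic Similarity Model (PSM). The sketch below is only a generic pool-based active-learning skeleton under that reading: the similarity function is a placeholder rather than a real PSM, the candidate pairs are invented, and the oracle stands in for the human labelling step.

      # Generic active-learning skeleton, loosely following the idea in the abstract.
      # The scoring "model", the candidate pairs and the oracle are placeholders.
      import random

      def similarity(model, pair):
          """Placeholder score in [0, 1]; a real PSM would compare phonetic features."""
          return model.get(pair, random.random())

      def active_learning(candidates, oracle, rounds=3, batch=2):
          model, lexicon = {}, []
          pool = list(candidates)
          for _ in range(rounds):
              # Query the pairs the model is least certain about (score nearest 0.5).
              pool.sort(key=lambda p: abs(similarity(model, p) - 0.5))
              queried, pool = pool[:batch], pool[batch:]
              for pair in queried:
                  label = oracle(pair)                 # human (or simulated) judgement
                  model[pair] = 1.0 if label else 0.0  # trivial stand-in for retraining
                  if label:
                      lexicon.append(pair)
          return lexicon

      candidates = [("Clinton", "克林顿"), ("London", "伦敦"), ("Clinton", "伦敦")]
      oracle = lambda pair: pair in {("Clinton", "克林顿"), ("London", "伦敦")}
      print(active_learning(candidates, oracle))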
  13. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.01
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  14. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    
    Date
    8.10.2000 11:52:22
  15. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    
    Date
    31. 7.1996 9:22:19
  16. New tools for human translators (1997) 0.01
    
    Date
    31. 7.1996 9:22:19
  17. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.01
    
    Date
    28. 2.1999 10:48:22
  18. ¬Der Student aus dem Computer (2023) 0.01
    
    Date
    27. 1.2023 16:22:55
  19. Witschel, H.F.: Text, Wörter, Morpheme : Möglichkeiten einer automatischen Terminologie-Extraktion (2004) 0.01
    
    Abstract
    This thesis deals with a subfield of text mining, namely the extraction of information (in this case specialist terminology) from natural-language text. Its underlying thesis is that in many areas of text mining a combination of different methods can be useful in order to do justice to the many facets of natural language. The methods applied to terminology extraction are of a statistical and a linguistic (or pattern-based) nature. To derive them, several properties of technical terms that are relevant to their extraction were identified. For example, the fact that many technical terms are noun phrases of a particular form can be exploited directly in a search for certain part-of-speech patterns, while the distribution of terms in specialist texts led to a statistical approach, difference analysis. Together with a few further ones, these approaches were integrated into a procedure that is able to learn from a user's feedback and to refine the search for terminology over several iterations. Several parameters of the procedure were left adjustable, i.e. the user can tune them as desired. An examination of the results on two specialist texts from different domains showed that, although the individual methods complement each other well, the optimal values of the adjustable parameters, and even the choice of which methods to apply, are text- and domain-dependent.
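    The abstract above combines part-of-speech patterns for candidate noun phrases with a statistical difference analysis of term distributions. The sketch below illustrates both ideas in a strongly simplified, assumed form: the two POS patterns, the smoothing constant and the toy tagged texts are inventions for the example and do not reproduce the actual procedure of the thesis.

      # Hedged sketch: (1) ADJ+NOUN / NOUN+NOUN bigrams as candidate terms,
      # (2) ranking by relative frequency in the domain text vs. a reference text.
      from collections import Counter

      def candidate_terms(tagged):
          """Bigrams whose tags match crude ADJ+NOUN or NOUN+NOUN patterns."""
          terms = []
          for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
              if t1 in {"ADJ", "NOUN"} and t2 == "NOUN":
                  terms.append(f"{w1} {w2}")
          return terms

      def difference_score(term, domain_counts, ref_counts, smooth=1.0):
          """Relative frequency in the domain text divided by that in the reference."""
          d = (domain_counts[term] + smooth) / sum(domain_counts.values())
          r = (ref_counts[term] + smooth) / max(sum(ref_counts.values()), 1)
          return d / r

      # Tiny invented example; hand-assigned tags stand in for a real POS tagger.
      domain = [("neural", "ADJ"), ("network", "NOUN"), ("training", "NOUN"),
                ("neural", "ADJ"), ("network", "NOUN"), ("data", "NOUN")]
      reference = [("nice", "ADJ"), ("weather", "NOUN"), ("data", "NOUN")]
      dom_counts = Counter(candidate_terms(domain))
      ref_counts = Counter(candidate_terms(reference))
      ranked = sorted(dom_counts,
                      key=lambda t: difference_score(t, dom_counts, ref_counts),
                      reverse=True)
      print(ranked)   # "neural network" should rank first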
  20. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users were English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it was predicted that there would be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the following five years; by 2005, 57% of Internet users would be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had doubled from 8.9 million to 16.9 million between January and June 2000 ("Report: China Internet users double to 17 million," CNN.com, July 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002, and China became the second-largest at-home Internet population in the world in 2002 (the US Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence shows the importance of cross-lingual research for meeting the needs of the near future. Digital library research has in the past focused on structural and semantic interoperability; searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). Research on crossing language boundaries, however, especially between European and Oriental languages, is still at an initial stage. In this proposal we focus on cross-lingual semantic interoperability by developing the automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult a thesaurus to identify other relevant vocabulary. For searching across language boundaries, a cross-lingual thesaurus generated by co-occurrence analysis and a Hopfield network can be used to suggest additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Owing to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents, so English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in the language that differs from that of the input term; the direct translation of the input term can also be retrieved in most cases.
    Footnote
    Part of a special issue: "Web retrieval and mining: A machine learning perspective"
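    The abstract above describes generating a cross-lingual thesaurus by co-occurrence analysis and a Hopfield network over a sentence-aligned English/Chinese legal corpus. The sketch below covers only the simpler co-occurrence step on an invented aligned corpus; the Hopfield-network spreading activation of the paper is deliberately omitted, and the segmentation of the Chinese side is assumed to be given.

      # Simplified sketch of the co-occurrence step: English and Chinese terms that
      # share aligned sentence pairs are associated and ranked. The corpus is invented.
      from collections import Counter

      ALIGNED = [
          (["the", "court", "dismissed", "the", "appeal"], ["法院", "驳回", "上诉"]),
          (["the", "court", "heard", "the", "case"], ["法院", "审理", "案件"]),
          (["the", "appeal", "was", "allowed"], ["上诉", "获准"]),
      ]

      def crosslingual_associations(aligned):
          """Count how often an English and a Chinese term share an aligned sentence."""
          pairs = Counter()
          for eng_sent, chi_sent in aligned:
              for e in set(eng_sent):
                  for c in set(chi_sent):
                      pairs[(e, c)] += 1
          return pairs

      def related_terms(term, assoc, top=3):
          """Chinese terms most strongly associated with an English input term."""
          scored = [(c, n) for (e, c), n in assoc.items() if e == term]
          return sorted(scored, key=lambda x: x[1], reverse=True)[:top]

      assoc = crosslingual_associations(ALIGNED)
      print(related_terms("court", assoc))   # 法院 should rank first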


Languages

  • e 80
  • d 30
  • m 1

Types

  • a 80
  • el 19
  • m 11
  • s 8
  • x 4
  • p 3
  • d 1
