Search (21 results, page 1 of 2)

  • year_i:[2000 TO 2010}
  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10010527 = sum of:
      0.07970713 = product of:
        0.23912138 = sum of:
          0.23912138 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23912138 = score(doc=562,freq=2.0), product of:
              0.42546922 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05018503 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020398136 = product of:
        0.040796272 = sum of:
          0.040796272 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.040796272 = score(doc=562,freq=2.0), product of:
              0.17573942 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05018503 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
     Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
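Every relevance score on this page is a Lucene ClassicSimilarity (TF-IDF) explanation tree. As a check, here is a minimal Python sketch that reproduces result 1's total score using only the constants printed in the tree above; the formulas (idf = 1 + ln(maxDocs/(docFreq+1)), tf = sqrt(freq)) are standard ClassicSimilarity, everything else is read off the tree.

```python
import math

def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One term clause of Lucene ClassicSimilarity, as printed in the trees above."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                    # queryWeight
    field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight                 # weight(term in doc)

query_norm = 0.05018503  # shared queryNorm from the trees above

# weight(_text_:3a in 562): freq=2, docFreq=24, fieldNorm=0.046875, coord(1/3)
w_3a = clause_score(2.0, 24, 44218, query_norm, 0.046875) * (1 / 3)

# weight(_text_:22 in 562): freq=2, docFreq=3622, fieldNorm=0.046875, coord(1/2)
w_22 = clause_score(2.0, 3622, 44218, query_norm, 0.046875) * (1 / 2)

print(w_3a + w_22)  # ~0.100105; the page shows 0.10010527 (Lucene rounds in 32-bit floats)
```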
  2. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.07
    0.07145042 = product of:
      0.14290084 = sum of:
        0.14290084 = sum of:
          0.108903944 = weight(_text_:lexikon in 734) [ClassicSimilarity], result of:
            0.108903944 = score(doc=734,freq=2.0), product of:
              0.31453675 = queryWeight, product of:
                6.2675414 = idf(docFreq=227, maxDocs=44218)
                0.05018503 = queryNorm
              0.346236 = fieldWeight in 734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.2675414 = idf(docFreq=227, maxDocs=44218)
                0.0390625 = fieldNorm(doc=734)
          0.033996895 = weight(_text_:22 in 734) [ClassicSimilarity], result of:
            0.033996895 = score(doc=734,freq=2.0), product of:
              0.17573942 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05018503 = queryNorm
              0.19345059 = fieldWeight in 734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=734)
      0.5 = coord(1/2)
    
    Abstract
     How do children learn to speak? What clues do their errors during language acquisition give about how the learning process unfolds, true to the motto "kids say the darnedest things"? And how do computers help, or why have they so far failed, in simulating the neural networks that contribute to the intricate fabric of human language? In his new book Wörter und Regeln, the well-known American cognitive scientist Steven Pinker (The Language Instinct) has once again undertaken an exploration of the realm of language that is as informative as it is entertaining. What makes it especially exciting and worth reading is that the professor at the Massachusetts Institute of Technology confidently illuminates both natural-science and humanities aspects. On the one hand he conveys linguistic foundations in the footsteps of Ferdinand de Saussure, such as those of a generative grammar, offers an excursion through the history of language, and devotes a chapter of its own to the "horrors of the German language". On the other hand he does not leave out the latest imaging techniques, which show what happens in the brain during language processing. Pinker's theory, which emerges from this puzzle of diverse aspects: at its core, language consists of two components, a mental lexicon of memorized words and a mental grammar of various combinatorial rules. Concretely, this means that we memorize familiar items together with their graded, intersecting features, but we also generate new mental products by applying rules. It is precisely from this, Pinker concludes, that the richness and enormous expressive power of our language derives.
    Date
    19. 7.2002 14:22:31
  3. Egger, W.: Helferlein für jedermann : Elektronische Wörterbücher (2004) 0.05
    0.054451972 = product of:
      0.108903944 = sum of:
        0.108903944 = product of:
          0.21780789 = sum of:
            0.21780789 = weight(_text_:lexikon in 1501) [ClassicSimilarity], result of:
              0.21780789 = score(doc=1501,freq=2.0), product of:
                0.31453675 = queryWeight, product of:
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.05018503 = queryNorm
                0.692472 = fieldWeight in 1501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1501)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    Software: Der große Lexikon-Ratgeber
  4. dpa: 14 Forscher mit viel Geld angelockt : Wolfgang-Paul-Preis (2001) 0.03
    0.032671183 = product of:
      0.06534237 = sum of:
        0.06534237 = product of:
          0.13068473 = sum of:
            0.13068473 = weight(_text_:lexikon in 6814) [ClassicSimilarity], result of:
              0.13068473 = score(doc=6814,freq=2.0), product of:
                0.31453675 = queryWeight, product of:
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.05018503 = queryNorm
                0.4154832 = fieldWeight in 6814, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6814)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Darin. "Die Sprachwissenschaftlerin Christiane Fellbaum (dpa-Bild) wird ihr Preisgeld für das an der Berlin-Brandenburgischen Akademie der Wissenschaften zu erstellende "Digitale Wörterbuch der Deutschen Sprache des 20. Jahrhunderts" einsetzen. Sie setzt mit ihrem Computer dort an, wo konventionelle Wörterbücher nicht mehr mithalten können. Sie stellt per Knopfdruck Wortverbindungen her, die eine Sprache so reich an Bildern und Vorstellungen - und damit einzigartig - machen. Ihr elektronisches Lexikon aus über 500 Millionen Wörtern soll später als Datenbank zugänglich sein. Seine Grundlage ist die deutsche Sprache der vergangenen hundert Jahre - ein repräsentativer Querschnitt, zusammengestellt aus Literatur, Zeitungsdeutsch, Fachbuchsprache, Werbetexten und niedergeschriebener Umgangssprache. Wo ein Wörterbuch heute nur ein Wort mit Synonymen oder wenigen Verwendungsmöglichkeiten präsentiert, spannt die Forscherin ein riesiges Netz von Wortverbindungen. Bei Christiane Fellbaums Systematik heißt es beispielsweise nicht nur "verlieren", sondern auch noch "den Faden" oder "die Geduld" verlieren - samt allen möglichen weiteren Kombinationen, die der Computer wie eine Suchmaschine in seinen gespeicherten Texten findet."
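Illustrative only: a toy version of the push-button lookup the article describes, ranking the word combinations attested for a verb in stored text. The mini-corpus and counts here are invented; the real lexicon draws on more than 500 million words.

```python
from collections import Counter

# Invented verb-object pairs standing in for a parsed corpus.
corpus = [
    ("verlieren", "den Faden"), ("verlieren", "die Geduld"),
    ("verlieren", "den Faden"), ("verlieren", "das Spiel"),
    ("finden", "den Faden"),
]

def combinations_for(verb):
    """Return the verb's attested combinations, most frequent first."""
    return Counter(obj for v, obj in corpus if v == verb).most_common()

print(combinations_for("verlieren"))
# [('den Faden', 2), ('die Geduld', 1), ('das Spiel', 1)]
```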
  5. Rösener, C.: Die Stecknadel im Heuhaufen : Natürlichsprachlicher Zugang zu Volltextdatenbanken (2005) 0.02
    0.021780789 = product of:
      0.043561578 = sum of:
        0.043561578 = product of:
          0.087123156 = sum of:
            0.087123156 = weight(_text_:lexikon in 548) [ClassicSimilarity], result of:
              0.087123156 = score(doc=548,freq=2.0), product of:
                0.31453675 = queryWeight, product of:
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.05018503 = queryNorm
                0.2769888 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.03125 = fieldNorm(doc=548)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
     5: Interaction 5.1 Question answering and dialogue systems: research and projects 5.2 Representation and visualization of knowledge 5.3 The dialogue system within the LeWi project 5.4 Result display and answer presentation in the LeWi context 6: Test environments and results 7: Results and outlook 7.1 Starting situation 7.2 Conclusions 7.3 Outlook Appendix A Excerpts from the coarse and fine classification of the BMM Appendix B MPRO - formal description of the most important features ... Appendix C Question typology with example sentences (excerpt) Appendix D Semantic features in the morphological lexicon (excerpt) Appendix E Example rules for question-type assignment Appendix F List of the possible searches in the LeWi dialogue module (excerpt) Appendix G Complete dialogue tree at the start of the project Appendix H Status states for determining follow-up questions (excerpt)
  6. Kiss, T.: Anmerkungen zur scheinbaren Konkurrenz von numerischen und symbolischen Verfahren in der Computerlinguistik (2002) 0.02
    0.021780789 = product of:
      0.043561578 = sum of:
        0.043561578 = product of:
          0.087123156 = sum of:
            0.087123156 = weight(_text_:lexikon in 1752) [ClassicSimilarity], result of:
              0.087123156 = score(doc=1752,freq=2.0), product of:
                0.31453675 = queryWeight, product of:
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.05018503 = queryNorm
                0.2769888 = fieldWeight in 1752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.2675414 = idf(docFreq=227, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1752)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In conclusion it can be said that one should not really speak of a superiority of statistical methods, at least in the area of tagging. Moreover, the opposition between rule-based and numerical methods must be softened here, since the statistical methods also use rule systems. Even when learning without a reference corpus, at least a mapping of words onto a lexicon, or a heuristic, rule-based recognition of unknown words, is necessary. Statistical methods do have their justification (and this was probably not emphasized enough here); they are useful: especially in comparison with introspection, they permit a more direct and broader approach to the phenomenon of language. The large electronic corpora now available practically demand that language also be examined with statistical means. However, statistical methods cannot replace rule-based methods. The dictum that "there is no other way" must therefore be firmly contradicted. That statistical methods are currently so much in vogue and make rule-based methods look like an old episode of Dallas may well also be because too many representatives of the old paradigm cannot muster the energy to open themselves to the new paradigm far enough for a critical engagement with the new on the basis of the old to become possible. Mathematics is a respected science because it is difficult; statistical language processing is a feared discipline because its properties are often not examined thoroughly enough.
  7. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    0.020398136 = product of:
      0.040796272 = sum of:
        0.040796272 = product of:
          0.081592545 = sum of:
            0.081592545 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.081592545 = score(doc=4888,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  8. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.02
    0.020398136 = product of:
      0.040796272 = sum of:
        0.040796272 = product of:
          0.081592545 = sum of:
            0.081592545 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.081592545 = score(doc=5429,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  9. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.02
    0.016998447 = product of:
      0.033996895 = sum of:
        0.033996895 = product of:
          0.06799379 = sum of:
            0.06799379 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.06799379 = score(doc=5428,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  10. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.01
    0.012019717 = product of:
      0.024039434 = sum of:
        0.024039434 = product of:
          0.04807887 = sum of:
            0.04807887 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.04807887 = score(doc=2541,freq=4.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  11. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.01
    0.0118989125 = product of:
      0.023797825 = sum of:
        0.023797825 = product of:
          0.04759565 = sum of:
            0.04759565 = weight(_text_:22 in 5483) [ClassicSimilarity], result of:
              0.04759565 = score(doc=5483,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.2708308 = fieldWeight in 5483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10.12.2000 18:22:35
  12. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    0.0118989125 = product of:
      0.023797825 = sum of:
        0.023797825 = product of:
          0.04759565 = sum of:
            0.04759565 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.04759565 = score(doc=156,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8. 3.2007 19:55:22
  13. Paolillo, J.C.: Linguistics and the information sciences (2009) 0.01
    0.0118989125 = product of:
      0.023797825 = sum of:
        0.023797825 = product of:
          0.04759565 = sum of:
            0.04759565 = weight(_text_:22 in 3840) [ClassicSimilarity], result of:
              0.04759565 = score(doc=3840,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.2708308 = fieldWeight in 3840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3840)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    27. 8.2011 14:22:33
  14. Schneider, R.: Web 3.0 ante portas? : Integration von Social Web und Semantic Web (2008) 0.01
    0.0118989125 = product of:
      0.023797825 = sum of:
        0.023797825 = product of:
          0.04759565 = sum of:
            0.04759565 = weight(_text_:22 in 4184) [ClassicSimilarity], result of:
              0.04759565 = score(doc=4184,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.2708308 = fieldWeight in 4184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4184)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2011 10:38:28
  15. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.01
    0.010199068 = product of:
      0.020398136 = sum of:
        0.020398136 = product of:
          0.040796272 = sum of:
            0.040796272 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.040796272 = score(doc=4436,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 2.2000 14:22:39
  16. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.01
    0.010199068 = product of:
      0.020398136 = sum of:
        0.020398136 = product of:
          0.040796272 = sum of:
            0.040796272 = weight(_text_:22 in 1746) [ClassicSimilarity], result of:
              0.040796272 = score(doc=1746,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.23214069 = fieldWeight in 1746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1746)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2015 9:17:30
  17. Sienel, J.; Weiss, M.; Laube, M.: Sprachtechnologien für die Informationsgesellschaft des 21. Jahrhunderts (2000) 0.01
    0.008499224 = product of:
      0.016998447 = sum of:
        0.016998447 = product of:
          0.033996895 = sum of:
            0.033996895 = weight(_text_:22 in 5557) [ClassicSimilarity], result of:
              0.033996895 = score(doc=5557,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.19345059 = fieldWeight in 5557, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5557)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.12.2000 13:22:17
  18. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.01
    0.008499224 = product of:
      0.016998447 = sum of:
        0.016998447 = product of:
          0.033996895 = sum of:
            0.033996895 = weight(_text_:22 in 4900) [ClassicSimilarity], result of:
              0.033996895 = score(doc=4900,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.19345059 = fieldWeight in 4900, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4900)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Schürmann, H.: Software scannt Radio- und Fernsehsendungen : Recherche in Nachrichtenarchiven erleichtert (2001) 0.01
    0.0059494562 = product of:
      0.0118989125 = sum of:
        0.0118989125 = product of:
          0.023797825 = sum of:
            0.023797825 = weight(_text_:22 in 5759) [ClassicSimilarity], result of:
              0.023797825 = score(doc=5759,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.1354154 = fieldWeight in 5759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5759)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Handelsblatt. Nr.79 vom 24.4.2001, S.22
  20. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    0.0059494562 = product of:
      0.0118989125 = sum of:
        0.0118989125 = product of:
          0.023797825 = sum of:
            0.023797825 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.023797825 = score(doc=1616,freq=2.0), product of:
                0.17573942 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05018503 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June of 2000 ("Report: China Internet users double to 17 million," CNN.com, July 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatlas.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy needs in the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines has been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49). However, research on crossing language boundaries, especially between European languages and Oriental languages, is still in its initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using a Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language different from that of the input term. The direct translation of the input term can also be retrieved in most cases.
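The abstract names two concrete steps: co-occurrence analysis over the parallel corpus, then a Hopfield network that spreads activation from a query term to related, possibly other-language, terms. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the five terms, all counts, and the fan-out and iteration parameters are invented, and only the general scheme (asymmetric association weights from co-occurrence counts, activation spread until it settles) follows the approach the abstract describes.

```python
import numpy as np

terms = ["court", "justice", "ordinance", "法院", "司法"]

occur = np.array([40.0, 25.0, 30.0, 38.0, 20.0])  # invented term frequencies
cooccur = np.array([                               # invented aligned-passage counts
    [40, 12,  8, 30,  5],
    [12, 25,  4,  6, 14],
    [ 8,  4, 30,  9,  3],
    [30,  6,  9, 38,  7],
    [ 5, 14,  3,  7, 20],
], dtype=float)
w = cooccur / occur[:, None]   # w[i][j]: association strength from term i to term j

def expand(seed, rounds=10, fanout=0.8):
    """Clamp the seed term on, spread activation through w, rank the rest."""
    u = np.zeros(len(terms))
    u[terms.index(seed)] = 1.0
    for _ in range(rounds):
        u = np.tanh(fanout * (w.T @ u))  # squash the incoming activation
        u[terms.index(seed)] = 1.0       # keep the seed clamped
    ranked = sorted(zip(terms, u), key=lambda t: -t[1])
    return [(t, round(float(a), 3)) for t, a in ranked if t != seed]

print(expand("court"))  # suggested neighbours, Chinese terms included
```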