Search (141 results, page 1 of 8)

  • theme_ss:"Computerlinguistik"
  1. Drouin, P.: Term extraction using non-technical corpora as a point of leverage (2003) 0.04
    0.03628321 = product of:
      0.18141605 = sum of:
        0.18141605 = weight(_text_:2003 in 8797) [ClassicSimilarity], result of:
          0.18141605 = score(doc=8797,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            1.2130582 = fieldWeight in 8797, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.125 = fieldNorm(doc=8797)
      0.2 = coord(1/5)
    
    Source
    Terminology. 9(2003) no.1, S.99-115
    Year
    2003
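
The score breakdown printed under each result is Lucene "explain" output for the ClassicSimilarity (TF-IDF) formula: each clause score is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the final document score is the clause sum multiplied by the coord factor. The short sketch below reproduces the arithmetic for result 1 from the values printed in the tree; the tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq + 1)) forms are Lucene's ClassicSimilarity defaults and are assumed here rather than shown in the listing.

```python
import math

# Reproduce the explain tree of result 1 (Drouin 2003), term "2003".
freq = 5.0              # termFreq of "2003" in the matched field
doc_freq = 1566         # docFreq from the explain output
max_docs = 44218        # maxDocs from the explain output
query_norm = 0.034459375
field_norm = 0.125      # fieldNorm(doc=8797)
coord = 1 / 5           # coord(1/5): one of five query clauses matched

tf = math.sqrt(freq)                              # 2.236068
idf = 1 + math.log(max_docs / (doc_freq + 1))     # 4.339969
query_weight = idf * query_norm                   # 0.14955263
field_weight = tf * idf * field_norm              # 1.2130582
clause_score = query_weight * field_weight        # 0.18141605
final_score = coord * clause_score                # 0.03628321

print(f"clause={clause_score:.8f} final={final_score:.8f}")
```

Running the sketch reproduces the listed values 0.18141605 and 0.03628321 to the printed precision.
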
  2. Chung, T.M.: ¬A corpus comparison approach for terminology extraction (2003) 0.04
    0.03628321 = product of:
      0.18141605 = sum of:
        0.18141605 = weight(_text_:2003 in 4072) [ClassicSimilarity], result of:
          0.18141605 = score(doc=4072,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            1.2130582 = fieldWeight in 4072, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.125 = fieldNorm(doc=4072)
      0.2 = coord(1/5)
    
    Source
    Terminology. 9(2003) no.2, S.221-246
    Year
    2003
  3. Bernth, A.; McCord, M.; Warburton, K.: Terminology extraction for global content management (2003) 0.04
    0.03628321 = product of:
      0.18141605 = sum of:
        0.18141605 = weight(_text_:2003 in 4122) [ClassicSimilarity], result of:
          0.18141605 = score(doc=4122,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            1.2130582 = fieldWeight in 4122, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.125 = fieldNorm(doc=4122)
      0.2 = coord(1/5)
    
    Source
    Terminology. 9(2003) no.1, S.51-69
    Year
    2003
  4. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.03
    0.03302392 = product of:
      0.082559794 = sum of:
        0.07477851 = weight(_text_:lexikon in 734) [ClassicSimilarity], result of:
          0.07477851 = score(doc=734,freq=2.0), product of:
            0.21597555 = queryWeight, product of:
              6.2675414 = idf(docFreq=227, maxDocs=44218)
              0.034459375 = queryNorm
            0.346236 = fieldWeight in 734, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2675414 = idf(docFreq=227, maxDocs=44218)
              0.0390625 = fieldNorm(doc=734)
        0.007781283 = product of:
          0.023343848 = sum of:
            0.023343848 = weight(_text_:22 in 734) [ClassicSimilarity], result of:
              0.023343848 = score(doc=734,freq=2.0), product of:
                0.12067086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034459375 = queryNorm
                0.19345059 = fieldWeight in 734, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=734)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    How do children learn to speak? What clues do their very errors during language acquisition give about the course of the learning process, true to the motto "Kids say the darndest things"? And how do computers help, or why have they so far failed, in simulating the neural networks that contribute to the intricate fabric of human language? In his new book Wörter und Regeln, the well-known US cognitive scientist Steven Pinker (Der Sprachinstinkt) has once again undertaken an exploration of the realm of language that is as informative as it is entertaining. What makes it particularly exciting and worth reading: the professor at the Massachusetts Institute of Technology confidently illuminates both natural-science and humanities aspects. On the one hand he conveys linguistic fundamentals in the footsteps of Ferdinand de Saussure, such as those of a generative grammar, offers an excursion through the history of language, and devotes a chapter of its own to "the horrors of the German language". On the other hand he does not leave out the latest imaging techniques, which show what happens in the brain during language processing. Pinker's theory, which emerges from this puzzle of diverse aspects: at its core, language consists of two components, a mental lexicon of remembered words and a mental grammar of various combinatorial rules. Concretely, this means that we memorize familiar items and their graded, intersecting features, but we also generate new mental products by applying rules. From precisely this, Pinker concludes, arise the richness and the enormous expressive power of our language.
    Date
    19. 7.2002 14:22:31
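
Result 4 above also shows how multi-clause scores combine in this output: each matching clause contributes its own weight, a nested Boolean sub-query carries its own coord factor (here 1/3 for the "22" clause), and the outer coord(2/5) scales the sum because two of five top-level clauses matched. Plugging in the values printed in the tree:

\[
0.4 \times \left( 0.07477851 + \tfrac{1}{3} \times 0.023343848 \right)
  = 0.4 \times 0.082559794 \approx 0.03302392
\]

which reproduces the listed total of 0.03302392.
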
  5. Nakagawa, H.; Mori, T.: Automatic term recognition based on statistics of compound nouns and their components (2003) 0.03
    0.031747807 = product of:
      0.15873903 = sum of:
        0.15873903 = weight(_text_:2003 in 4123) [ClassicSimilarity], result of:
          0.15873903 = score(doc=4123,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            1.0614259 = fieldWeight in 4123, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.109375 = fieldNorm(doc=4123)
      0.2 = coord(1/5)
    
    Source
    Terminology. 9(2003) no.2, S.201-219
    Year
    2003
  6. Egger, W.: Helferlein für jedermann : Elektronische Wörterbücher (2004) 0.03
    0.029911404 = product of:
      0.14955702 = sum of:
        0.14955702 = weight(_text_:lexikon in 1501) [ClassicSimilarity], result of:
          0.14955702 = score(doc=1501,freq=2.0), product of:
            0.21597555 = queryWeight, product of:
              6.2675414 = idf(docFreq=227, maxDocs=44218)
              0.034459375 = queryNorm
            0.692472 = fieldWeight in 1501, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2675414 = idf(docFreq=227, maxDocs=44218)
              0.078125 = fieldNorm(doc=1501)
      0.2 = coord(1/5)
    
    Series
    Software: Der große Lexikon-Ratgeber
  7. Sprachtechnologie für die multilinguale Kommunikation : Textproduktion, Recherche, Übersetzung, Lokalisierung. Beiträge der GLDV-Frühjahrstagung 2003 (2003) 0.03
    0.02721241 = product of:
      0.13606204 = sum of:
        0.13606204 = weight(_text_:2003 in 4042) [ClassicSimilarity], result of:
          0.13606204 = score(doc=4042,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.9097937 = fieldWeight in 4042, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.09375 = fieldNorm(doc=4042)
      0.2 = coord(1/5)
    
    Year
    2003
  8. Rösener, C.: ¬Die Stecknadel im Heuhaufen : Natürlichsprachlicher Zugang zu Volltextdatenbanken (2005) 0.03
    0.026441736 = product of:
      0.06610434 = sum of:
        0.05982281 = weight(_text_:lexikon in 548) [ClassicSimilarity], result of:
          0.05982281 = score(doc=548,freq=2.0), product of:
            0.21597555 = queryWeight, product of:
              6.2675414 = idf(docFreq=227, maxDocs=44218)
              0.034459375 = queryNorm
            0.2769888 = fieldWeight in 548, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2675414 = idf(docFreq=227, maxDocs=44218)
              0.03125 = fieldNorm(doc=548)
        0.0062815323 = product of:
          0.018844597 = sum of:
            0.018844597 = weight(_text_:29 in 548) [ClassicSimilarity], result of:
              0.018844597 = score(doc=548,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.15546128 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=548)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Content
    5: Interaction 5.1 Question-answering and dialogue systems: research and projects 5.2 Representation and visualization of knowledge 5.3 The dialogue system within the LeWi project 5.4 Result display and answer presentation in the LeWi context 6: Test environments and results 7: Results and outlook 7.1 Starting situation 7.2 Conclusions 7.3 Outlook Appendix A Excerpts from the coarse and fine classification of the BMM Appendix B MPRO - formal description of the most important features ... Appendix C Question typology with example sentences (excerpt) Appendix D Semantic features in the morphological lexicon (excerpt) Appendix E Example rules for question-type assignment Appendix F List of the possible searches in the LeWi dialogue module (excerpt) Appendix G Complete dialogue tree at the start of the project Appendix H Status states for determining follow-up questions (excerpt)
    Date
    29. 3.2009 11:11:45
  9. Becker, D.: Automated language processing (1981) 0.02
    0.018340008 = product of:
      0.09170004 = sum of:
        0.09170004 = product of:
          0.2751001 = sum of:
            0.2751001 = weight(_text_:becker in 287) [ClassicSimilarity], result of:
              0.2751001 = score(doc=287,freq=2.0), product of:
                0.23157229 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.034459375 = queryNorm
                1.1879665 = fieldWeight in 287, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.125 = fieldNorm(doc=287)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
  10. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.02
    0.018052662 = product of:
      0.045131654 = sum of:
        0.039684758 = weight(_text_:2003 in 1616) [ClassicSimilarity], result of:
          0.039684758 = score(doc=1616,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.26535648 = fieldWeight in 1616, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1616)
        0.005446898 = product of:
          0.016340693 = sum of:
            0.016340693 = weight(_text_:22 in 1616) [ClassicSimilarity], result of:
              0.016340693 = score(doc=1616,freq=2.0), product of:
                0.12067086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034459375 = queryNorm
                0.1354154 = fieldWeight in 1616, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1616)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second largest global at-home Internet population in 2002 (the US's Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatias.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy the needs of the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines has been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50.; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.). However, research on crossing language boundaries, especially between European languages and Oriental languages, is still at an initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus using the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language other than that of the input term. The direct translation of the input term can also be retrieved in most cases.
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.7, S.671-682
    Year
    2003
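
The Yang and Luk abstract above (result 10) builds its cross-lingual thesaurus from an English/Chinese parallel corpus via co-occurrence analysis, with a Hopfield network used to expand associations. The fragment below is only a minimal sketch of the co-occurrence step under simplifying assumptions: sentence-aligned text, pre-segmented terms, hypothetical sample data, and a plain conditional co-occurrence count as the association weight. It is not the authors' implementation and omits the Hopfield-network stage entirely.

```python
from collections import Counter, defaultdict

# Toy co-occurrence analysis over a sentence-aligned parallel corpus.
# Each pair holds pre-extracted English and Chinese terms (hypothetical data);
# the system described in the abstract works on HKSAR Department of Justice
# legal texts and refines the associations with a Hopfield network.
aligned_pairs = [
    (["court", "judgment"], ["法院", "判决"]),
    (["court", "appeal"], ["法院", "上诉"]),
    (["judgment"], ["判决"]),
]

cooc = defaultdict(Counter)   # cooc[english_term][chinese_term] -> count
en_freq = Counter()           # how many aligned pairs contain each English term

for en_terms, zh_terms in aligned_pairs:
    for e in set(en_terms):
        en_freq[e] += 1
        for z in set(zh_terms):
            cooc[e][z] += 1

def associations(term, top_n=3):
    """Chinese terms ranked by a simple conditional co-occurrence weight."""
    return [(z, c / en_freq[term]) for z, c in cooc[term].most_common(top_n)]

print(associations("court"))   # e.g. [('法院', 1.0), ('判决', 0.5), ('上诉', 0.5)]
```
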
  11. dpa: 14 Forscher mit viel Geld angelockt : Wolfgang-Paul-Preis (2001) 0.02
    0.017946843 = product of:
      0.08973421 = sum of:
        0.08973421 = weight(_text_:lexikon in 6814) [ClassicSimilarity], result of:
          0.08973421 = score(doc=6814,freq=2.0), product of:
            0.21597555 = queryWeight, product of:
              6.2675414 = idf(docFreq=227, maxDocs=44218)
              0.034459375 = queryNorm
            0.4154832 = fieldWeight in 6814, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.2675414 = idf(docFreq=227, maxDocs=44218)
              0.046875 = fieldNorm(doc=6814)
      0.2 = coord(1/5)
    
    Content
    Darin. "Die Sprachwissenschaftlerin Christiane Fellbaum (dpa-Bild) wird ihr Preisgeld für das an der Berlin-Brandenburgischen Akademie der Wissenschaften zu erstellende "Digitale Wörterbuch der Deutschen Sprache des 20. Jahrhunderts" einsetzen. Sie setzt mit ihrem Computer dort an, wo konventionelle Wörterbücher nicht mehr mithalten können. Sie stellt per Knopfdruck Wortverbindungen her, die eine Sprache so reich an Bildern und Vorstellungen - und damit einzigartig - machen. Ihr elektronisches Lexikon aus über 500 Millionen Wörtern soll später als Datenbank zugänglich sein. Seine Grundlage ist die deutsche Sprache der vergangenen hundert Jahre - ein repräsentativer Querschnitt, zusammengestellt aus Literatur, Zeitungsdeutsch, Fachbuchsprache, Werbetexten und niedergeschriebener Umgangssprache. Wo ein Wörterbuch heute nur ein Wort mit Synonymen oder wenigen Verwendungsmöglichkeiten präsentiert, spannt die Forscherin ein riesiges Netz von Wortverbindungen. Bei Christiane Fellbaums Systematik heißt es beispielsweise nicht nur "verlieren", sondern auch noch "den Faden" oder "die Geduld" verlieren - samt allen möglichen weiteren Kombinationen, die der Computer wie eine Suchmaschine in seinen gespeicherten Texten findet."
  12. Sidhom, S.; Hassoun, M.: Morpho-syntactic parsing to text mining environment : NP recognition model to knowledge visualization and information (2003) 0.02
    0.017565534 = product of:
      0.08782767 = sum of:
        0.08782767 = weight(_text_:2003 in 3546) [ClassicSimilarity], result of:
          0.08782767 = score(doc=3546,freq=3.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.5872693 = fieldWeight in 3546, product of:
              1.7320508 = tf(freq=3.0), with freq of:
                3.0 = termFreq=3.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.078125 = fieldNorm(doc=3546)
      0.2 = coord(1/5)
    
    Year
    2003
  13. Pepper, S.; Arnaud, P.J.L.: Absolutely PHAB : toward a general model of associative relations (2020) 0.02
    0.017482966 = product of:
      0.043707415 = sum of:
        0.0358555 = weight(_text_:2003 in 103) [ClassicSimilarity], result of:
          0.0358555 = score(doc=103,freq=2.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.2397517 = fieldWeight in 103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.0390625 = fieldNorm(doc=103)
        0.007851916 = product of:
          0.023555748 = sum of:
            0.023555748 = weight(_text_:29 in 103) [ClassicSimilarity], result of:
              0.023555748 = score(doc=103,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.19432661 = fieldWeight in 103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=103)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    There have been many attempts at classifying the semantic modification relations (R) of N + N compounds, but this work has not led to the acceptance of a definitive scheme, so devising a reusable classification is a worthwhile aim. The scope of this undertaking is extended to other binominal lexemes, i.e. units that contain two thing-morphemes without explicitly stating R, like prepositional units, N + relational adjective units, etc. The 25-relation taxonomy of Bourque (2014) was tested against over 15,000 binominal lexemes from 106 languages and extended to a 29-relation scheme ("Bourque2") through the introduction of two new reversible relations. Bourque2 is then mapped onto Hatcher's (1960) four-relation scheme (extended by the addition of a fifth relation, similarity, as "Hatcher2"). This results in a two-tier system usable at different degrees of granularity. On account of its semantic proximity to compounding, metonymy is then taken into account, following Janda's (2011) suggestion that it plays a role in word formation; Peirsman and Geeraerts' (2006) inventory of 23 metonymic patterns is mapped onto Bourque2, confirming the identity of metonymic and binominal modification relations. Finally, Blank's (2003) and Koch's (2001) work on lexical semantics justifies the addition to the scheme of a third, superordinate level which comprises the three Aristotelian principles of similarity, contiguity and contrast.
  14. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.02
    0.016870365 = product of:
      0.04217591 = sum of:
        0.03283837 = product of:
          0.16419186 = sum of:
            0.16419186 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.16419186 = score(doc=562,freq=2.0), product of:
                0.29214695 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.034459375 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.2 = coord(1/5)
        0.009337539 = product of:
          0.028012617 = sum of:
            0.028012617 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.028012617 = score(doc=562,freq=2.0), product of:
                0.12067086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  15. Langenscheidt und TRADOS binden über drei Millionen Übersetzungen in Terminologie-Datenbanken ein (2003) 0.02
    0.016099079 = product of:
      0.08049539 = sum of:
        0.08049539 = weight(_text_:2003 in 2003) [ClassicSimilarity], result of:
          0.08049539 = score(doc=2003,freq=7.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.5382412 = fieldWeight in 2003, product of:
              2.6457512 = tf(freq=7.0), with freq of:
                7.0 = termFreq=7.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.046875 = fieldNorm(doc=2003)
      0.2 = coord(1/5)
    
    Content
    "Die Langenscheidt KG, München, bietet ab Herbst 2003 über drei Millionen übersetzungen aus ihren Wörterbüchern und Fachwörterbüchern für die Terminologie-Datenbanken "MultiTerm" des weltweit führenden Anbieters von Sprachtechnologie, TRADOS, an. Die Translation-Memory- und Terminologiesoftware von TRADOS hat einen Marktanteil von über achtzig Prozent und wird vor allem von internationalen Firmen und professionellen Übersetzern für die Erarbeitung wertvoller mehrsprachiger Inhalte verwendet. In der neuesten Version der Terminologie-Management-Software "MultiTerm" können nun die Wortbestände von sieben allgemeinsprachlichen und elf Fachwörterbüchern von Langenscheidt eingebunden und somit die Datenbank bei maximaler Ausschöpfung um über drei Millionen Stichworte und Wendungen erweitert werden. Dies erleichtert nicht nur die Terminologiearbeit erheblich, sondem ermöglicht durch die einheitliche Arbeitsoberfläche auch zeitsparendes und komfortables Übersetzen. MultiTerm ist sowohl als Einzelplatzversion wie auch als serverbasierte Netzwerk- oder Onlineversion zu erwerben. Interessenten erhalten unter www.langenscheidt.de/b2b/ebusiness oder www. trados.com/multiterm bzw. www. trados.com/contact weitere Informationen sowie die jeweiligen Ansprechpartner."
    Source
    Information - Wissenschaft und Praxis. 54(2003) H.8, S.452
    Year
    2003
  16. Neumann, H.: Inszenierung und Metabotschaften eines periodisch getakteten Fernsehauftritts : Die Neujahrsansprachen der Bundeskanzler Helmut Kohl und Gerhard Schröder im Vergleich (2003) 0.02
    0.015873903 = product of:
      0.079369515 = sum of:
        0.079369515 = weight(_text_:2003 in 1632) [ClassicSimilarity], result of:
          0.079369515 = score(doc=1632,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.53071296 = fieldWeight in 1632, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1632)
      0.2 = coord(1/5)
    
    Source
    Information - Wissenschaft und Praxis. 54(2003) H.5, S.261-272
    Year
    2003
  17. Toutanova, K.; Klein, D.; Manning, C.D.; Singer, Y.: Feature-rich Part-of-Speech Tagging with a cyclic dependency network (2003) 0.02
    0.015873903 = product of:
      0.079369515 = sum of:
        0.079369515 = weight(_text_:2003 in 1059) [ClassicSimilarity], result of:
          0.079369515 = score(doc=1059,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.53071296 = fieldWeight in 1059, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1059)
      0.2 = coord(1/5)
    
    Source
    Proceedings of HLT-NAACL, 2003
    Year
    2003
  18. Bookstein, A.; Kulyukin, V.; Raita, T.; Nicholson, J.: Adapting measures of clumping strength to assess term-term similarity (2003) 0.01
    0.013606205 = product of:
      0.06803102 = sum of:
        0.06803102 = weight(_text_:2003 in 1609) [ClassicSimilarity], result of:
          0.06803102 = score(doc=1609,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.45489684 = fieldWeight in 1609, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.046875 = fieldNorm(doc=1609)
      0.2 = coord(1/5)
    
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.7, S.611-620
    Year
    2003
  19. Karakos, A.: Greeklish : an experimental interface for automatic transliteration (2003) 0.01
    0.013606205 = product of:
      0.06803102 = sum of:
        0.06803102 = weight(_text_:2003 in 1820) [ClassicSimilarity], result of:
          0.06803102 = score(doc=1820,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.45489684 = fieldWeight in 1820, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.046875 = fieldNorm(doc=1820)
      0.2 = coord(1/5)
    
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.11, S.1069-1074
    Year
    2003
  20. Zhou, L.; Zhang, D.: NLPIR: a theoretical framework for applying Natural Language Processing to information retrieval (2003) 0.01
    0.013606205 = product of:
      0.06803102 = sum of:
        0.06803102 = weight(_text_:2003 in 5148) [ClassicSimilarity], result of:
          0.06803102 = score(doc=5148,freq=5.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.45489684 = fieldWeight in 5148, product of:
              2.236068 = tf(freq=5.0), with freq of:
                5.0 = termFreq=5.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.046875 = fieldNorm(doc=5148)
      0.2 = coord(1/5)
    
    Source
    Journal of the American Society for Information Science and technology. 54(2003) no.2, S.115-123
    Year
    2003

Languages

  • e 100
  • d 38
  • ru 2
  • m 1

Types

  • a 119
  • el 12
  • m 12
  • s 6
  • x 3
  • p 2
  • d 1