Search (381 results, page 1 of 20)

  • theme_ss:"Computerlinguistik"
  1. Scobel, G.: GPT: Eine Software, die die Welt verändert (2023) 0.09
    0.08741832 = product of:
      0.24976663 = sum of:
        0.028581016 = weight(_text_:23 in 839) [ClassicSimilarity], result of:
          0.028581016 = score(doc=839,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 839, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=839)
        0.028581016 = weight(_text_:23 in 839) [ClassicSimilarity], result of:
          0.028581016 = score(doc=839,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 839, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=839)
        0.049522188 = weight(_text_:software in 839) [ClassicSimilarity], result of:
          0.049522188 = score(doc=839,freq=4.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.6198675 = fieldWeight in 839, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=839)
        0.015457011 = weight(_text_:und in 839) [ClassicSimilarity], result of:
          0.015457011 = score(doc=839,freq=4.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.34630734 = fieldWeight in 839, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=839)
        0.028581016 = weight(_text_:23 in 839) [ClassicSimilarity], result of:
          0.028581016 = score(doc=839,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 839, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=839)
        0.049522188 = weight(_text_:software in 839) [ClassicSimilarity], result of:
          0.049522188 = score(doc=839,freq=4.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.6198675 = fieldWeight in 839, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=839)
        0.049522188 = weight(_text_:software in 839) [ClassicSimilarity], result of:
          0.049522188 = score(doc=839,freq=4.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.6198675 = fieldWeight in 839, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=839)
      0.35 = coord(7/20)
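    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: every matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm with tf = sqrt(termFreq), and the final score multiplies the sum of all matching clauses by the coordination factor coord(matching/total). As a minimal sketch in plain Python (values copied from the explanation above; not part of the search system itself), the score of this first hit can be reproduced:

        from math import sqrt

        def term_weight(freq, idf, query_norm, field_norm):
            # queryWeight = idf * queryNorm; fieldWeight = sqrt(freq) * idf * fieldNorm
            return (idf * query_norm) * (sqrt(freq) * idf * field_norm)

        QUERY_NORM, FIELD_NORM = 0.02013827, 0.078125
        w_23       = term_weight(2.0, 3.5840597, QUERY_NORM, FIELD_NORM)  # ~0.028581
        w_software = term_weight(4.0, 3.9671519, QUERY_NORM, FIELD_NORM)  # ~0.049522
        w_und      = term_weight(4.0, 2.216367,  QUERY_NORM, FIELD_NORM)  # ~0.015457

        total = 3 * w_23 + 3 * w_software + w_und  # "sum of" ~0.249767
        print(total * 7 / 20)                      # coord(7/20) -> ~0.0874183

    The 0.09 shown next to the title is simply this value rounded to two decimals.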
    
    Abstract
    GPT-3 is one of those developments that gain influence and reach within just a few months. The software will have a massive impact on the economy and society.
    Date
    23. 1.2023 17:28:51
  2. Leighton, T.: ChatGPT und Künstliche Intelligenz : Utopie oder Dystopie? (2023) 0.08
    0.0753323 = product of:
      0.21523514 = sum of:
        0.028581016 = weight(_text_:23 in 908) [ClassicSimilarity], result of:
          0.028581016 = score(doc=908,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=908)
        0.028581016 = weight(_text_:23 in 908) [ClassicSimilarity], result of:
          0.028581016 = score(doc=908,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=908)
        0.03501747 = weight(_text_:software in 908) [ClassicSimilarity], result of:
          0.03501747 = score(doc=908,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.43831247 = fieldWeight in 908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=908)
        0.02443968 = weight(_text_:und in 908) [ClassicSimilarity], result of:
          0.02443968 = score(doc=908,freq=10.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.54756 = fieldWeight in 908, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=908)
        0.028581016 = weight(_text_:23 in 908) [ClassicSimilarity], result of:
          0.028581016 = score(doc=908,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=908)
        0.03501747 = weight(_text_:software in 908) [ClassicSimilarity], result of:
          0.03501747 = score(doc=908,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.43831247 = fieldWeight in 908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=908)
        0.03501747 = weight(_text_:software in 908) [ClassicSimilarity], result of:
          0.03501747 = score(doc=908,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.43831247 = fieldWeight in 908, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=908)
      0.35 = coord(7/20)
    
    Abstract
    The tool keeps getting more sophisticated; it writes software and invents the most incredible fictions. How "clever" is it? What about the fears it raises? And what about morality?
    Date
    6. 1.2023 21:19:23
    Series
    Telepolis / Kultur und Medien
    Source
    https://www.heise.de/tp/features/ChatGPT-und-Kuenstliche-Intelligenz-Utopie-oder-Dystopie-7445181.html?view=print
  3. Sokirko, A.V.: Programnaya realizatsiya Russkogo abshchesemanticheskogo slovarya (1997) 0.07
    0.07029289 = product of:
      0.23430961 = sum of:
        0.028581016 = weight(_text_:23 in 2258) [ClassicSimilarity], result of:
          0.028581016 = score(doc=2258,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 2258, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=2258)
        0.028581016 = weight(_text_:23 in 2258) [ClassicSimilarity], result of:
          0.028581016 = score(doc=2258,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 2258, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=2258)
        0.049522188 = weight(_text_:software in 2258) [ClassicSimilarity], result of:
          0.049522188 = score(doc=2258,freq=4.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.6198675 = fieldWeight in 2258, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=2258)
        0.028581016 = weight(_text_:23 in 2258) [ClassicSimilarity], result of:
          0.028581016 = score(doc=2258,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 2258, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=2258)
        0.049522188 = weight(_text_:software in 2258) [ClassicSimilarity], result of:
          0.049522188 = score(doc=2258,freq=4.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.6198675 = fieldWeight in 2258, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=2258)
        0.049522188 = weight(_text_:software in 2258) [ClassicSimilarity], result of:
          0.049522188 = score(doc=2258,freq=4.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.6198675 = fieldWeight in 2258, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=2258)
      0.3 = coord(6/20)
    
    Abstract
    Discusses the Dolphi2 for Windows software, which has been used for the development of the Russian Semantic Dictionary ROSS. Although not a relational database as such, Dolphi actively uses standard objects of relational databases.
    Footnote
    Translated title: Software for the Russian Semantic Dictionary
    Source
    Nauchno-Tekhnicheskaya Informatsiya; Series 2. 1997, no.12, S.20-23
  4. Thiel, M.: Bedingt wahrscheinliche Syntaxbäume (2006) 0.06
    0.064822845 = product of:
      0.14405078 = sum of:
        0.0114324065 = weight(_text_:23 in 6069) [ClassicSimilarity], result of:
          0.0114324065 = score(doc=6069,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.15839456 = fieldWeight in 6069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.03125 = fieldNorm(doc=6069)
        0.0114324065 = weight(_text_:23 in 6069) [ClassicSimilarity], result of:
          0.0114324065 = score(doc=6069,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.15839456 = fieldWeight in 6069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.03125 = fieldNorm(doc=6069)
        0.014006989 = weight(_text_:software in 6069) [ClassicSimilarity], result of:
          0.014006989 = score(doc=6069,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.17532499 = fieldWeight in 6069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=6069)
        0.018548414 = weight(_text_:und in 6069) [ClassicSimilarity], result of:
          0.018548414 = score(doc=6069,freq=36.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.41556883 = fieldWeight in 6069, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=6069)
        0.0114324065 = weight(_text_:23 in 6069) [ClassicSimilarity], result of:
          0.0114324065 = score(doc=6069,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.15839456 = fieldWeight in 6069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.03125 = fieldNorm(doc=6069)
        0.014006989 = weight(_text_:software in 6069) [ClassicSimilarity], result of:
          0.014006989 = score(doc=6069,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.17532499 = fieldWeight in 6069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=6069)
        0.033800744 = weight(_text_:methoden in 6069) [ClassicSimilarity], result of:
          0.033800744 = score(doc=6069,freq=4.0), product of:
            0.10436003 = queryWeight, product of:
              5.1821747 = idf(docFreq=674, maxDocs=44218)
              0.02013827 = queryNorm
            0.32388592 = fieldWeight in 6069, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1821747 = idf(docFreq=674, maxDocs=44218)
              0.03125 = fieldNorm(doc=6069)
        0.015383439 = weight(_text_:der in 6069) [ClassicSimilarity], result of:
          0.015383439 = score(doc=6069,freq=24.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.34197432 = fieldWeight in 6069, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.03125 = fieldNorm(doc=6069)
        0.014006989 = weight(_text_:software in 6069) [ClassicSimilarity], result of:
          0.014006989 = score(doc=6069,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.17532499 = fieldWeight in 6069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=6069)
      0.45 = coord(9/20)
    
    Abstract
    It is argued that probabilistic context-free grammars and other models are not sufficient to solve the problems that arise in parsing. To address them, the following hypothesis is put forward: the probabilities of individual readings and of different nodes in a syntax tree depend on one another. Disambiguating one reading or node affects the probability of other readings or nodes. All readings and syntax trees are therefore integrated into a single graph. If the probabilities are indeed interdependent, it is assumed that Bayes' theory of conditional probability should, as a foundation, yield a solution. This is worked through on an example, and the hypothesis could be confirmed.
    The trend is clear: wherever it makes sense, hard-coded solutions are being replaced by soft-computing approaches. Technical and commercial domains in particular benefit from this. We find crane controls and many other applications built on fuzzy expert systems, as well as image recognition systems and credit approval decisions based on neural networks or machine learning methods (cf. Jafar-Shaghaghi 1994). One principle of these approaches is that the software automatically adapts to the specific situation and data of the application. Flexibility of adaptation and the ability to generalise to previously unseen cases are implicit in the methods. This is precisely the kind of problem that also arises in describing and, above all, in parsing natural language. In natural language processing, the vexing problem of ambiguity on several levels is added. Alternative rules usually exclude each other when applied within a sentence and are not all equally probable at the current position. This issue was pointed out early on (Thiel 1987, 137 ff.), where an attempt was made to get a grip on the probability of rules, syntax trees, categories and word semantics by means of weights. The weight of a syntax tree can, for example, simply be assigned, or it can be computed as a function of the tree from which it is derived and of the rule applied. Such a procedure is demonstrated (Thiel 1987, 152) using the example of a heuristic for the inference engine of an expert system. But even a very early publication on the analysis of natural language, in which Zimmermann played a major role, already pointed to occurrence probabilities: "statistical evaluation of sentence structure types, the structure of nominal and verbal groups ..." (Eggers et al. 1969, 18). At present, approaches to vagueness in natural language processing concentrate above all on text filtering, e.g. in spam filters, and on probabilistic context-free grammars.
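    The conditional probabilities the abstract appeals to are ordinary Bayes' rule; as a reminder (not quoted from Thiel's text), for a reading L and an already disambiguated node K in the shared graph:

        P(L | K) = P(K | L) * P(L) / P(K)

    so fixing one node re-weights every reading that depends on it, which is exactly the interdependence the hypothesis claims.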
    Date
    13.10.2006 9:35:23
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
  5. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.04
    0.036485195 = product of:
      0.12161731 = sum of:
        0.028013978 = weight(_text_:software in 1490) [ClassicSimilarity], result of:
          0.028013978 = score(doc=1490,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.35064998 = fieldWeight in 1490, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=1490)
        0.021417862 = weight(_text_:und in 1490) [ClassicSimilarity], result of:
          0.021417862 = score(doc=1490,freq=12.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.47985753 = fieldWeight in 1490, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1490)
        0.028013978 = weight(_text_:software in 1490) [ClassicSimilarity], result of:
          0.028013978 = score(doc=1490,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.35064998 = fieldWeight in 1490, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=1490)
        0.008881632 = weight(_text_:der in 1490) [ClassicSimilarity], result of:
          0.008881632 = score(doc=1490,freq=2.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.19743896 = fieldWeight in 1490, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=1490)
        0.028013978 = weight(_text_:software in 1490) [ClassicSimilarity], result of:
          0.028013978 = score(doc=1490,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.35064998 = fieldWeight in 1490, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=1490)
        0.007275887 = product of:
          0.02182766 = sum of:
            0.02182766 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.02182766 = score(doc=1490,freq=2.0), product of:
                0.07052079 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02013827 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.33333334 = coord(1/3)
      0.3 = coord(6/20)
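    The last clause of this breakdown shows how a disjunctive sub-query is scored: the inner sum (only the token "22" matched) is scaled by its own coordination factor coord(1/3) before entering the outer sum. A quick plain-Python check with the values printed above:

        query_weight = 3.5018296 * 0.02013827           # ~0.0705208
        field_weight = 1.4142135 * 3.5018296 * 0.0625   # tf * idf * fieldNorm ~0.3095209
        print(query_weight * field_weight * (1 / 3))    # coord(1/3) -> ~0.0072759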
    
    Abstract
    Morphy is a freely available software package for morphological analysis and synthesis and for context-sensitive part-of-speech tagging of German. Use of the software is not subject to any restrictions. Since further development has been discontinued, you use Morphy as is, i.e. at your own risk, without any liability or warranty and, above all, without support. Morphy is available for the Windows platform only and runs only on standalone PCs.
    Date
    22. 3.2015 9:30:24
  6. Helbig, H.; Gnörlich, C.; Leveling, J.: Natürlichsprachlicher Zugang zu Informationsanbietern im Internet und zu lokalen Datenbanken (2000) 0.04
    0.03516976 = product of:
      0.11723253 = sum of:
        0.014290508 = weight(_text_:23 in 5558) [ClassicSimilarity], result of:
          0.014290508 = score(doc=5558,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.1979932 = fieldWeight in 5558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5558)
        0.014290508 = weight(_text_:23 in 5558) [ClassicSimilarity], result of:
          0.014290508 = score(doc=5558,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.1979932 = fieldWeight in 5558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5558)
        0.015457011 = weight(_text_:und in 5558) [ClassicSimilarity], result of:
          0.015457011 = score(doc=5558,freq=16.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.34630734 = fieldWeight in 5558, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5558)
        0.014290508 = weight(_text_:23 in 5558) [ClassicSimilarity], result of:
          0.014290508 = score(doc=5558,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.1979932 = fieldWeight in 5558, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5558)
        0.04225093 = weight(_text_:methoden in 5558) [ClassicSimilarity], result of:
          0.04225093 = score(doc=5558,freq=4.0), product of:
            0.10436003 = queryWeight, product of:
              5.1821747 = idf(docFreq=674, maxDocs=44218)
              0.02013827 = queryNorm
            0.4048574 = fieldWeight in 5558, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1821747 = idf(docFreq=674, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5558)
        0.01665306 = weight(_text_:der in 5558) [ClassicSimilarity], result of:
          0.01665306 = score(doc=5558,freq=18.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.37019804 = fieldWeight in 5558, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5558)
      0.3 = coord(6/20)
    
    Abstract
    The creation of a natural language interface (NLI) that allows a user to formulate queries to information providers in his or her native language is one of the most interesting challenges in information retrieval and natural language processing. This paper describes methods for translating natural language queries into expressions of formal retrieval languages, both for information resources on the Internet and for local databases. The methods presented are part of the information retrieval system LINAS, which was developed at the Fernuniversität Hagen to offer users natural language access to local scientific and technical information as well as to information distributed across the Internet. The LINAS system differs from other systems and natural language interfaces (cf. OSIRIS or the earlier systems INTELLECT and Q&A) through the explicit inclusion of background knowledge and dedicated dialogue models in the translation process. Moreover, the system aims at a complete understanding of the natural language text, whereas other systems typically only look for keywords or particular grammatical patterns in the input. A particular focus of LINAS lies in the representation and evaluation of the semantic relations between the concepts given in the user query.
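    As a toy illustration of the kind of mapping the abstract describes (this is not LINAS's actual pipeline, which additionally uses background knowledge and dialogue models; the stopword list and function name below are made up for the example), a natural language question can be reduced to a boolean retrieval expression like this:

        STOPWORDS = {"welche", "gibt", "es", "zu", "und", "im"}

        def to_boolean_query(question: str) -> str:
            # Keep content words, drop stopwords, join with AND.
            terms = [t.strip("?,.").lower() for t in question.split()]
            return " AND ".join(t for t in terms if t and t not in STOPWORDS)

        print(to_boolean_query("Welche Artikel gibt es zu Computerlinguistik und Parsing?"))
        # -> artikel AND computerlinguistik AND parsing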
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
  7. Rolland, M.T.: ¬Ein semantikorientierter Ansatz im Bereich der Sprachverarbeitung (1995) 0.03
    0.03477063 = product of:
      0.13908252 = sum of:
        0.034297217 = weight(_text_:23 in 4445) [ClassicSimilarity], result of:
          0.034297217 = score(doc=4445,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.47518367 = fieldWeight in 4445, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.09375 = fieldNorm(doc=4445)
        0.034297217 = weight(_text_:23 in 4445) [ClassicSimilarity], result of:
          0.034297217 = score(doc=4445,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.47518367 = fieldWeight in 4445, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.09375 = fieldNorm(doc=4445)
        0.013115709 = weight(_text_:und in 4445) [ClassicSimilarity], result of:
          0.013115709 = score(doc=4445,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.29385152 = fieldWeight in 4445, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=4445)
        0.034297217 = weight(_text_:23 in 4445) [ClassicSimilarity], result of:
          0.034297217 = score(doc=4445,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.47518367 = fieldWeight in 4445, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.09375 = fieldNorm(doc=4445)
        0.023075162 = weight(_text_:der in 4445) [ClassicSimilarity], result of:
          0.023075162 = score(doc=4445,freq=6.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.5129615 = fieldWeight in 4445, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.09375 = fieldNorm(doc=4445)
      0.25 = coord(5/20)
    
    Date
    2.10.1996 18:23:47
    Series
    Sprache und Computer; 15
    Source
    Angewandte Computerlinguistik: Vorträge im Rahmen der Jahrestagung 1995 der Gesellschaft für Linguistische Datenverarbeitung (GLDV) e.V., Regensburg, 30.-31.3.1995
  8. Winograd, T.: Software für Sprachverarbeitung (1984) 0.03
    0.03454656 = product of:
      0.13818625 = sum of:
        0.03501747 = weight(_text_:software in 1687) [ClassicSimilarity], result of:
          0.03501747 = score(doc=1687,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.43831247 = fieldWeight in 1687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=1687)
        0.010929758 = weight(_text_:und in 1687) [ClassicSimilarity], result of:
          0.010929758 = score(doc=1687,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.24487628 = fieldWeight in 1687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=1687)
        0.03501747 = weight(_text_:software in 1687) [ClassicSimilarity], result of:
          0.03501747 = score(doc=1687,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.43831247 = fieldWeight in 1687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=1687)
        0.02220408 = weight(_text_:der in 1687) [ClassicSimilarity], result of:
          0.02220408 = score(doc=1687,freq=8.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.4935974 = fieldWeight in 1687, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.078125 = fieldNorm(doc=1687)
        0.03501747 = weight(_text_:software in 1687) [ClassicSimilarity], result of:
          0.03501747 = score(doc=1687,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.43831247 = fieldWeight in 1687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=1687)
      0.25 = coord(5/20)
    
    Abstract
    Computers can handle linguistic symbols reliably and quickly, as word-processing programs show. Attempts to have them operate on meanings as well, however, have failed. Will the computer ever master the biggest problem of language processing, the ambiguity of natural languages?
    Source
    Spektrum der Wissenschaft. 1984, H.11, S.88-102
  9. Lonsdale, D.; Mitamura, T.; Nyberg, E.: Acquisition of large lexicons for practical knowledge-based MT (1994/95) 0.03
    0.034343183 = product of:
      0.11447728 = sum of:
        0.017148608 = weight(_text_:23 in 7409) [ClassicSimilarity], result of:
          0.017148608 = score(doc=7409,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.23759183 = fieldWeight in 7409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=7409)
        0.017148608 = weight(_text_:23 in 7409) [ClassicSimilarity], result of:
          0.017148608 = score(doc=7409,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.23759183 = fieldWeight in 7409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=7409)
        0.021010485 = weight(_text_:software in 7409) [ClassicSimilarity], result of:
          0.021010485 = score(doc=7409,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.2629875 = fieldWeight in 7409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=7409)
        0.017148608 = weight(_text_:23 in 7409) [ClassicSimilarity], result of:
          0.017148608 = score(doc=7409,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.23759183 = fieldWeight in 7409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=7409)
        0.021010485 = weight(_text_:software in 7409) [ClassicSimilarity], result of:
          0.021010485 = score(doc=7409,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.2629875 = fieldWeight in 7409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=7409)
        0.021010485 = weight(_text_:software in 7409) [ClassicSimilarity], result of:
          0.021010485 = score(doc=7409,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.2629875 = fieldWeight in 7409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=7409)
      0.3 = coord(6/20)
    
    Abstract
    Although knowledge based MT systems have the potential to achieve high translation accuracy, each successful application system requires a large amount of hand-coded lexical knowledge. Systems like KBMT-89 and its descendants have demonstrated how knowledge based translation can produce good results in technical domains with tractable domain semantics. Nevertheless, the magnitude of the development task for large scale applications with 10s of 1000s of domain concepts precludes a purely hand-crafted approach. The current challenge for the next generation of knowledge based MT systems is to utilize online textual resources and corpus analysis software in order to automate the most laborious aspects of the knowledge acquisition process. This partial automation can in turn maximize the productivity of human knowledge engineers and help to make large scale applications of knowledge based MT a viable approach. Discusses the corpus based knowledge acquisition methodology used in KANT, a knowledge based translation system for multilingual document production. This methodology can be generalized beyond the KANT interlingua approach for use with any system that requires similar kinds of knowledge.
    Date
    18. 5.1996 16:23:54
  10. Egger, W.: Helferlein für jedermann : Elektronische Wörterbücher (2004) 0.03
    0.033802867 = product of:
      0.13521147 = sum of:
        0.03501747 = weight(_text_:software in 1501) [ClassicSimilarity], result of:
          0.03501747 = score(doc=1501,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.43831247 = fieldWeight in 1501, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=1501)
        0.010929758 = weight(_text_:und in 1501) [ClassicSimilarity], result of:
          0.010929758 = score(doc=1501,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.24487628 = fieldWeight in 1501, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=1501)
        0.03501747 = weight(_text_:software in 1501) [ClassicSimilarity], result of:
          0.03501747 = score(doc=1501,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.43831247 = fieldWeight in 1501, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=1501)
        0.019229298 = weight(_text_:der in 1501) [ClassicSimilarity], result of:
          0.019229298 = score(doc=1501,freq=6.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.42746788 = fieldWeight in 1501, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.078125 = fieldNorm(doc=1501)
        0.03501747 = weight(_text_:software in 1501) [ClassicSimilarity], result of:
          0.03501747 = score(doc=1501,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.43831247 = fieldWeight in 1501, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.078125 = fieldNorm(doc=1501)
      0.25 = coord(5/20)
    
    Abstract
    The countless online dictionaries and the individual, in some cases excellent, electronic dictionaries are not covered here, since their advantages are partly offset by the following drawbacks: they require an Internet connection or a CD-ROM, and calling up the dictionaries or switching the language direction is time-consuming.
    Series
    Software: Der große Lexikon-Ratgeber
  11. McCune, B.P.; Tong, R.M.; Dean, J.S.: Rubric: a system for rule-based information retrieval (1985) 0.03
    0.032919236 = product of:
      0.16459617 = sum of:
        0.04202097 = weight(_text_:software in 1945) [ClassicSimilarity], result of:
          0.04202097 = score(doc=1945,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.525975 = fieldWeight in 1945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.09375 = fieldNorm(doc=1945)
        0.04202097 = weight(_text_:software in 1945) [ClassicSimilarity], result of:
          0.04202097 = score(doc=1945,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.525975 = fieldWeight in 1945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.09375 = fieldNorm(doc=1945)
        0.04202097 = weight(_text_:software in 1945) [ClassicSimilarity], result of:
          0.04202097 = score(doc=1945,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.525975 = fieldWeight in 1945, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.09375 = fieldNorm(doc=1945)
        0.038533263 = product of:
          0.077066526 = sum of:
            0.077066526 = weight(_text_:engineering in 1945) [ClassicSimilarity], result of:
              0.077066526 = score(doc=1945,freq=2.0), product of:
                0.10819342 = queryWeight, product of:
                  5.372528 = idf(docFreq=557, maxDocs=44218)
                  0.02013827 = queryNorm
                0.7123033 = fieldWeight in 1945, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.372528 = idf(docFreq=557, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1945)
          0.5 = coord(1/2)
      0.2 = coord(4/20)
    
    Source
    IEEE transactions on software engineering. 11(1985), S.939-945
  12. Latzer, F.-M.: Yo Computa! (1997) 0.03
    0.03247501 = product of:
      0.16237505 = sum of:
        0.04902446 = weight(_text_:software in 6005) [ClassicSimilarity], result of:
          0.04902446 = score(doc=6005,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.61363745 = fieldWeight in 6005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.109375 = fieldNorm(doc=6005)
        0.015301661 = weight(_text_:und in 6005) [ClassicSimilarity], result of:
          0.015301661 = score(doc=6005,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.34282678 = fieldWeight in 6005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=6005)
        0.04902446 = weight(_text_:software in 6005) [ClassicSimilarity], result of:
          0.04902446 = score(doc=6005,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.61363745 = fieldWeight in 6005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.109375 = fieldNorm(doc=6005)
        0.04902446 = weight(_text_:software in 6005) [ClassicSimilarity], result of:
          0.04902446 = score(doc=6005,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.61363745 = fieldWeight in 6005, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.109375 = fieldNorm(doc=6005)
      0.2 = coord(4/20)
    
    Abstract
    Powerful and inexpensive PC software now makes language processing attractive as a standard application for everyone.
  13. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.03
    0.031635623 = product of:
      0.1581781 = sum of:
        0.12082612 = weight(_text_:230 in 5429) [ClassicSimilarity], result of:
          0.12082612 = score(doc=5429,freq=2.0), product of:
            0.13547163 = queryWeight, product of:
              6.727074 = idf(docFreq=143, maxDocs=44218)
              0.02013827 = queryNorm
            0.89189243 = fieldWeight in 5429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.727074 = idf(docFreq=143, maxDocs=44218)
              0.09375 = fieldNorm(doc=5429)
        0.013115709 = weight(_text_:und in 5429) [ClassicSimilarity], result of:
          0.013115709 = score(doc=5429,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.29385152 = fieldWeight in 5429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.09375 = fieldNorm(doc=5429)
        0.013322448 = weight(_text_:der in 5429) [ClassicSimilarity], result of:
          0.013322448 = score(doc=5429,freq=2.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.29615843 = fieldWeight in 5429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.09375 = fieldNorm(doc=5429)
        0.01091383 = product of:
          0.03274149 = sum of:
            0.03274149 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.03274149 = score(doc=5429,freq=2.0), product of:
                0.07052079 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02013827 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.33333334 = coord(1/3)
      0.2 = coord(4/20)
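    The weights also make the idf formula visible: ClassicSimilarity uses idf(t) = 1 + ln(maxDocs / (docFreq + 1)), which is why the rare token "230" (docFreq=143) dominates this hit while a frequent token such as "und" (docFreq=13101) contributes little. A quick plain-Python check:

        from math import log

        def idf(doc_freq, max_docs=44218):
            # Lucene ClassicSimilarity idf
            return 1 + log(max_docs / (doc_freq + 1))

        print(round(idf(143), 6))    # ~6.727074  (token "230")
        print(round(idf(13101), 6))  # ~2.216367  (token "und")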
    
    Abstract
    The human translator is still superior to the computer in linguistic terms. Translation software has improved, but the problems inherent in the systems remain.
    Source
    c't. 2000, H.22, S.230-231
  14. Endres-Niggemeyer, B.: Sprachverarbeitung im Informationsbereich (1989) 0.03
    0.030935297 = product of:
      0.15467648 = sum of:
        0.045729626 = weight(_text_:23 in 4860) [ClassicSimilarity], result of:
          0.045729626 = score(doc=4860,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.63357824 = fieldWeight in 4860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.125 = fieldNorm(doc=4860)
        0.045729626 = weight(_text_:23 in 4860) [ClassicSimilarity], result of:
          0.045729626 = score(doc=4860,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.63357824 = fieldWeight in 4860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.125 = fieldNorm(doc=4860)
        0.017487612 = weight(_text_:und in 4860) [ClassicSimilarity], result of:
          0.017487612 = score(doc=4860,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.39180204 = fieldWeight in 4860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.125 = fieldNorm(doc=4860)
        0.045729626 = weight(_text_:23 in 4860) [ClassicSimilarity], result of:
          0.045729626 = score(doc=4860,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.63357824 = fieldWeight in 4860, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.125 = fieldNorm(doc=4860)
      0.2 = coord(4/20)
    
    Pages
    S.9-23
    Source
    Linguistische Datenverarbeitung und Neue Medien. Hrsg.: Winfried Lenders
  15. Barthel, J.; Ciesielski, R.: Regeln zu ChatGPT an Unis oft unklar : KI in der Bildung (2023) 0.03
    0.030172179 = product of:
      0.120688714 = sum of:
        0.028581016 = weight(_text_:23 in 925) [ClassicSimilarity], result of:
          0.028581016 = score(doc=925,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 925, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=925)
        0.028581016 = weight(_text_:23 in 925) [ClassicSimilarity], result of:
          0.028581016 = score(doc=925,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 925, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=925)
        0.028581016 = weight(_text_:23 in 925) [ClassicSimilarity], result of:
          0.028581016 = score(doc=925,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 925, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=925)
        0.01110204 = weight(_text_:der in 925) [ClassicSimilarity], result of:
          0.01110204 = score(doc=925,freq=2.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.2467987 = fieldWeight in 925, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.078125 = fieldNorm(doc=925)
        0.023843624 = product of:
          0.047687247 = sum of:
            0.047687247 = weight(_text_:29 in 925) [ClassicSimilarity], result of:
              0.047687247 = score(doc=925,freq=6.0), product of:
                0.070840135 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02013827 = queryNorm
                0.6731671 = fieldWeight in 925, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=925)
          0.5 = coord(1/2)
      0.25 = coord(5/20)
    
    Date
    29. 3.2023 13:23:26
    29. 3.2023 13:29:19
  16. Klein, A.; Weis, U.; Stede, M.: Der Einsatz von Sprachverarbeitungstools beim Sprachenlernen im Intranet (2000) 0.03
    0.028975526 = product of:
      0.1159021 = sum of:
        0.028581016 = weight(_text_:23 in 5542) [ClassicSimilarity], result of:
          0.028581016 = score(doc=5542,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 5542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=5542)
        0.028581016 = weight(_text_:23 in 5542) [ClassicSimilarity], result of:
          0.028581016 = score(doc=5542,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 5542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=5542)
        0.010929758 = weight(_text_:und in 5542) [ClassicSimilarity], result of:
          0.010929758 = score(doc=5542,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.24487628 = fieldWeight in 5542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.078125 = fieldNorm(doc=5542)
        0.028581016 = weight(_text_:23 in 5542) [ClassicSimilarity], result of:
          0.028581016 = score(doc=5542,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.3959864 = fieldWeight in 5542, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.078125 = fieldNorm(doc=5542)
        0.019229298 = weight(_text_:der in 5542) [ClassicSimilarity], result of:
          0.019229298 = score(doc=5542,freq=6.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.42746788 = fieldWeight in 5542, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.078125 = fieldNorm(doc=5542)
      0.25 = coord(5/20)
    
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
  17. Melzer, C.: Der Maschine anpassen : PC-Spracherkennung - Programme sind mittlerweile alltagsreif (2005) 0.03
    0.02858446 = product of:
      0.09528153 = sum of:
        0.021228215 = weight(_text_:software in 4044) [ClassicSimilarity], result of:
          0.021228215 = score(doc=4044,freq=6.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.26571283 = fieldWeight in 4044, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4044)
        0.011476245 = weight(_text_:und in 4044) [ClassicSimilarity], result of:
          0.011476245 = score(doc=4044,freq=18.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.25712007 = fieldWeight in 4044, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4044)
        0.021228215 = weight(_text_:software in 4044) [ClassicSimilarity], result of:
          0.021228215 = score(doc=4044,freq=6.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.26571283 = fieldWeight in 4044, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4044)
        0.016937435 = weight(_text_:der in 4044) [ClassicSimilarity], result of:
          0.016937435 = score(doc=4044,freq=38.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.37651968 = fieldWeight in 4044, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4044)
        0.021228215 = weight(_text_:software in 4044) [ClassicSimilarity], result of:
          0.021228215 = score(doc=4044,freq=6.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.26571283 = fieldWeight in 4044, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4044)
        0.0031832005 = product of:
          0.009549601 = sum of:
            0.009549601 = weight(_text_:22 in 4044) [ClassicSimilarity], result of:
              0.009549601 = score(doc=4044,freq=2.0), product of:
                0.07052079 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02013827 = queryNorm
                0.1354154 = fieldWeight in 4044, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4044)
          0.33333334 = coord(1/3)
      0.3 = coord(6/20)
    
    Content
    "Der Spracherkennung am Computer schien vor wenigen Jahren die Zukunft zu gehören. Geradezu euphorisch waren viele Computernutzer, als sich auf den Bildschirmen die ersten gesprochenen Sätze als Text darstellten. Doch die Spracherkennung erwies sich als anfällig, die Nachbearbeitung nahm manchmal mehr Zeit in Anspruch als gespart wurde. Dabei ist die Kommunikation des Menschen mit der Maschine über die Tastatur eigentlich höchst kompliziert - selbst geübte Schreiber sprechen schneller als sie tippen. Deshalb hat sich inzwischen viel getan: Im Preis und in der Genauigkeit sind viele Spracherkennungsprogramme heute alltagsreif. Die besten Systeme kosten aber noch immer mehrere hundert Euro, die günstigsten weisen Lücken auf. Letztlich gilt: Respektable Ergebnisse sind erreichbar, wenn sich der Mensch der Maschine anpasst. Die Stiftung Warentest in Berlin hat die sechs gängigsten Systeme auf den Prüfstand gestellt. Die ersten Ergebnisse waren ernüchternd: Das deutlich gesprochene "Johann Wolfgang von Goethe" wurde als "Juan Wolf kann Mohnblüte", "Jaun Wolfgang von Göbel" oder "Johann-Wolfgang Wohngüte" geschrieben. Grundsätzlich gilt: Bei einem einfachen Basiswortschatz sind die Ergebnisse genau, sobald es etwas spezieller wird, wird die Software erfinderisch. "Zweiter Weltkrieg" kann dann zu "Zeit für Geld kriegt" werden. Doch ebenso wie der Nutzer lernt auch das System. Bei der Software ist Lernfähigkeit Standard. Ohnehin muss der Benutzer das System einrichten, indem er vorgegebene Texte liest. Dabei wird das Programm der Stimme und der Sprechgeschwindigkeit angepasst. Hier gilt, dass der Anwender deutlich, aber ganz normal vorlesen sollte. Wer akzentuiert und übertrieben betont, wird später mit ungenauen Ausgaben bestraft. Erkennt das System auch nach dem Training einzelne Wörter nicht, können sie nachträglich eingefügt werden. Gleiches gilt für kompliziertere Orts- oder Eigennamen. Wie gut das funktioniert, beweist ein Gegentest: Liest ein anderer den selben Text vor, sinkt das Erkennungsniveau rapide. Die beste Lernfähigkeit attestierten die Warentester dem System "Voice Pro 10" von linguatec. Das war das mit Abstand vielseitigste, mit fast 200 Euro jedoch auch das teuerste Programm.
    A cheaper option is "Via Voice Standard" from IBM. The software costs about 50 euros but has considerable weaknesses in its learning capability; it still fares better, though, than "Voice Office Premium 10", which costs a good three times as much and was the only one of the six programs tested to receive no more than a "satisfactory" rating. "You don't read as much about speech recognition any more because it simply works," says Dorothee Wiegand of the Hanover-based computer magazine "c't". The technology - "Dragon Naturally Speaking" from ScanSoft, for instance - is mature: "Speech recognition is above all statistics, the evaluation of endless word possibilities. The real problem used to be the hardware," says Wiegand. Now that even simple home computers are fast and powerful, developers have far more room to work with. But even older computers cope with the systems; they just take a little longer. "Every byte makes speech recognition a bit faster, but it is not otherwise less accurate," confirms Kristina Henry of linguatec in Munich. For this manufacturer's products, too, "practising and speaking clearly matter more than any hardware". Even voices from dictation devices are recognised clearly, Henry assures: "We want to go a step further and make dictation possible on the move." The user could then dial a number, dictate a text in the car, for example, and find it "typed" at home. In principle, speech recognition software is now also suitable for the private computer. What is clear, however, is that even the best-spoken text has to be post-edited, and the user needs patience: just as the system learns, the human has to adapt to the system in pronunciation and speed. Then the results are remarkable indeed - and "Sexterminvereinbarung" instead of "zwecks Terminvereinbarung" is a thing of the past."
    Date
    3. 5.1997 8:44:22
  18. Wahlster, W.: Verbmobil : Erkennung, Analyse, Transfer, Generierung und Synthese von Spontansprache (2001) 0.03
    0.027472911 = product of:
      0.09157637 = sum of:
        0.017148608 = weight(_text_:23 in 5629) [ClassicSimilarity], result of:
          0.017148608 = score(doc=5629,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.23759183 = fieldWeight in 5629, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=5629)
        0.017148608 = weight(_text_:23 in 5629) [ClassicSimilarity], result of:
          0.017148608 = score(doc=5629,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.23759183 = fieldWeight in 5629, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=5629)
        0.018548414 = weight(_text_:und in 5629) [ClassicSimilarity], result of:
          0.018548414 = score(doc=5629,freq=16.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.41556883 = fieldWeight in 5629, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=5629)
        0.017148608 = weight(_text_:23 in 5629) [ClassicSimilarity], result of:
          0.017148608 = score(doc=5629,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.23759183 = fieldWeight in 5629, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.046875 = fieldNorm(doc=5629)
        0.013322448 = weight(_text_:der in 5629) [ClassicSimilarity], result of:
          0.013322448 = score(doc=5629,freq=8.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.29615843 = fieldWeight in 5629, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.046875 = fieldNorm(doc=5629)
        0.008259674 = product of:
          0.016519347 = sum of:
            0.016519347 = weight(_text_:29 in 5629) [ClassicSimilarity], result of:
              0.016519347 = score(doc=5629,freq=2.0), product of:
                0.070840135 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.02013827 = queryNorm
                0.23319192 = fieldWeight in 5629, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5629)
          0.5 = coord(1/2)
      0.3 = coord(6/20)
    
    Abstract
    Verbmobil is a long-term, interdisciplinary flagship project in the field of language technology. The Verbmobil system recognises spontaneous spoken language, analyses the input, translates it into a foreign language, generates a sentence and speaks it aloud. For selected subject areas (e.g. appointment negotiation, travel planning, remote maintenance), Verbmobil is intended to provide translation assistance in conversational situations with foreign partners. The joint project, in which information technology companies, universities and research centres cooperate, is funded by the Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF) in two phases (phase 1: 1993-1996; phase 2: 1997-2000). After the first phase, in which appointment negotiation dialogues between a German and a Japanese business partner were processed with English as the intermediate language, the second phase of Verbmobil focuses on the robust, bidirectional translation of spontaneous spoken dialogues from the travel planning and hotel reservation domains for the language pairs German-English (approx. 10,000 words) and German-Japanese (approx. 2,500 words).
    Date
    29. 1.1997 18:49:05
    Source
    Wechselwirkung. 23(2001) Nr.108, S.26-31
  19. Dietze, J.: Texterschließung : lexikalische Semantik und Wissensrepräsentation (1994) 0.03
    0.027068386 = product of:
      0.13534193 = sum of:
        0.04001342 = weight(_text_:23 in 2822) [ClassicSimilarity], result of:
          0.04001342 = score(doc=2822,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.55438095 = fieldWeight in 2822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=2822)
        0.04001342 = weight(_text_:23 in 2822) [ClassicSimilarity], result of:
          0.04001342 = score(doc=2822,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.55438095 = fieldWeight in 2822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=2822)
        0.015301661 = weight(_text_:und in 2822) [ClassicSimilarity], result of:
          0.015301661 = score(doc=2822,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.34282678 = fieldWeight in 2822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.109375 = fieldNorm(doc=2822)
        0.04001342 = weight(_text_:23 in 2822) [ClassicSimilarity], result of:
          0.04001342 = score(doc=2822,freq=2.0), product of:
            0.07217676 = queryWeight, product of:
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.02013827 = queryNorm
            0.55438095 = fieldWeight in 2822, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5840597 = idf(docFreq=3336, maxDocs=44218)
              0.109375 = fieldNorm(doc=2822)
      0.2 = coord(4/20)
    
    Footnote
    Rez.in: Knowledge organization 23(1996) no.2, S.116 (E. Mater)
  20. Helbig, H.: Die semantische Struktur natürlicher Sprache : Wissensrepräsentation mit MultiNet (2001) 0.03
    0.027042292 = product of:
      0.10816917 = sum of:
        0.028013978 = weight(_text_:software in 7072) [ClassicSimilarity], result of:
          0.028013978 = score(doc=7072,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.35064998 = fieldWeight in 7072, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=7072)
        0.008743806 = weight(_text_:und in 7072) [ClassicSimilarity], result of:
          0.008743806 = score(doc=7072,freq=2.0), product of:
            0.044633795 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.02013827 = queryNorm
            0.19590102 = fieldWeight in 7072, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=7072)
        0.028013978 = weight(_text_:software in 7072) [ClassicSimilarity], result of:
          0.028013978 = score(doc=7072,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.35064998 = fieldWeight in 7072, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=7072)
        0.015383439 = weight(_text_:der in 7072) [ClassicSimilarity], result of:
          0.015383439 = score(doc=7072,freq=6.0), product of:
            0.044984195 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.02013827 = queryNorm
            0.34197432 = fieldWeight in 7072, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0625 = fieldNorm(doc=7072)
        0.028013978 = weight(_text_:software in 7072) [ClassicSimilarity], result of:
          0.028013978 = score(doc=7072,freq=2.0), product of:
            0.07989157 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.02013827 = queryNorm
            0.35064998 = fieldWeight in 7072, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.0625 = fieldNorm(doc=7072)
      0.25 = coord(5/20)
    
    Abstract
    The methodology of 'Multilayered Extended Semantic Networks' (MultiNet) is suited both to theoretical investigations and to the automatic processing of natural language on the computer. The results presented are embedded in a system of software tools that ensure practical use of the MultiNet representational means as a formalism for meaning representation.
    Footnote
    2. Aufl. 2008 u.d.T.: Wissensverarbeitung und die Semantik der natürlichen Sprache
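
Entry 20 above describes meaning representation with multilayered extended semantic networks. Purely as a hypothetical illustration of that general idea - not Helbig's actual MultiNet formalism, whose sorts, layers and relation inventory are defined in the book itself - such a network can be sketched as typed concept nodes joined by labelled relations, each node carrying layer attributes:

    from dataclasses import dataclass, field

    # Hypothetical, simplified sketch of a layered semantic network,
    # loosely inspired by the idea described in entry 20; it is NOT the
    # MultiNet formalism itself, and the relation labels are only examples.

    @dataclass
    class Node:
        concept: str                                # lexical concept, e.g. "System"
        sort: str                                   # coarse ontological sort
        layers: dict = field(default_factory=dict)  # layer attributes, e.g. {"FACT": "real"}

    @dataclass
    class Edge:
        relation: str   # labelled relation between nodes
        source: Node
        target: Node

    # Tiny example graph for "Das System analysiert die Eingabe."
    system = Node("System", "object", {"FACT": "real"})
    analyse = Node("analysieren", "event", {"FACT": "real"})
    eingabe = Node("Eingabe", "object", {"FACT": "real"})

    graph = [
        Edge("AGT", analyse, system),   # agent of the analysing event
        Edge("OBJ", analyse, eingabe),  # object affected by the event
    ]

    for e in graph:
        print(f"{e.source.concept} -[{e.relation}]-> {e.target.concept}")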

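The relevance figures attached to each entry are Lucene ClassicSimilarity "explain" breakdowns: every matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and the per-entry score is the sum of these contributions scaled by the coordination factor (e.g. coord(4/20) = 0.2 above). As a minimal cross-check - a sketch, with variable names of our own choosing - the following Python snippet recomputes the "und" contribution for doc 4860 from the numbers shown above.

    import math

    # Recompute the "und" term contribution for doc 4860 from the
    # ClassicSimilarity explanation above; input values are copied
    # (rounded) from the explain output.
    tf = math.sqrt(2.0)       # 1.4142135 = sqrt(termFreq), termFreq = 2
    idf = 2.216367            # 1 + ln(44218 / (13101 + 1)) for "und"
    query_norm = 0.02013827
    field_norm = 0.125        # fieldNorm(doc=4860)

    query_weight = idf * query_norm        # ~ 0.044633795
    field_weight = tf * idf * field_norm   # ~ 0.39180204
    contribution = query_weight * field_weight
    print(f"{contribution:.7f}")           # ~ 0.0174876, matching 0.017487612 above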

Languages

  • d 193
  • e 178
  • ru 4
  • m 3
  • chi 2
  • f 1

Types

  • a 278
  • m 62
  • el 43
  • s 22
  • x 12
  • p 2
  • b 1
  • d 1
