Search (239 results, page 1 of 12)

  • theme_ss:"Computerlinguistik"
  • year_i:[1990 TO 2000}
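The second filter uses Lucene/Solr range syntax with mixed brackets: `[` makes the lower bound inclusive and `}` makes the upper bound exclusive, so the facet covers publication years 1990 through 1999. A minimal sketch of building such a filtered query as URL parameters (the parameter names `q`, `fq`, and `debugQuery` are standard Solr; the surrounding request is left out as an assumption about the deployment):

```python
from urllib.parse import urlencode

# Solr range syntax: [ and ] are inclusive bounds, { and } are exclusive.
# year_i:[1990 TO 2000} therefore matches 1990 <= year < 2000.
params = urlencode({
    "q": "*:*",
    "fq": ['theme_ss:"Computerlinguistik"', "year_i:[1990 TO 2000}"],
    "debugQuery": "true",  # asks Solr for explain output like the scoring tree under result 1
}, doseq=True)
print(params)
```

`doseq=True` repeats the `fq` key once per filter, which is how Solr expects multiple filter queries.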
  1. Wettler, M.; Rapp, R.; Ferber, R.: Freie Assoziationen und Kontiguitäten von Wörtern in Texten (1993) 0.03
    0.03208051 = product of:
      0.06416102 = sum of:
        0.060400415 = weight(_text_:von in 2140) [ClassicSimilarity], result of:
          0.060400415 = score(doc=2140,freq=2.0), product of:
            0.12806706 = queryWeight, product of:
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.04800207 = queryNorm
            0.47163114 = fieldWeight in 2140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6679487 = idf(docFreq=8340, maxDocs=44218)
              0.125 = fieldNorm(doc=2140)
        0.003760605 = product of:
          0.011281814 = sum of:
            0.011281814 = weight(_text_:a in 2140) [ClassicSimilarity], result of:
              0.011281814 = score(doc=2140,freq=2.0), product of:
                0.055348642 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.04800207 = queryNorm
                0.20383182 = fieldWeight in 2140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=2140)
          0.33333334 = coord(1/3)
      0.5 = coord(2/4)
    
    Type
    a
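The explain tree above can be recomputed by hand. A minimal Python sketch, assuming Lucene's ClassicSimilarity defaults (tf = √freq, no index- or query-time boosts); all numeric constants are taken directly from the tree:

```python
import math

# Constants from the explain tree for result 1 (doc 2140).
QUERY_NORM = 0.04800207

def clause_score(freq, idf, field_norm, coord=1.0):
    """Score of one weight(_text_:term) clause:
    coord * (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
    query_weight = idf * QUERY_NORM       # queryWeight
    field_weight = tf * idf * field_norm  # fieldWeight
    return coord * query_weight * field_weight

von = clause_score(freq=2.0, idf=2.6679487, field_norm=0.125)
a = clause_score(freq=2.0, idf=1.153047, field_norm=0.125, coord=1 / 3)

score = 0.5 * (von + a)  # coord(2/4): 2 of 4 query clauses matched
print(round(score, 8))   # ≈ 0.03208051, the displayed document score
```

The same formula reproduces every per-document score in this result list; only freq, idf, fieldNorm, and the coord factors vary.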
  2. Rapp, R.: ¬Die maschinelle Generierung von Wörterbüchern aus zweisprachigen Texten (1994) 0.03
    Type
    a
  3. Stock, M.; Stock, W.G.: Literaturnachweis- und Terminologiedatenbank : die Erfassung von Fachliteratur und Fachterminologie eines Fachgebiets in einer kombinierten Datenbank (1991) 0.03
    Abstract
    In specialized scientific fields, a terminology database can be built at the same time as a literature database. A suitable documentation method is the text-word method with a translation relation. Print-layout programs built with the LBase software package allow the output of bibliographies and dictionaries
    Type
    a
  4. Rolland, M.T.: Logotechnik als Grundlage einer vollautomatischen Sprachverarbeitung (1995) 0.03
    Abstract
    With the help of logotechnics, the purely semantics-oriented method of language processing, it is possible to see through the rules and structural regularities of language and thus make it accessible to fully automatic processing. Semantics here means the mental side of language, which implies the syntax. At the centre of the discussion stands the word, its content, and the language structures that content determines. On the basis of these insights into the structure of language, the design of a dialogue system, specifically a system for knowledge retrieval, is presented. The article closes with pointers to further possible applications, of which machine translation is of central importance
    Type
    a
  5. Sonnenberger, G.: Automatische Wissensakquisition aus Texten : Textparsing (1990) 0.02
    Source
    Pragmatische Aspekte beim Entwurf und Betrieb von Informationssystemen: Proc. 1. Int. Symposiums für Informationswissenschaft, Universität Konstanz, 17.-19.10.1990. Hrsg.: J. Herget u. R. Kuhlen
    Type
    a
  6. Zimmermann, H.H.: Wortrelationierung in der Sprachtechnik : Stilhilfen, Retrievalhilfen, Übersetzungshilfen (1992) 0.02
    Source
    Kognitive Ansätze zum Ordnen und Darstellen von Wissen. 2. Tagung der Deutschen ISKO Sektion einschl. der Vorträge des Workshops "Thesauri als Werkzeuge der Sprachtechnologie", Weilburg, 15.-18.10.1991
    Type
    a
  7. Mehrle, J.P.: Computer-based system for classifying documents into a hierarchy and linking the classifications to the hierarchy (1998) 0.02
    Footnote
    Patent by LEXIS-NEXIS in connection with FREESTYLE
  8. Rolland, M.T.: Grammatikstandardisierung im Bereich der Sprachverarbeitung (1996) 0.02
    Abstract
    Every language has its own structure and thus exhibits its specific grammar, conditioned by its semantics. The article shows how a comprehensive grammar can be created for a language, here German; the basic procedures also apply to other languages. Such a grammar is not a partial grammar but makes explicit the entire structure contained in a language. It can therefore serve as a uniform basis for language processing across the most diverse subject areas, in particular for building dialogue systems and machine translation systems
    Type
    a
  9. Ladewig, C.: 'Information Retrieval ohne Linguistik?' : Erwiderung zu dem Artikel von Gerda Ruge und Sebastian Goeser, Nfd 49(1998) H.6, S.361-369 (1998) 0.02
    Abstract
    A rebuttal is given to studies of the effectiveness of information retrieval systems based on the search parameters precision and recall. These studies rest on relevance determinations or estimates; their inconsistency is clarified and solutions are offered
    Type
    a
  10. Xianghao, G.; Yixin, Z.; Li, Y.: ¬A new method of news text understanding and abstracting based on speech acts theory (1998) 0.02
    Abstract
    Presents a method for the automated analysis and comprehension of foreign affairs news produced by a Chinese news agency. Notes that the development of the method was preceded by a study of the structuring rules of the news. Describes how an abstract of the news story is produced automatically from the analysis. Stresses the main aim of the work, which is to use speech act theory to analyse and classify sentences
    Type
    a
  11. Luckhardt, H.-D.: Klassifikationen und Thesauri für automatische Terminologie-Unterstützung, maschinelle Übersetzung und computergestützte Übersetzung (1992) 0.02
    Source
    Kognitive Ansätze zum Ordnen und Darstellen von Wissen. 2. Tagung der Deutschen ISKO Sektion einschl. der Vorträge des Workshops "Thesauri als Werkzeuge der Sprachtechnologie", Weilburg, 15.-18.10.1991
    Type
    a
  12. gk: Elektronische Dokumentenerschließung : Automatische Übersetzung (1995) 0.02
    Abstract
    Scientists have long been working on automating translation, so far with only modest success. Too expensive, too rigid, too inaccurate: these were the main obstacles to computer-assisted translation. With the 'PT' (Personal Translator), IBM sets another milestone after the 'PC' (Personal Computer)
    Type
    a
  13. Dreehsen, B.: ¬Der PC als Dolmetscher (1998) 0.02
    Abstract
    For English web pages and foreign-language correspondence, translation software is helpful: a mouse click renders the text in German and vice versa. The new versions already convey the content well in substance. CHIP tested the performance of 5 programs
    Type
    a
  14. Volk, M.; Mittermaier, H.; Schurig, A.; Biedassek, T.: Halbautomatische Volltextanalyse, Datenbankaufbau und Document Retrieval (1992) 0.02
    Abstract
    In this article we describe a system for the analysis of short articles. The system works semi-automatically: the article is first analysed by the system and then presented to the user for post-editing. The information obtained in this way is stored in a database record. Via the database, implemented in dBase IV, queries and access to the original texts are then efficiently possible. The core of this article concerns the semi-automatic analysis. We describe our procedure for parameterised pattern matching as well as linguistic heuristics for identifying noun phrases and prepositional phrases. The system was developed for practical use in the Bonn office of the 'Forum InformatikerInnen für Frieden und gesellschaftliche Verantwortung e.V. (FIFF)'
    Type
    a
  15. Winiwarter, W.: Bewältigung der Informationsflut : Stand der Computerlinguistik (1996) 0.02
    Abstract
    In many areas of computational linguistics, an initial euphoric mood of departure has given way to a resigned phase of stagnation. In equal measure, however, this has made room for a more realistic view of things, one that bids farewell to 'toy systems' and turns to practical problems. One of the most pressing of these is the efficient handling of a flood of information that grows from day to day. The present article gives an up-to-date overview of the techniques currently available. The emphasis is on information extraction systems, which have already achieved considerable success on the basis of international evaluation programmes and generally available linguistic resources
    Type
    a
  16. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
    Type
    a
  17. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
    Type
    a
  18. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    Date
    31. 7.1996 9:22:19
    Type
    a
  19. New tools for human translators (1997) 0.02
    Abstract
    A special issue devoted to the theme of new tools for human translators
    Date
    31. 7.1996 9:22:19
  20. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    Date
    28. 2.1999 10:48:22
    Type
    a

Types

  • a 202
  • m 19
  • s 16
  • el 5
  • pat 2
  • b 1
  • d 1
  • r 1
