Search (106 results, page 1 of 6)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.13
    0.1342984 = product of:
      0.2685968 = sum of:
        0.06311143 = product of:
          0.18933429 = sum of:
            0.18933429 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.18933429 = score(doc=562,freq=2.0), product of:
                0.33688295 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03973608 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.18933429 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.18933429 = score(doc=562,freq=2.0), product of:
            0.33688295 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03973608 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.01615107 = product of:
          0.03230214 = sum of:
            0.03230214 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.03230214 = score(doc=562,freq=2.0), product of:
                0.13914898 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03973608 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
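The relevance number on each hit comes from a Lucene-style ClassicSimilarity computation (the [ClassicSimilarity] label in the breakdown above). As a minimal sketch, assuming the stock Lucene formulas, the leaf weight 0.18933429 for the term "3a" can be reproduced from the figures shown:

```python
import math

def classic_weight(freq, idf, query_norm, field_norm):
    """One term's leaf weight, as in a ClassicSimilarity explain tree:
    weight = queryWeight * fieldWeight
           = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
    query_weight = idf * query_norm       # queryWeight  = 0.33688295
    field_weight = tf * idf * field_norm  # fieldWeight  = 0.56201804
    return query_weight * field_weight

# idf itself follows 1 + ln(maxDocs / (docFreq + 1)):
idf = 1 + math.log(44218 / (24 + 1))               # ~ 8.478011
w = classic_weight(2.0, idf, 0.03973608, 0.046875)  # ~ 0.18933, up to float32 rounding
```

The coord(1/3) and coord(3/6) factors then scale the summed term weights by the fraction of query clauses a document matched.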
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.08
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.07
    
    Content
    A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.02
    
    Classification
    Spr B 68 / Computerlinguistik
    Date
    14. 4.2007 10:04:22
    SBB
    Spr B 68 / Computerlinguistik
  5. Schürmann, H.: Software scannt Radio- und Fernsehsendungen : Recherche in Nachrichtenarchiven erleichtert (2001) 0.02
    
    Content
    To make media monitoring easier for companies and agencies, researchers at the University of Duisburg are currently developing a system for automatic topic detection in radio and television. The so-called Alert system is intended to help users filter the speech information relevant to them out of news broadcasts and process it further. Because the analysis is performed automatically by computer, several programmes can be monitored around the clock. At present, information from TV and radio broadcasts is still gathered the classic way: a person watches, listens, reads and evaluates. That is enormously time-consuming and, for a company that wants, say, to monitor its competitors or have its media presence documented, also very expensive. This work could be automated with a speech recognizer, the Duisburg researchers reasoned. Together with partners from Germany, France and Portugal, they are now working in a Europe-wide project on developing such a technology (http://alert.uni-duisburg.de). Two media-monitoring companies are also involved in the project: Observer Argus Media GmbH from Baden-Baden and the French company Secodip. "Our work would already be easier if the information appearing about our customers in the media were pre-selected," says Simone Holderbach, head of product development at Observer, describing her interest in the technology. And how does Alert work? The speech recognition system is trained to monitor news broadcasts on radio and television: everything that is said, whether by the newsreader, a reporter or an interviewee, is converted into text by automatic speech recognition. Topics and keywords are recognized and stored, then compared with the user's search terms. Matches are displayed and reported to the user automatically.
    Conventional speech recognition technology cannot be used for media monitoring, because it was developed for a different purpose, stresses Prof. Gerhard Rigoll, head of the Technical Informatics group at the University of Duisburg. The Alert software was trained thoroughly for converting speech into text: around 350 million words from newspaper texts, audio and video material have been processed so far. The system works in three languages. Still, the automatically produced text is not entirely error-free, Rigoll concedes. The recognition rate currently lies between 40 and 70 percent, "and that will not change in the foreseeable future." Overlaid music or strong background noise in reports leads to inaccuracies in the text conversion. The Duisburg scientists have therefore developed methods that go beyond conventional keyword search and allow content-oriented assignment. "The user then also receives news items that fit the topic even though the keyword itself never appears," says Rigoll, summing up the technique's advantage. If, for example, "Ölpreis" is entered as a search term, news items in which oil companies and energy agencies play a role are also displayed. Rigoll: "The Alert system reads between the lines, so to speak." The research project was started a year ago and runs until mid-2002. Anyone who wants to learn about the state of the technology can do so this week at the industrial fair in Hanover, where the Alert system is presented at the joint stand "Forschungsland NRW" in hall 18, stand M12.
    Source
    Handelsblatt. Nr.79 vom 24.4.2001, S.22
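The article above describes Alert's pipeline: broadcast speech is transcribed, topics and keywords are extracted and stored, and these are compared with the user's search terms. A minimal sketch of that final matching step (the function name and structure are illustrative assumptions, not the Alert implementation):

```python
def match_alerts(transcript_keywords, user_terms):
    """Return the user's search terms that occur among the keywords
    extracted from a recognized broadcast transcript (case-insensitive)."""
    found = {k.lower() for k in transcript_keywords}
    return sorted(t for t in user_terms if t.lower() in found)

# A user tracking the oil market would be notified for "Ölpreis":
hits = match_alerts(["Ölpreis", "Energieagentur", "OPEC"], ["Ölpreis", "Inflation"])
```

The content-oriented matching Rigoll describes goes beyond such a literal lookup, e.g. surfacing oil-company stories for the topic "Ölpreis" even when the keyword itself never appears.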
  6. Endres-Niggemeyer, B.: Sprachverarbeitung im Informationsbereich (1989) 0.01
    
  7. Natürlichsprachlicher Entwurf von Informationssystemen (1996) 0.01
    
    Editor
    Ortner, E., B. Schienmann and H. Thoma
  8. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.01
    
    Date
    23.11.2023 19:07:22
  9. Caseiro, D.: Automatic language identification bibliography : Last Update: 20 September 1999 (1999) 0.01
    
    Type
    b
  10. Campe, P.: Case, semantic roles, and grammatical relations : a comprehensive bibliography (1994) 0.01
    
    Type
    b
  11. Vichot, F.; Wolinksi, F.; Tomeh, J.; Guennou, S.; Dillet, B.; Aydjian, S.: High precision hypertext navigation based on NLP automation extractions (1997) 0.01
    
  12. Jones, D.: Analogical natural language processing (1996) 0.01
    
    Classification
    Spr B 68 / Computerlinguistik
    SBB
    Spr B 68 / Computerlinguistik
  13. Yang, C.C.; Luk, J.: Automatic generation of English/Chinese thesaurus based on a parallel corpus in laws (2003) 0.01
    
    Abstract
    The information available in languages other than English on the World Wide Web is increasing significantly. According to a report from Computer Economics in 1999, 54% of Internet users are English speakers ("English Will Dominate Web for Only Three More Years," Computer Economics, July 9, 1999, http://www.computereconomics.com/new4/pr/pr990610.html). However, it is predicted that there will be only a 60% increase in Internet users among English speakers versus a 150% growth among non-English speakers over the next five years. By 2005, 57% of Internet users will be non-English speakers. A report by CNN.com in 2000 showed that the number of Internet users in China had increased from 8.9 million to 16.9 million from January to June 2000 ("Report: China Internet users double to 17 million," CNN.com, July, 2000, http://cnn.org/2000/TECH/computing/07/27/china.internet.reut/index.html). According to Nielsen/NetRatings, there was a dramatic leap from 22.5 million to 56.6 million Internet users from 2001 to 2002. China had become the second-largest global at-home Internet population in 2002 (the US Internet population was 166 million) (Robyn Greenspan, "China Pulls Ahead of Japan," Internet.com, April 22, 2002, http://cyberatias.internet.com/big-picture/geographics/article/0,,5911_1013841,00.html). All of this evidence reveals the importance of cross-lingual research to satisfy the needs of the near future. Digital library research has focused on structural and semantic interoperability in the past. Searching and retrieving objects across variations in protocols, formats and disciplines have been widely explored (Schatz, B., & Chen, H. (1999). Digital libraries: technological advances and social impacts. IEEE Computer, Special Issue on Digital Libraries, February, 32(2), 45-50.; Chen, H., Yen, J., & Yang, C.C. (1999). International activities: development of Asian digital libraries. IEEE Computer, Special Issue on Digital Libraries, 32(2), 48-49.).
    However, research on crossing language boundaries, especially between European and Oriental languages, is still at an initial stage. In this proposal, we put our focus on cross-lingual semantic interoperability by developing automatic generation of a cross-lingual thesaurus based on an English/Chinese parallel corpus. When searchers encounter retrieval problems, professional librarians usually consult the thesaurus to identify other relevant vocabularies. For the problem of searching across language boundaries, a cross-lingual thesaurus, which is generated by co-occurrence analysis and a Hopfield network, can be used to generate additional semantically relevant terms that cannot be obtained from a dictionary. In particular, the automatically generated cross-lingual thesaurus is able to capture unknown words that do not exist in a dictionary, such as names of persons, organizations, and events. Due to Hong Kong's unique historical background, both English and Chinese are used as official languages in all legal documents. Therefore, English/Chinese cross-lingual information retrieval is critical for applications in the courts and the government. In this paper, we develop an automatic thesaurus by the Hopfield network based on a parallel corpus collected from the Web site of the Department of Justice of the Hong Kong Special Administrative Region (HKSAR) Government. Experiments are conducted to measure the precision and recall of the automatically generated English/Chinese thesaurus. The results show that such a thesaurus is a promising tool for retrieving relevant terms, especially in a language other than that of the input term. The direct translation of the input term can also be retrieved in most cases.
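The co-occurrence analysis this abstract mentions can be sketched generically as follows; this is a Dice-style association score over aligned English/Chinese sentence pairs, an illustrative stand-in rather than the authors' Hopfield-network method:

```python
from collections import Counter
from itertools import product

def cooccurrence_scores(aligned_pairs):
    """Score (English term, Chinese term) pairs by how often they appear
    in the same aligned sentence pair of a parallel corpus, normalized
    by each term's marginal frequency (a Dice-like association score)."""
    pair_counts = Counter()
    en_counts, zh_counts = Counter(), Counter()
    for en_terms, zh_terms in aligned_pairs:
        en_counts.update(set(en_terms))
        zh_counts.update(set(zh_terms))
        for e, z in product(set(en_terms), set(zh_terms)):
            pair_counts[(e, z)] += 1
    return {p: 2 * c / (en_counts[p[0]] + zh_counts[p[1]])
            for p, c in pair_counts.items()}
```

High-scoring pairs then serve as candidate cross-lingual thesaurus entries, including names that no dictionary covers.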
  14. Sabourin, C.F. (Bearb.): Computational linguistics in information science : bibliography (1994) 0.01
    
    Type
    b
  15. Dreehsen, B.: ¬Der PC als Dolmetscher (1998) 0.01
    
  16. Sprachtechnologie : ein Überblick (2012) 0.01
    
    Abstract
    Serious attempts to process human language by machine have existed for more than half a century. Machine translation and "natural" dialogue with computers were among the first ideas that staked out the field of what later became computational linguistics, or language technology, and guided its agenda. Today this field, also called natural language processing (NLP), is highly diversified: through the rapid development of computer science, much that was previously unimaginable has become reality (e.g. automatic telephone information services), and some things once impossible have at least become possible (e.g. handhelds with speech input and output serving as personal digital (information) assistants). Computational linguistics has various applications, some of which have made the leap into commercial use (e.g. dictation systems, text classification, machine translation). Natural language systems (NLS) of the most varied functionality (e.g. for answering arbitrary questions or generating complex texts) are still being researched intensively, even though the lofty goals of the early days are far from reached (and have accordingly been scaled back). Given the wide range of activities in computational linguistics and language technology, however, where natural language processing stands today is neither obvious nor easy to find out (for students of the field, let alone laypeople). One aim of this book is to improve the literature in this respect by compiling the system-related aspects of computational linguistics into an overview of language technology.
  17. Rettinger, A.; Schumilin, A.; Thoma, S.; Ell, B.: Learning a cross-lingual semantic representation of relations expressed in text (2015) 0.01
    
  18. Hofstadter, D.: Artificial neural networks today are not conscious (2022) 0.01
    
    Content
    Cf. also: Agüera y Arcas, B.: Artificial neural networks are making strides towards consciousness.
  19. Agüera y Arcas, B.: Artificial neural networks are making strides towards consciousness (2022) 0.01
    
  20. Lutz-Westphal, B.: ChatGPT und der "Faktor Mensch" im schulischen Mathematikunterricht (2023) 0.01
    

Languages

  • e 70
  • d 31
  • m 5
  • f 2

Types

  • a 74
  • el 16
  • m 15
  • s 13
  • b 3
  • x 3
  • p 2
  • d 1
