Search (113 results, page 1 of 6)

  • theme_ss:"Computerlinguistik"
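Each hit below carries a Lucene ClassicSimilarity relevance score (the figure after the year, e.g. 0.24 for the top hit). As a minimal sketch, assuming Lucene's classic tf-idf formulation (tf = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm) and reusing the constants the engine reported in its explain output for the top hit's rarest query term, one term weight is assembled like this:

```python
import math

# Constants reported by the engine's explain output for the top hit
# (term "3a" in doc 862); the formulas are Lucene ClassicSimilarity's.
IDF = 8.478011          # idf(docFreq=24, maxDocs=44218)
QUERY_NORM = 0.03875087
FREQ = 2.0              # occurrences of the term in the field
FIELD_NORM = 0.046875

tf = math.sqrt(FREQ)                   # 1.4142135
query_weight = IDF * QUERY_NORM        # 0.32853028
field_weight = tf * IDF * FIELD_NORM   # 0.56201804
print(f"{query_weight * field_weight:.8f}")  # ~0.18463995, the reported term weight
```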
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.24
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and from sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and a simple grammatical set for understanding the writing mechanics of chatbots, evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
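The abstract above scores text against the 2019 OpenAI GPT-2 Output Detector. A hedged sketch of that kind of check, not the authors' exact setup: the model id and its Real/Fake labels are assumptions based on the detector's public Hugging Face mirror.

```python
from transformers import pipeline

# Assumed checkpoint: the publicly mirrored RoBERTa-based GPT-2 output detector.
detector = pipeline("text-classification",
                    model="openai-community/roberta-base-openai-detector")

sample = "I propose to consider the question, 'Can machines think?'"
result = detector(sample)[0]
print(result["label"], round(result["score"], 3))  # e.g. "Real" with a confidence score
```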
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.22
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.14
    Content
    A thesis presented to the University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf
    Date
    10. 1.2013 19:22:47
  4. Hodgson, J.P.E.: Knowledge representation and language in AI (1991) 0.10
    Abstract
    The aim of this book is to highlight the relationship between knowledge representation and language in artificial intelligence, and in particular the way in which the choice of representation influences the language used to discuss a problem - and vice versa. Opening with a discussion of knowledge representation methods, and following this with a look at reasoning methods, the author begins to make his case for the intimate relationship between language and representation. He shows how each representation method fits particularly well with some reasoning methods and less so with others, using specific languages as examples. The question of representation change, an important and complex issue about which very little is known, is addressed. Dr Hodgson gathers together recent work on problem solving, showing how, in some cases, it has been possible to use representation changes to recast problems into a language that makes them easier to solve. The author maintains throughout that the relationships this book explores lie at the heart of the construction of large systems, examining a number of the current large AI systems from the viewpoint of representation and language to prove his point.
    Classification
    ST 285 Informatik / Monographien / Software und -entwicklung / Computer supported cooperative work (CSCW), Groupware
    RVK
    ST 285 Informatik / Monographien / Software und -entwicklung / Computer supported cooperative work (CSCW), Groupware
  5. Kay, M.: The proper place of men and machines in language translation (1997) 0.03
    Abstract
    Machine translation stands no chance of filling actual needs for translation because, although there has been progress in relevant areas of computer science, advances in linguistics have not touched the core problems. Cooperative man-machine systems need to be developed. The author proposes a translator's amanuensis, incorporating into a word processor some simple facilities peculiar to translation. Gradual enhancements of such a system could lead to the original goal of machine translation.
    Date
    31. 7.1996 9:22:19
  6. Meyer, J.: Die Verwendung hierarchisch strukturierter Sprachnetzwerke zur redundanzarmen Codierung von Texten (1989) 0.03
    Imprint
    Darmstadt : Technische Hochschule
  7. Nissim, M.; Zaninello, A.: Modeling the internal variability of multiword expressions through a pattern-based method (2013) 0.03
    Abstract
    The issue of internal variability of multiword expressions (MWEs) is crucial to their identification and extraction in running text. We present a corpus-supported computational study of Italian MWEs, aimed at defining an automatic method for modeling internal variation, exploiting frequency and part-of-speech (POS) information. We do so by deriving an XML-encoded lexicon of MWEs based on a manually compiled dictionary, which is then projected onto a large corpus. Since a search for fixed forms suffers from low recall, while an unconstrained flexible search for lemmas yields a loss in precision, we suggest a procedure aimed at maximizing precision in the identification of MWEs within a flexible search. Our method builds on the idea that internal variability can be modelled via the novel introduction of variation patterns, which work over POS patterns and can be used as working tools for controlling precision. We also compare the performance of variation patterns to that of association measures, and explore the possibility of using variation patterns in MWE extraction in addition to identification. Finally, we suggest that corpus-derived, pattern-related information can be included in the original MWE lexicon by means of an enriched coding and the creation of an XML-based repository of patterns.
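A minimal sketch of the variation-pattern idea described above, assuming a toy POS tagset and treating a pattern as a regular expression over a sentence's POS sequence (the pattern and the Italian example are illustrative, not taken from the paper):

```python
import re

# A variation pattern generalizes an MWE's POS sequence so that inflected or
# internally modified variants are still matched, while unrelated sequences are not.
def matches(pattern: str, tagged: list[tuple[str, str]]) -> bool:
    pos_seq = " ".join(pos for _, pos in tagged)
    return re.fullmatch(pattern, pos_seq) is not None

# Base MWE "punto di vista" (NOUN PREP NOUN), optionally allowing one
# adjective after the head noun:
PATTERN = r"NOUN( ADJ)? PREP NOUN"

print(matches(PATTERN, [("punto", "NOUN"), ("di", "PREP"), ("vista", "NOUN")]))   # True
print(matches(PATTERN, [("punto", "NOUN"), ("nuovo", "ADJ"),
                        ("di", "PREP"), ("vista", "NOUN")]))                      # True
print(matches(PATTERN, [("parlo", "VERB"), ("di", "PREP"), ("vista", "NOUN")]))   # False
```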
  8. Schürmann, H.: Software scannt Radio- und Fernsehsendungen : Recherche in Nachrichtenarchiven erleichtert (2001) 0.02
    Content
    To make it easier for companies and agencies to monitor the media, researchers at the university in Duisburg are currently developing a system for automatic topic detection in radio and television. The so-called Alert system is intended to help users filter the spoken information relevant to them out of news broadcasts and process it further. Because the analysis is performed automatically by computer, several programmes can be monitored around the clock. So far, information from TV and radio broadcasts has been gathered the classical way: a person watches, listens, reads and evaluates. That is enormously time-consuming and, for a company that wants to monitor its competitors or document its media presence, for example, also very expensive. This work could be automated with a speech recognizer, the Duisburg researchers reasoned. Together with partners from Germany, France and Portugal, they are now developing the corresponding technology in a Europe-wide project (http://alert.uni-duisburg.de). Two media-monitoring companies are also involved: Observer Argus Media GmbH of Baden-Baden and the French company Secodip. "Our work would already be easier if the information that appears about our clients in the media were pre-selected," says Simone Holderbach, head of product development at Observer, describing her interest in the technology. And how does Alert work? The speech recognition system is trained to monitor news broadcasts on radio and television: everything that is said - whether by the newsreader, a reporter or an interviewee - is converted into text by automatic speech recognition. Topics and keywords are recognized and stored along the way, then compared with the user's search terms; matches are displayed and reported to the user automatically. Conventional speech recognition technology cannot be used for media monitoring, stresses Prof. Gerhard Rigoll, head of the Technical Computer Science group at the Duisburg university, because it was developed for a different purpose. To convert speech into text, the Alert software has been trained thoroughly: around 350 million words from newspaper texts and audio and video material have been processed so far, and the system works in three languages. Yet the automatically produced text is not entirely error-free, Rigoll concedes: the recognition rate currently lies between 40 and 70 percent, "and that will not change in the foreseeable future." Music overlays and strong background noise in reports cause inaccuracies in the text conversion. The Duisburg researchers have therefore developed methods that go beyond conventional keyword search and allow content-oriented assignment. "The user then also receives news items that fit the topic even though the keyword itself never appears in them," says Rigoll, summing up the technique's advantage. If, for example, "oil price" is entered as a search term, news items in which oil companies and energy agencies play a role are displayed as well. Rigoll: "The Alert system reads between the lines, so to speak." The research project started a year ago and runs until mid-2002. Anyone who wants to learn about the state of the technology can do so this week at the industrial trade fair in Hanover: the Alert system is presented at the joint stand "Forschungsland NRW" in hall 18, stand M12.
    Source
    Handelsblatt. Nr.79 vom 24.4.2001, S.22
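The article's core matching step is: recognized speech is converted to text, keywords are extracted, and these are compared against a user's registered search terms. A minimal sketch of that comparison step only, assuming a plain keyword watch list (term names are illustrative; Alert's content-oriented matching goes beyond this baseline):

```python
# Watch terms a media-monitoring customer might register (illustrative).
WATCH_TERMS = {"ölpreis", "opec", "energieagentur"}

def alert_hits(transcript: str) -> set[str]:
    # Normalize the automatically transcribed text and intersect with the watch list.
    tokens = {t.strip(".,;:!?\"'").lower() for t in transcript.split()}
    return WATCH_TERMS & tokens

print(alert_hits("Die OPEC berät heute über den Ölpreis."))  # {'opec', 'ölpreis'}
```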
  9. Informationslinguistische Texterschließung (1986) 0.02
    RSWK
    Information Retrieval / Aufsatzsammlung (DNB)
    Automatische Sprachanalyse / Morphologie / Aufsatzsammlung (SBB / GBV)
    Automatische Sprachanalyse / Morphologie <Linguistik> / Aufsatzsammlung (DNB)
    Linguistische Datenverarbeitung / Linguistik / Aufsatzsammlung (SWB)
    Linguistik / Information Retrieval / Aufsatzsammlung (SWB / BVB)
    Linguistische Datenverarbeitung / Textanalyse / Aufsatzsammlung (BVB)
    Subject
    Information Retrieval / Aufsatzsammlung (DNB)
    Automatische Sprachanalyse / Morphologie / Aufsatzsammlung (SBB / GBV)
    Automatische Sprachanalyse / Morphologie <Linguistik> / Aufsatzsammlung (DNB)
    Linguistische Datenverarbeitung / Linguistik / Aufsatzsammlung (SWB)
    Linguistik / Information Retrieval / Aufsatzsammlung (SWB / BVB)
    Linguistische Datenverarbeitung / Textanalyse / Aufsatzsammlung (BVB)
  10. Semantik, Lexikographie und Computeranwendungen : Workshop ... (Bonn) : 1995.01.27-28 (1996) 0.02
    Date
    14. 4.2007 10:04:22
    RSWK
    Computer / Anwendung / Computerunterstützte Lexikographie / Aufsatzsammlung
    Subject
    Computer / Anwendung / Computerunterstützte Lexikographie / Aufsatzsammlung
  11. Roberts, C.W.; Popping, R.: Computer-supported content analysis : some recent developments (1993) 0.02
  12. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.02
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language Systems (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
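The abstract above describes a spelling-suggestion service that ranks vocabulary entries by similarity to a query (ChemSpell). A hedged approximation of the idea using only the Python standard library; the toy vocabulary is illustrative, and AZdict's actual similarity measure is not specified here:

```python
import difflib

# Toy vocabulary standing in for the AZdict word list (illustrative).
VOCABULARY = ["acetaminophen", "acetone", "acetylene", "toxicology", "benzene"]

def suggest(query: str, n: int = 3) -> list[str]:
    # difflib ranks candidates by sequence similarity to the query string.
    return difflib.get_close_matches(query.lower(), VOCABULARY, n=n, cutoff=0.6)

print(suggest("acetaminofen", n=1))  # ['acetaminophen']
```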
  13. Schönbächler, E.; Strasser, T.; Himpsl-Gutermann, K.: Vom Chat zum Check : Informationskompetenz mit ChatGPT steigern (2023) 0.01
    Abstract
    The article takes up the current debate around the AI application ChatGPT and its significance for schools and universities. An overview of various assistance systems based on artificial intelligence works out their fundamentals and differences. The field of chatbots is examined more closely, and the two basic types, the rule-based chatbot and the machine-learning bot, are explained with practical, illustrative examples. Finally, the article argues that information literacy, as a key competence of the 21st century, is also the essential basis for engaging constructively with AI systems such as ChatGPT in education and for understanding their core functional mechanisms. A lesson plan on the topic of "bees" rounds off this practice-oriented contribution.
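The abstract distinguishes rule-based chatbots from machine-learning bots such as ChatGPT. A minimal sketch of the rule-based type (triggers and canned answers are illustrative):

```python
# Keyword-triggered canned answers: the defining pattern of a rule-based chatbot,
# in contrast to a machine-learning bot.
RULES = {
    "opening hours": "We are open Monday to Friday, 9:00-17:00.",
    "library card": "You can apply for a library card at the front desk.",
}

def reply(message: str) -> str:
    for trigger, answer in RULES.items():
        if trigger in message.lower():
            return answer
    return "Sorry, I did not understand your question."

print(reply("What are your opening hours?"))  # matches the first rule
```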
  14. Schröter, F.; Meyer, U.: Entwicklung sprachlicher Handlungskompetenz in Englisch mit Hilfe eines Multimedia-Sprachlernsystems (2000) 0.01
    Abstract
    Companies increasingly operate globally. For the majority of their employees, this creates the need to master English, the lingua franca of worldwide business relations, and to use it effectively, including in its intercultural dimension. "Globalization has made it impossible to operate in the free market without foreign-language skills" (Trends in der Personalentwicklung, PEF-Consulting, Vienna). Achieving intercultural communicative competence in the foreign language is the goal of the language-learning system "Sunpower - Communication Strategies in English for Business Purposes", which was developed at the Department of Languages of the Fachhochschule Köln and came onto the market in the spring of this year. The learning system was created in cooperation between the Department of Languages of the Fachhochschule Köln, an English solar-energy company, a management consulting agency and the languages department of a London university.
  15. Scherer Auberson, K.: Counteracting concept drift in natural language classifiers : proposal for an automated method (2018) 0.01
    Imprint
    Chur : Hochschule für Technik und Wirtschaft / Arbeitsbereich Informationswissenschaft
  16. ISO/DIS 1087-2:1994-09: Terminology work, vocabulary : pt.2: computational aids (1994) 0.01
  17. Multi-source, multilingual information extraction and summarization (2013) 0.01
    RSWK
    Natürlichsprachiges System / Information Extraction / Automatische Inhaltsanalyse / Zusammenfassung / Aufsatzsammlung
    Subject
    Natürlichsprachiges System / Information Extraction / Automatische Inhaltsanalyse / Zusammenfassung / Aufsatzsammlung
  18. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.01
    Abstract
    Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
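A hedged sketch of the co-occurrence idea outlined in the abstract, not the authors' algorithm: tokens linking two known ontology concepts inside a sentence are counted as candidate relation keywords. Concepts and sentences below are illustrative.

```python
from collections import Counter

CONCEPTS = {"plant", "oxygen", "water"}          # concepts already in the ontology
SENTENCES = [
    "the plant produces oxygen during photosynthesis",
    "a plant absorbs water through its roots",
    "the plant again produces oxygen at night",
]

candidates = Counter()
for sentence in SENTENCES:
    tokens = sentence.split()
    hits = [i for i, t in enumerate(tokens) if t in CONCEPTS]
    if len(hits) >= 2:
        # Count every token between the first two concept mentions as a
        # (crude) candidate keyword for a semantic relation.
        candidates.update(tokens[hits[0] + 1 : hits[1]])

print(candidates.most_common(2))  # [('produces', 2), ...]
```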
  19. Information und Sprache : Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen (2006) 0.01
    RSWK
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
    Information Retrieval / Aufsatzsammlung
    Automatische Indexierung / Aufsatzsammlung
    Linguistische Datenverarbeitung / Aufsatzsammlung
    Subject
    Informations- und Dokumentationswissenschaft / Aufsatzsammlung
    Information Retrieval / Aufsatzsammlung
    Automatische Indexierung / Aufsatzsammlung
    Linguistische Datenverarbeitung / Aufsatzsammlung
  20. Semantic role universals and argument linking : theoretical, typological, and psycholinguistic perspectives (2006) 0.01
    RSWK
    Thematische Relation / Aufsatzsammlung (BVB)
    Subject
    Thematische Relation / Aufsatzsammlung (BVB)

Languages

  • e 89
  • d 23
  • chi 1

Types

  • a 87
  • m 13
  • el 12
  • s 8
  • x 4
  • p 3
  • d 2
  • n 1
