Search (76 results, page 1 of 4)

  • × theme_ss:"Computerlinguistik"
  • × type_ss:"el"
  1. Rieger, F.: Lügende Computer (2023) 0.09
    0.08535436 = product of:
      0.12803154 = sum of:
        0.0053093014 = weight(_text_:a in 912) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=912,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=912)
        0.12272224 = sum of:
          0.07375186 = weight(_text_:de in 912) [ClassicSimilarity], result of:
            0.07375186 = score(doc=912,freq=2.0), product of:
              0.19416152 = queryWeight, product of:
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.045180224 = queryNorm
              0.37984797 = fieldWeight in 912, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.297489 = idf(docFreq=1634, maxDocs=44218)
                0.0625 = fieldNorm(doc=912)
          0.048970375 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
            0.048970375 = score(doc=912,freq=2.0), product of:
              0.15821345 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045180224 = queryNorm
              0.30952093 = fieldWeight in 912, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=912)
      0.6666667 = coord(2/3)
    
    Date
    16. 3.2023 19:22:55
    Source
    https://steadyhq.com/de/realitatsabzweig/posts/3ed79605-0650-4725-ab35-43f1243b57ee
    Type
    a
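
(The indented tree above, repeated for each hit, is Lucene's "explain" output for the classic TF-IDF similarity. As a reading aid, the sketch below reconstructs this record's score from the quantities shown; it assumes Lucene ClassicSimilarity's standard definitions, tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), and the variable names are ours.)

```python
from math import log, sqrt

# Lucene ClassicSimilarity building blocks (classic TF-IDF).
def tf(freq):
    return sqrt(freq)

def idf(doc_freq, max_docs):
    return 1.0 + log(max_docs / (doc_freq + 1))

QUERY_NORM = 0.045180224   # queryNorm from the tree above
MAX_DOCS = 44218

def term_score(freq, doc_freq, field_norm):
    """One term's contribution: queryWeight * fieldWeight."""
    i = idf(doc_freq, MAX_DOCS)
    query_weight = i * QUERY_NORM              # e.g. 0.05209492 for 'a'
    field_weight = tf(freq) * i * field_norm   # e.g. 0.10191591 for 'a'
    return query_weight * field_weight

# Record 1 (doc 912): '_text_:a', '_text_:de', '_text_:22',
# each with freq=2.0 and fieldNorm=0.0625.
s_a  = term_score(2.0, 37942, 0.0625)   # ~0.0053093
s_de = term_score(2.0, 1634,  0.0625)   # ~0.0737519
s_22 = term_score(2.0, 3622,  0.0625)   # ~0.0489704

# coord(2/3): two of three top-level query clauses matched.
total = (s_a + s_de + s_22) * (2 / 3)
print(total)   # ~0.08535436, displayed rounded as 0.09
```

The same recipe, fed with the docFreq, fieldNorm, and coord values of the respective trees, reproduces the scores of the other hits.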
  2. Kurz, C.: Womit sich Strafverfolger bald befassen müssen : ChatGPT (2023) 0.04
    0.038306497 = product of:
      0.05745974 = sum of:
        0.0053093014 = weight(_text_:a in 203) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=203,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=203)
        0.05215044 = product of:
          0.10430088 = sum of:
            0.10430088 = weight(_text_:de in 203) [ClassicSimilarity], result of:
              0.10430088 = score(doc=203,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.53718615 = fieldWeight in 203, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=203)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    https://netzpolitik.org/2023/chatgpt-womit-sich-strafverfolger-bald-befassen-muessen/?utm_source=pocket-newtab-global-de-DE#!
    Type
    a
  3. Bischoff, M.: Wie eine KI lernt, sich selbst zu erklären (2023) 0.04
    0.038306497 = product of:
      0.05745974 = sum of:
        0.0053093014 = weight(_text_:a in 956) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=956,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 956, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=956)
        0.05215044 = product of:
          0.10430088 = sum of:
            0.10430088 = weight(_text_:de in 956) [ClassicSimilarity], result of:
              0.10430088 = score(doc=956,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.53718615 = fieldWeight in 956, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=956)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    https://www.spektrum.de/news/sprachmodelle-auf-dem-weg-zu-einer-erklaerbaren-ki/2132727#Echobox=1682669561?utm_source=pocket-newtab-global-de-DE
    Type
    a
  4. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.03
    0.029794488 = product of:
      0.04469173 = sum of:
        0.007963953 = weight(_text_:a in 4888) [ClassicSimilarity], result of:
          0.007963953 = score(doc=4888,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.15287387 = fieldWeight in 4888, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=4888)
        0.03672778 = product of:
          0.07345556 = sum of:
            0.07345556 = weight(_text_:22 in 4888) [ClassicSimilarity], result of:
              0.07345556 = score(doc=4888,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.46428138 = fieldWeight in 4888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4888)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    1. 3.2013 14:56:22
  5. Collard, J.; Paiva, V. de; Fong, B.; Subrahmanian, E.: Extracting mathematical concepts from text (2022) 0.03
    0.02843627 = product of:
      0.042654403 = sum of:
        0.010387965 = weight(_text_:a in 668) [ClassicSimilarity], result of:
          0.010387965 = score(doc=668,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.19940455 = fieldWeight in 668, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=668)
        0.032266438 = product of:
          0.064532876 = sum of:
            0.064532876 = weight(_text_:de in 668) [ClassicSimilarity], result of:
              0.064532876 = score(doc=668,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.33236697 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We investigate different systems for extracting mathematical entities from English texts in the mathematical field of category theory as a first step for constructing a mathematical knowledge graph. We consider four different term extractors and compare their results. This small experiment showcases some of the issues with the construction and evaluation of terms extracted from noisy domain text. We also make available two open corpora in research mathematics, in particular in category theory: a small corpus of 755 abstracts from the journal TAC (3188 sentences), and a larger corpus from the nLab community wiki (15,000 sentences).
    Type
    a
  6. Stieler, W.: Anzeichen von Bewusstsein bei ChatGPT und Co.? (2023) 0.02
    0.024608051 = product of:
      0.036912076 = sum of:
        0.0046456386 = weight(_text_:a in 1047) [ClassicSimilarity], result of:
          0.0046456386 = score(doc=1047,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.089176424 = fieldWeight in 1047, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1047)
        0.032266438 = product of:
          0.064532876 = sum of:
            0.064532876 = weight(_text_:de in 1047) [ClassicSimilarity], result of:
              0.064532876 = score(doc=1047,freq=2.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.33236697 = fieldWeight in 1047, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1047)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    An interdisciplinary research team has compiled a list of properties that point to consciousness and checked current AI systems against it. The team has published a paper [https://arxiv.org/abs/2308.08708] containing a list of 14 "indicators" of consciousness, drawn from six current theories of consciousness. Current AI models such as GPT-3, PaLM-E or DeepMind's AdA exhibit some of these indicators. "There is much to suggest that most or all of the conditions for consciousness proposed by current theories can be met with existing AI techniques," the authors write. The team also included deep-learning pioneer Yoshua Bengio of the Université de Montréal.
    Type
    a
  7. Scobel, G.: GPT: Eine Software, die die Welt verändert (2023) 0.02
    0.02172935 = product of:
      0.06518805 = sum of:
        0.06518805 = product of:
          0.1303761 = sum of:
            0.1303761 = weight(_text_:de in 839) [ClassicSimilarity], result of:
              0.1303761 = score(doc=839,freq=4.0), product of:
                0.19416152 = queryWeight, product of:
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.045180224 = queryNorm
                0.6714827 = fieldWeight in 839, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.297489 = idf(docFreq=1634, maxDocs=44218)
                  0.078125 = fieldNorm(doc=839)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    https://www.zdf.de/nachrichten/panorama/gpt-ki-literatur-terrax-gert-scobel-kolumne-100.html?utm_source=pocket-newtab-global-de-DE
  8. Bager, J.: ¬Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.02
    0.019862993 = product of:
      0.029794488 = sum of:
        0.0053093014 = weight(_text_:a in 835) [ClassicSimilarity], result of:
          0.0053093014 = score(doc=835,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.10191591 = fieldWeight in 835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=835)
        0.024485188 = product of:
          0.048970375 = sum of:
            0.048970375 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
              0.048970375 = score(doc=835,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.30952093 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    29.12.2022 18:22:55
    Type
    a
  9. Rötzer, F.: KI-Programm besser als Menschen im Verständnis natürlicher Sprache (2018) 0.01
    0.009931496 = product of:
      0.014897244 = sum of:
        0.0026546507 = weight(_text_:a in 4217) [ClassicSimilarity], result of:
          0.0026546507 = score(doc=4217,freq=2.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.050957955 = fieldWeight in 4217, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4217)
        0.012242594 = product of:
          0.024485188 = sum of:
            0.024485188 = weight(_text_:22 in 4217) [ClassicSimilarity], result of:
              0.024485188 = score(doc=4217,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.15476047 = fieldWeight in 4217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4217)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 1.2018 11:32:44
    Type
    a
  10. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.01
    0.008161729 = product of:
      0.024485188 = sum of:
        0.024485188 = product of:
          0.048970375 = sum of:
            0.048970375 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.048970375 = score(doc=1490,freq=2.0), product of:
                0.15821345 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045180224 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2015 9:30:24
  11. Wordhoard (n.d.) 0.00
    0.0040970687 = product of:
      0.012291206 = sum of:
        0.012291206 = weight(_text_:a in 3922) [ClassicSimilarity], result of:
          0.012291206 = score(doc=3922,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.23593865 = fieldWeight in 3922, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3922)
      0.33333334 = coord(1/3)
    
    Abstract
    WordHoard defines a multiword unit as a special type of collocate in which the component words comprise a meaningful phrase. For example, "Knight of the Round Table" is a meaningful multiword unit or phrase. WordHoard uses the notion of a pseudo-bigram to generalize the computation of bigram (two word) statistical measures to phrases (n-grams) longer than two words, and to allow comparisons of these measures for phrases with different word counts. WordHoard applies the localmaxs algorithm of Silva et al. to the pseudo-bigrams to identify potential compositional phrases that "stand out" in a text. WordHoard can also filter two and three word phrases using the word class filters suggested by Justeson and Katz.
    Type
    a
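
(WordHoard's own implementation is not shown in this record, but the pseudo-bigram idea from the abstract can be sketched: score each n-gram with a bigram-style association measure averaged over its binary split points, then keep phrases whose "glue" is a local maximum relative to contained and containing phrases, per the localmaxs criterion of Silva et al. The SCP-style glue, the toy text, and all names below are our assumptions, not WordHoard internals.)

```python
from collections import Counter
from itertools import islice

def ngrams(tokens, n):
    return zip(*(islice(tokens, i, None) for i in range(n)))

def build_counts(tokens, max_n=4):
    counts = {n: Counter(ngrams(tokens, n)) for n in range(1, max_n + 1)}
    return counts, len(tokens)

def prob(gram, counts, total):
    # Relative frequency over token count: a rough estimate for a sketch.
    return counts[len(gram)][gram] / total

def glue(gram, counts, total):
    """SCP-style pseudo-bigram glue: p(gram)^2 over the average
    product of probabilities across all binary split points."""
    n = len(gram)
    if n < 2:
        return 0.0
    splits = [prob(gram[:i], counts, total) * prob(gram[i:], counts, total)
              for i in range(1, n)]
    avg = sum(splits) / len(splits)
    return prob(gram, counts, total) ** 2 / avg if avg else 0.0

def localmaxs(counts, total, max_n=4):
    """Keep n-grams whose glue is >= every contained (n-1)-gram's
    and > every extending (n+1)-gram's (localmaxs, Silva et al.)."""
    kept = []
    for n in range(2, max_n):
        for gram in counts[n]:
            g = glue(gram, counts, total)
            inner = [glue(gram[:-1], counts, total), glue(gram[1:], counts, total)]
            outer = [glue(big, counts, total) for big in counts[n + 1]
                     if big[:-1] == gram or big[1:] == gram]
            if all(g >= x for x in inner) and all(g > x for x in outer):
                kept.append((gram, g))
    return sorted(kept, key=lambda kv: -kv[1])

tokens = "knight of the round table , the knight of the round table rode".split()
counts, total = build_counts(tokens)
for gram, g in localmaxs(counts, total)[:5]:
    print(" ".join(gram), round(g, 4))
```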
  12. WordHoard: finding multiword units (20??) 0.00
    0.0040970687 = product of:
      0.012291206 = sum of:
        0.012291206 = weight(_text_:a in 1123) [ClassicSimilarity], result of:
          0.012291206 = score(doc=1123,freq=14.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.23593865 = fieldWeight in 1123, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1123)
      0.33333334 = coord(1/3)
    
    Abstract
    WordHoard defines a multiword unit as a special type of collocate in which the component words comprise a meaningful phrase. For example, "Knight of the Round Table" is a meaningful multiword unit or phrase. WordHoard uses the notion of a pseudo-bigram to generalize the computation of bigram (two word) statistical measures to phrases (n-grams) longer than two words, and to allow comparisons of these measures for phrases with different word counts. WordHoard applies the localmaxs algorithm of Silva et al. to the pseudo-bigrams to identify potential compositional phrases that "stand out" in a text. WordHoard can also filter two and three word phrases using the word class filters suggested by Justeson and Katz.
    Type
    a
  13. Sebastiani, F.: ¬A tutorial on automated text categorisation (1999) 0.00
    0.0039819763 = product of:
      0.011945928 = sum of:
        0.011945928 = weight(_text_:a in 3390) [ClassicSimilarity], result of:
          0.011945928 = score(doc=3390,freq=18.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.22931081 = fieldWeight in 3390, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3390)
      0.33333334 = coord(1/3)
    
    Abstract
    The automated categorisation (or classification) of texts into topical categories has a long history, dating back at least to 1960. Until the late '80s, the dominant approach to the problem involved knowledge-engineering automatic categorisers, i.e. manually building a set of rules encoding expert knowledge on how to classify documents. In the '90s, with the booming production and availability of on-line documents, automated text categorisation witnessed increased and renewed interest. A newer paradigm based on machine learning has superseded the previous approach. Within this paradigm, a general inductive process automatically builds a classifier by "learning", from a set of previously classified documents, the characteristics of one or more categories; the advantages are very good effectiveness, considerable savings in terms of expert manpower, and domain independence. In this tutorial we look at the main approaches that have been taken towards automatic text categorisation within the general machine learning paradigm. Issues of document indexing, classifier construction, and classifier evaluation will be touched upon.
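
(Sebastiani surveys the machine-learning paradigm rather than any single method, but the inductive process the abstract describes — building a classifier from previously classified documents — can be made concrete with a small sketch. Multinomial Naive Bayes with add-one smoothing is only one instance of that paradigm, and the toy training data is ours.)

```python
from collections import Counter, defaultdict
from math import log

def tokenize(text):
    return text.lower().split()

def train_nb(docs):
    """Learn a multinomial Naive Bayes model from (text, category) pairs."""
    cat_docs = Counter()
    cat_words = defaultdict(Counter)
    vocab = set()
    for text, cat in docs:
        cat_docs[cat] += 1
        for w in tokenize(text):
            cat_words[cat][w] += 1
            vocab.add(w)
    return cat_docs, cat_words, vocab

def classify(text, cat_docs, cat_words, vocab):
    n_docs = sum(cat_docs.values())
    best, best_lp = None, float("-inf")
    for cat, n in cat_docs.items():
        lp = log(n / n_docs)                     # log prior
        total = sum(cat_words[cat].values())
        for w in tokenize(text):
            # Laplace-smoothed log likelihood of each word given the category.
            lp += log((cat_words[cat][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = cat, lp
    return best

training = [
    ("the striker scored a late goal", "sports"),
    ("the team won the championship match", "sports"),
    ("parliament passed the new budget law", "politics"),
    ("the minister announced a tax reform", "politics"),
]
model = train_nb(training)
print(classify("a goal in the final match", *model))   # -> sports
```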
  14. Schmid, H.: Improvements in Part-of-Speech tagging with an application to German (1995) 0.00
    0.00395732 = product of:
      0.01187196 = sum of:
        0.01187196 = weight(_text_:a in 124) [ClassicSimilarity], result of:
          0.01187196 = score(doc=124,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.22789092 = fieldWeight in 124, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=124)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a couple of extensions to a basic Markov Model tagger (called TreeTagger) which improve its accuracy when trained on small corpora. The basic tagger was originally developed for English (Schmid, 1994). The extensions together reduced error rates on a German test corpus by more than a third.
    Type
    a
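
(TreeTagger's distinguishing contribution — estimating transition probabilities with a binary decision tree — is more than a short example can carry, but the "basic Markov Model tagger" it extends can be sketched. The bigram HMM with Viterbi decoding below uses add-one smoothing and a toy corpus, both our simplifications.)

```python
from collections import Counter
from math import log

def train_hmm(tagged_sents):
    """Bigram HMM: count transitions P(tag|prev) and emissions P(word|tag)."""
    trans, emit, tags = Counter(), Counter(), Counter()
    for sent in tagged_sents:
        prev = "<s>"
        for word, tag in sent:
            trans[(prev, tag)] += 1
            emit[(tag, word.lower())] += 1
            tags[tag] += 1
            prev = tag
        tags["<s>"] += 1   # one sentence-start context per sentence
    return trans, emit, tags

def viterbi(words, trans, emit, tags):
    tagset = [t for t in tags if t != "<s>"]
    vocab = len({w for (_, w) in emit})

    def lp(counter, key, context_total, size):
        return log((counter[key] + 1) / (context_total + size))  # add-one

    # chart[i][tag] = (best log prob of a path ending in tag, backpointer)
    chart = [{t: (lp(trans, ("<s>", t), tags["<s>"], len(tagset))
                  + lp(emit, (t, words[0].lower()), tags[t], vocab), None)
              for t in tagset}]
    for i, w in enumerate(words[1:], 1):
        col = {}
        for t in tagset:
            scores = {p: chart[i - 1][p][0]
                         + lp(trans, (p, t), tags[p], len(tagset))
                      for p in tagset}
            best = max(scores, key=scores.get)
            col[t] = (scores[best] + lp(emit, (t, w.lower()), tags[t], vocab), best)
        chart.append(col)

    last = max(chart[-1], key=lambda t: chart[-1][t][0])
    path = [last]
    for i in range(len(words) - 1, 0, -1):
        path.append(chart[i][path[-1]][1])
    return list(zip(words, reversed(path)))

corpus = [[("the", "DET"), ("dog", "N"), ("barks", "V")],
          [("a", "DET"), ("cat", "N"), ("sleeps", "V")]]
model = train_hmm(corpus)
print(viterbi(["the", "cat", "barks"], *model))   # -> DET, N, V
```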
  15. Kiela, D.; Clark, S.: Detecting compositionality of multi-word expressions using nearest neighbours in vector space models (2013) 0.00
    0.00395732 = product of:
      0.01187196 = sum of:
        0.01187196 = weight(_text_:a in 1161) [ClassicSimilarity], result of:
          0.01187196 = score(doc=1161,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.22789092 = fieldWeight in 1161, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=1161)
      0.33333334 = coord(1/3)
    
    Abstract
    We present a novel unsupervised approach to detecting the compositionality of multi-word expressions. We compute the compositionality of a phrase through substituting the constituent words with their "neighbours" in a semantic vector space and averaging over the distance between the original phrase and the substituted neighbour phrases. Several methods of obtaining neighbours are presented. The results are compared to existing supervised results and achieve state-of-the-art performance on a verb-object dataset of human compositionality ratings.
    Type
    a
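
(The abstract above states the procedure precisely enough to sketch: replace each constituent with its nearest neighbours in a vector space, then average the similarity between the original phrase vector and the substituted phrase vectors. The toy vectors and the additive composition below are our assumptions; Kiela and Clark evaluate several composition and neighbour settings.)

```python
import numpy as np

# Toy word vectors; real work would use distributional vectors.
vecs = {
    "red":     np.array([0.90, 0.10, 0.00]),
    "crimson": np.array([0.85, 0.15, 0.05]),
    "car":     np.array([0.10, 0.90, 0.20]),
    "auto":    np.array([0.12, 0.88, 0.25]),
    "herring": np.array([0.20, 0.10, 0.90]),
    "fish":    np.array([0.25, 0.05, 0.85]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def neighbours(word, k=1):
    ranked = sorted((w for w in vecs if w != word),
                    key=lambda w: cos(vecs[word], vecs[w]), reverse=True)
    return ranked[:k]

def phrase_vec(words):
    return sum(vecs[w] for w in words)   # additive composition (one option)

def compositionality(phrase, k=1):
    """Average similarity between the phrase vector and vectors of
    phrases with one constituent replaced by a nearest neighbour."""
    original = phrase_vec(phrase)
    sims = []
    for i, w in enumerate(phrase):
        for nb in neighbours(w, k):
            substituted = list(phrase)
            substituted[i] = nb
            sims.append(cos(original, phrase_vec(substituted)))
    return sum(sims) / len(sims)

# With real distributional vectors, idiomatic phrases such as
# "red herring" score lower than compositional ones like "red car".
print(compositionality(["red", "car"]))
print(compositionality(["red", "herring"]))
```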
  16. ChatGPT : Optimizing language models for dialogue (2022) 0.00
    0.00395732 = product of:
      0.01187196 = sum of:
        0.01187196 = weight(_text_:a in 836) [ClassicSimilarity], result of:
          0.01187196 = score(doc=836,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.22789092 = fieldWeight in 836, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=836)
      0.33333334 = coord(1/3)
    
    Abstract
    We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
  17. Lund, B.D.: ¬A chat with ChatGPT : how will AI impact scholarly publishing? (2022) 0.00
    0.00395732 = product of:
      0.01187196 = sum of:
        0.01187196 = weight(_text_:a in 850) [ClassicSimilarity], result of:
          0.01187196 = score(doc=850,freq=10.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.22789092 = fieldWeight in 850, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=850)
      0.33333334 = coord(1/3)
    
    Abstract
    This is a short project that serves as an inspiration for a forthcoming paper, which will explore the technical side of ChatGPT and the ethical issues it presents for academic researchers, and which will result in a peer-reviewed publication. It demonstrates that ChatGPT's capacities as a "chatbot" are far more advanced than those of many alternatives available today, and that it may even be able to draft entire academic manuscripts for researchers. ChatGPT is available via https://chat.openai.com/chat.
  18. Dunning, T.: Statistical identification of language (1994) 0.00
    0.003754243 = product of:
      0.011262729 = sum of:
        0.011262729 = weight(_text_:a in 3627) [ClassicSimilarity], result of:
          0.011262729 = score(doc=3627,freq=16.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.2161963 = fieldWeight in 3627, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3627)
      0.33333334 = coord(1/3)
    
    Abstract
    A statistically based program has been written which learns to distinguish between languages. The amount of training text that such a program needs is surprisingly small, and the amount of text needed to make an identification is also quite small. The program incorporates no linguistic presuppositions other than the assumption that text can be encoded as a string of bytes. Such a program can be used to determine which language small bits of text are in. It also shows a potential for what might be called 'statistical philology' in that it may be applied directly to phonetic transcriptions to help elucidate family trees among language dialects. A variant of this program has been shown to be useful as a quality control in biochemistry. In this application, genetic sequences are assumed to be expressions in a language peculiar to the organism from which the sequence is taken. Thus language identification becomes species identification.
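
(Dunning's program learns byte-level statistics from small training samples; his approach rests on Markov models of byte sequences and likelihood-ratio reasoning. The sketch below keeps the bytes-only assumption but substitutes a simpler smoothed byte-trigram scorer; the two-language toy samples are ours.)

```python
from collections import Counter
from math import log

def byte_ngrams(text, n=3):
    data = text.encode("utf-8")
    return [data[i:i + n] for i in range(len(data) - n + 1)]

def train(samples):
    """samples: {language: training text}. Returns per-language trigram counts."""
    return {lang: Counter(byte_ngrams(text)) for lang, text in samples.items()}

def identify(text, models):
    best, best_lp = None, float("-inf")
    for lang, counts in models.items():
        total = sum(counts.values())
        size = len(counts) + 1
        # Add-one smoothed log likelihood of the text's byte trigrams.
        lp = sum(log((counts[g] + 1) / (total + size))
                 for g in byte_ngrams(text))
        if lp > best_lp:
            best, best_lp = lang, lp
    return best

samples = {
    "en": "the quick brown fox jumps over the lazy dog and the cat",
    "de": "der schnelle braune fuchs springt ueber den faulen hund",
}
models = train(samples)
print(identify("the dog and the fox", models))    # -> en
print(identify("der hund und der fuchs", models)) # -> de
```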
  19. Biselli, A.: Unter Generalverdacht durch Algorithmen (2014) 0.00
    0.003754243 = product of:
      0.011262729 = sum of:
        0.011262729 = weight(_text_:a in 809) [ClassicSimilarity], result of:
          0.011262729 = score(doc=809,freq=4.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.2161963 = fieldWeight in 809, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=809)
      0.33333334 = coord(1/3)
    
    Type
    a
  20. Hausser, R.: Grammatical disambiguation : the linear complexity hypothesis for natural language (2020) 0.00
    0.003754243 = product of:
      0.011262729 = sum of:
        0.011262729 = weight(_text_:a in 22) [ClassicSimilarity], result of:
          0.011262729 = score(doc=22,freq=16.0), product of:
            0.05209492 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.045180224 = queryNorm
            0.2161963 = fieldWeight in 22, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=22)
      0.33333334 = coord(1/3)
    
    Abstract
    DBS uses a strictly time-linear derivation order. Therefore the basic computational complexity degree of DBS is linear time. The only way to increase DBS complexity above linear is repeating ambiguity. In natural language, however, repeating ambiguity is prevented by grammatical disambiguation. A classic example of a grammatical ambiguity is the 'garden path' sentence The horse raced by the barn fell. The continuation horse+raced introduces an ambiguity between horse which raced and horse which was raced, leading to two parallel derivation strands up to The horse raced by the barn. Depending on whether the continuation is punctuation or a verb, the strands are grammatically disambiguated, resulting in unambiguous output. A repeated ambiguity occurs in The man who loves the woman who feeds Lucy who Peter loves., with who serving as subject or as object. These readings are grammatically disambiguated by continuing after who with a verb or a noun.
    Type
    a

Languages

  • e 45
  • d 29
  • el 1

Types

  • a 56
  • p 5
  • x 1