Search (715 results, page 1 of 36)

  • theme_ss:"Computerlinguistik"
  1. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.06
    0.064709775 = sum of:
      0.0061848112 = product of:
        0.0556633 = sum of:
          0.0556633 = weight(_text_:p in 5429) [ClassicSimilarity], result of:
            0.0556633 = score(doc=5429,freq=2.0), product of:
              0.116767466 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.03247589 = queryNorm
              0.47670212 = fieldWeight in 5429, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.09375 = fieldNorm(doc=5429)
        0.11111111 = coord(1/9)
      0.058524963 = sum of:
        0.0057245493 = weight(_text_:a in 5429) [ClassicSimilarity], result of:
          0.0057245493 = score(doc=5429,freq=2.0), product of:
            0.037446223 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03247589 = queryNorm
            0.15287387 = fieldWeight in 5429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=5429)
        0.052800413 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
          0.052800413 = score(doc=5429,freq=2.0), product of:
            0.11372503 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03247589 = queryNorm
            0.46428138 = fieldWeight in 5429, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=5429)
    
    Source
    c't. 2000, H.22, S.230-231
    Type
    a
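The indented blocks under each hit are Lucene explain() traces for the ClassicSimilarity (TF-IDF) scoring model: tf is the square root of the term frequency, idf is 1 + ln(maxDocs / (docFreq + 1)), and each term's partial score is coord * queryWeight * fieldWeight. As a minimal sketch (the function below is not part of the search engine shown, just an illustration of the formula), the partial score for the term `_text_:p` in the first hit can be reproduced from the printed factors:

```python
import math

def classic_similarity_term_score(freq, doc_freq, max_docs,
                                  query_norm, field_norm, coord=1.0):
    """Sketch of Lucene ClassicSimilarity, matching the explain() traces."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm                  # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm             # fieldWeight = tf * idf * fieldNorm
    return coord * query_weight * field_weight       # score = coord * qw * fw

# Term _text_:p in hit 1 (doc 5429); explain() reports 0.0061848112
score = classic_similarity_term_score(freq=2.0, doc_freq=3298, max_docs=44218,
                                      query_norm=0.03247589, field_norm=0.09375,
                                      coord=1/9)
print(round(score, 7))  # 0.0061848
```

The same constants recur throughout the listing: queryNorm (0.03247589) is fixed for the whole query, idf depends only on the term's document frequency, and fieldNorm shrinks with document length, which is why identical freq=2.0 matches score differently across hits.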
  2. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.05
    0.053924814 = sum of:
      0.0051540094 = product of:
        0.046386085 = sum of:
          0.046386085 = weight(_text_:p in 5428) [ClassicSimilarity], result of:
            0.046386085 = score(doc=5428,freq=2.0), product of:
              0.116767466 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.03247589 = queryNorm
              0.39725178 = fieldWeight in 5428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.078125 = fieldNorm(doc=5428)
        0.11111111 = coord(1/9)
      0.048770804 = sum of:
        0.0047704573 = weight(_text_:a in 5428) [ClassicSimilarity], result of:
          0.0047704573 = score(doc=5428,freq=2.0), product of:
            0.037446223 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03247589 = queryNorm
            0.12739488 = fieldWeight in 5428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=5428)
        0.044000346 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
          0.044000346 = score(doc=5428,freq=2.0), product of:
            0.11372503 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03247589 = queryNorm
            0.38690117 = fieldWeight in 5428, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=5428)
    
    Source
    c't. 2000, H.22, S.220-229
    Type
    a
  3. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.05
    0.049318198 = sum of:
      0.01719344 = product of:
        0.15474096 = sum of:
          0.15474096 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.15474096 = score(doc=562,freq=2.0), product of:
              0.27533096 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03247589 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.11111111 = coord(1/9)
      0.032124758 = sum of:
        0.0057245493 = weight(_text_:a in 562) [ClassicSimilarity], result of:
          0.0057245493 = score(doc=562,freq=8.0), product of:
            0.037446223 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03247589 = queryNorm
            0.15287387 = fieldWeight in 562, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.026400207 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
          0.026400207 = score(doc=562,freq=2.0), product of:
            0.11372503 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03247589 = queryNorm
            0.23214069 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
    Content
    Vgl.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Type
    a
  4. Basili, R.; Pazienza, M.T.; Velardi, P.: ¬An empirical symbolic approach to natural language processing (1996) 0.05
    0.046956215 = sum of:
      0.004123207 = product of:
        0.037108865 = sum of:
          0.037108865 = weight(_text_:p in 6753) [ClassicSimilarity], result of:
            0.037108865 = score(doc=6753,freq=2.0), product of:
              0.116767466 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.03247589 = queryNorm
              0.31780142 = fieldWeight in 6753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.0625 = fieldNorm(doc=6753)
        0.11111111 = coord(1/9)
      0.042833008 = sum of:
        0.007632732 = weight(_text_:a in 6753) [ClassicSimilarity], result of:
          0.007632732 = score(doc=6753,freq=8.0), product of:
            0.037446223 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03247589 = queryNorm
            0.20383182 = fieldWeight in 6753, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0625 = fieldNorm(doc=6753)
        0.035200275 = weight(_text_:22 in 6753) [ClassicSimilarity], result of:
          0.035200275 = score(doc=6753,freq=2.0), product of:
            0.11372503 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03247589 = queryNorm
            0.30952093 = fieldWeight in 6753, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=6753)
    
    Abstract
    Describes and evaluates the results of a large-scale lexical learning system, ARISTO-LEX, that uses a combination of probabilistic and knowledge-based methods for the acquisition of selectional restrictions of words in sublanguages. Presents experimental data obtained from different corpora in different domains and languages, and shows that the acquired lexical data not only have practical applications in natural language processing, but are also useful for a comparative analysis of sublanguages.
    Date
    6. 3.1997 16:22:15
    Type
    a
  5. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    0.04377181 = sum of:
      0.04057169 = product of:
        0.1825726 = sum of:
          0.15474096 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
            0.15474096 = score(doc=862,freq=2.0), product of:
              0.27533096 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03247589 = queryNorm
              0.56201804 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
          0.02783165 = weight(_text_:p in 862) [ClassicSimilarity], result of:
            0.02783165 = score(doc=862,freq=2.0), product of:
              0.116767466 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.03247589 = queryNorm
              0.23835106 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.22222222 = coord(2/9)
      0.0032001205 = product of:
        0.006400241 = sum of:
          0.006400241 = weight(_text_:a in 862) [ClassicSimilarity], result of:
            0.006400241 = score(doc=862,freq=10.0), product of:
              0.037446223 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03247589 = queryNorm
              0.1709182 = fieldWeight in 862, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.5 = coord(1/2)
    
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
    Type
    p
    a
  6. Schneider, J.W.; Borlund, P.: ¬A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.04
    0.040191922 = sum of:
      0.0036078063 = product of:
        0.032470256 = sum of:
          0.032470256 = weight(_text_:p in 156) [ClassicSimilarity], result of:
            0.032470256 = score(doc=156,freq=2.0), product of:
              0.116767466 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.03247589 = queryNorm
              0.27807623 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
        0.11111111 = coord(1/9)
      0.036584117 = sum of:
        0.0057838727 = weight(_text_:a in 156) [ClassicSimilarity], result of:
          0.0057838727 = score(doc=156,freq=6.0), product of:
            0.037446223 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03247589 = queryNorm
            0.1544581 = fieldWeight in 156, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
        0.030800242 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
          0.030800242 = score(doc=156,freq=2.0), product of:
            0.11372503 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03247589 = queryNorm
            0.2708308 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
    
    Abstract
    The present study investigates the ability of a bibliometric based semi-automatic method to select candidate thesaurus terms from citation contexts. The method consists of document co-citation analysis, citation context analysis, and noun phrase parsing. The investigation is carried out within the specialty area of periodontology. The results clearly demonstrate that the method is able to select important candidate thesaurus terms within the chosen specialty area.
    Date
    8. 3.2007 19:55:22
    Type
    a
  7. Warner, A.J.: Natural language processing (1987) 0.04
    0.03901664 = product of:
      0.07803328 = sum of:
        0.07803328 = sum of:
          0.007632732 = weight(_text_:a in 337) [ClassicSimilarity], result of:
            0.007632732 = score(doc=337,freq=2.0), product of:
              0.037446223 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03247589 = queryNorm
              0.20383182 = fieldWeight in 337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.125 = fieldNorm(doc=337)
          0.07040055 = weight(_text_:22 in 337) [ClassicSimilarity], result of:
            0.07040055 = score(doc=337,freq=2.0), product of:
              0.11372503 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03247589 = queryNorm
              0.61904186 = fieldWeight in 337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=337)
      0.5 = coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
    Type
    a
  8. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.04
    0.036584117 = product of:
      0.07316823 = sum of:
        0.07316823 = sum of:
          0.011567745 = weight(_text_:a in 4506) [ClassicSimilarity], result of:
            0.011567745 = score(doc=4506,freq=6.0), product of:
              0.037446223 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03247589 = queryNorm
              0.3089162 = fieldWeight in 4506, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.109375 = fieldNorm(doc=4506)
          0.061600484 = weight(_text_:22 in 4506) [ClassicSimilarity], result of:
            0.061600484 = score(doc=4506,freq=2.0), product of:
              0.11372503 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03247589 = queryNorm
              0.5416616 = fieldWeight in 4506, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=4506)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 11:52:22
    Source
    Library science with a slant to documentation. 28(1991) no.4, S.125-130
    Type
    a
  9. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.04
    0.035217162 = sum of:
      0.0030924056 = product of:
        0.02783165 = sum of:
          0.02783165 = weight(_text_:p in 1848) [ClassicSimilarity], result of:
            0.02783165 = score(doc=1848,freq=2.0), product of:
              0.116767466 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.03247589 = queryNorm
              0.23835106 = fieldWeight in 1848, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.046875 = fieldNorm(doc=1848)
        0.11111111 = coord(1/9)
      0.032124758 = sum of:
        0.0057245493 = weight(_text_:a in 1848) [ClassicSimilarity], result of:
          0.0057245493 = score(doc=1848,freq=8.0), product of:
            0.037446223 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03247589 = queryNorm
            0.15287387 = fieldWeight in 1848, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=1848)
        0.026400207 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
          0.026400207 = score(doc=1848,freq=2.0), product of:
            0.11372503 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03247589 = queryNorm
            0.23214069 = fieldWeight in 1848, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=1848)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
    Type
    a
  10. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.03
    0.034139562 = product of:
      0.068279125 = sum of:
        0.068279125 = sum of:
          0.0066786404 = weight(_text_:a in 3164) [ClassicSimilarity], result of:
            0.0066786404 = score(doc=3164,freq=2.0), product of:
              0.037446223 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03247589 = queryNorm
              0.17835285 = fieldWeight in 3164, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.109375 = fieldNorm(doc=3164)
          0.061600484 = weight(_text_:22 in 3164) [ClassicSimilarity], result of:
            0.061600484 = score(doc=3164,freq=2.0), product of:
              0.11372503 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03247589 = queryNorm
              0.5416616 = fieldWeight in 3164, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=3164)
      0.5 = coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
    Type
    a
  11. Somers, H.: Example-based machine translation : Review article (1999) 0.03
    0.034139562 = product of:
      0.068279125 = sum of:
        0.068279125 = sum of:
          0.0066786404 = weight(_text_:a in 6672) [ClassicSimilarity], result of:
            0.0066786404 = score(doc=6672,freq=2.0), product of:
              0.037446223 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03247589 = queryNorm
              0.17835285 = fieldWeight in 6672, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.109375 = fieldNorm(doc=6672)
          0.061600484 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
            0.061600484 = score(doc=6672,freq=2.0), product of:
              0.11372503 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03247589 = queryNorm
              0.5416616 = fieldWeight in 6672, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=6672)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
    Type
    a
  12. New tools for human translators (1997) 0.03
    0.034139562 = product of:
      0.068279125 = sum of:
        0.068279125 = sum of:
          0.0066786404 = weight(_text_:a in 1179) [ClassicSimilarity], result of:
            0.0066786404 = score(doc=1179,freq=2.0), product of:
              0.037446223 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03247589 = queryNorm
              0.17835285 = fieldWeight in 1179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.109375 = fieldNorm(doc=1179)
          0.061600484 = weight(_text_:22 in 1179) [ClassicSimilarity], result of:
            0.061600484 = score(doc=1179,freq=2.0), product of:
              0.11372503 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03247589 = queryNorm
              0.5416616 = fieldWeight in 1179, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=1179)
      0.5 = coord(1/2)
    
    Abstract
    A special issue devoted to the theme of new tools for human translators
    Date
    31. 7.1996 9:22:19
  13. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.03
    0.034139562 = product of:
      0.068279125 = sum of:
        0.068279125 = sum of:
          0.0066786404 = weight(_text_:a in 3117) [ClassicSimilarity], result of:
            0.0066786404 = score(doc=3117,freq=2.0), product of:
              0.037446223 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03247589 = queryNorm
              0.17835285 = fieldWeight in 3117, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.109375 = fieldNorm(doc=3117)
          0.061600484 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
            0.061600484 = score(doc=3117,freq=2.0), product of:
              0.11372503 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03247589 = queryNorm
              0.5416616 = fieldWeight in 3117, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=3117)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
    Type
    a
  14. ¬Der Student aus dem Computer (2023) 0.03
    0.034139562 = product of:
      0.068279125 = sum of:
        0.068279125 = sum of:
          0.0066786404 = weight(_text_:a in 1079) [ClassicSimilarity], result of:
            0.0066786404 = score(doc=1079,freq=2.0), product of:
              0.037446223 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03247589 = queryNorm
              0.17835285 = fieldWeight in 1079, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.109375 = fieldNorm(doc=1079)
          0.061600484 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
            0.061600484 = score(doc=1079,freq=2.0), product of:
              0.11372503 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03247589 = queryNorm
              0.5416616 = fieldWeight in 1079, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=1079)
      0.5 = coord(1/2)
    
    Date
    27. 1.2023 16:22:55
    Type
    a
  15. Luo, L.; Ju, J.; Li, Y.-F.; Haffari, G.; Xiong, B.; Pan, S.: ChatRule: mining logical rules with large language models for knowledge graph reasoning (2023) 0.03
    0.029347636 = sum of:
      0.0025770047 = product of:
        0.023193043 = sum of:
          0.023193043 = weight(_text_:p in 1171) [ClassicSimilarity], result of:
            0.023193043 = score(doc=1171,freq=2.0), product of:
              0.116767466 = queryWeight, product of:
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.03247589 = queryNorm
              0.19862589 = fieldWeight in 1171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5955126 = idf(docFreq=3298, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1171)
        0.11111111 = coord(1/9)
      0.02677063 = sum of:
        0.0047704573 = weight(_text_:a in 1171) [ClassicSimilarity], result of:
          0.0047704573 = score(doc=1171,freq=8.0), product of:
            0.037446223 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03247589 = queryNorm
            0.12739488 = fieldWeight in 1171, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1171)
        0.022000173 = weight(_text_:22 in 1171) [ClassicSimilarity], result of:
          0.022000173 = score(doc=1171,freq=2.0), product of:
            0.11372503 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03247589 = queryNorm
            0.19345059 = fieldWeight in 1171, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1171)
    
    Abstract
    Logical rules are essential for uncovering the logical connections between relations, which could improve the reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from the computationally intensive searches over the rule space and a lack of scalability for large-scale KGs. Besides, they often ignore the semantics of relations which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in the field of natural language processing and various applications, owing to their emergent ability and generalizability. In this paper, we propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs. Last, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs, w.r.t. different rule quality metrics and downstream tasks, showing the effectiveness and scalability of our method.
    Date
    23.11.2023 19:07:22
    Type
    p
  16. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.03
    0.029262481 = product of:
      0.058524963 = sum of:
        0.058524963 = sum of:
          0.0057245493 = weight(_text_:a in 4483) [ClassicSimilarity], result of:
            0.0057245493 = score(doc=4483,freq=2.0), product of:
              0.037446223 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.03247589 = queryNorm
              0.15287387 = fieldWeight in 4483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.09375 = fieldNorm(doc=4483)
          0.052800413 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
            0.052800413 = score(doc=4483,freq=2.0), product of:
              0.11372503 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03247589 = queryNorm
              0.46428138 = fieldWeight in 4483, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=4483)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
    Type
    a
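    The ClassicSimilarity breakdown shown for entry 16 above can be re-derived from Lucene's textbook tf-idf formula. A minimal sketch, with the constants copied from the explanation tree (the idf is reproduced up to rounding):

    ```python
    import math

    # Constants from the explanation tree for term "22" in doc 4483.
    max_docs = 44218
    doc_freq = 3622          # docFreq of "22"
    freq = 2.0               # termFreq within the field
    field_norm = 0.09375
    query_norm = 0.03247589

    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # ~3.5018296
    tf = math.sqrt(freq)                              # ~1.4142135
    query_weight = idf * query_norm                   # ~0.11372503
    field_weight = tf * idf * field_norm              # ~0.46428138
    score = query_weight * field_weight               # ~0.052800413

    print(f"{score:.9f}")
    ```

    Note that idf enters the product twice (once in queryWeight, once in fieldWeight), which is why rare terms like "22" dominate the per-entry scores in this result list.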
  17. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.03
    Date
    1. 3.2013 14:56:22
  18. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.03
    Abstract
    Chronicles the early history of applying electronic computers to the task of translating natural languages, from the 1st suggestions by Warren Weaver in Mar 1947 to the 1st demonstration of a working, if limited, program in Jan 1954
    Date
    31. 7.1996 9:22:19
    Type
    a
  19. Lezius, W.; Rapp, R.; Wettler, M.: ¬A morphology-system and part-of-speech tagger for German (1996) 0.03
    Date
    22. 3.2015 9:37:18
    Type
    a
  20. Riloff, E.: ¬An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.02
    Abstract
    AutoSlog is a system that addresses the knowledge engineering bottleneck for information extraction. AutoSlog automatically creates domain specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in terrorism, joint ventures and microelectronics domains. Compares the performance of AutoSlog across the 3 domains, discusses the lessons learned and presents results from 2 experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog
    Date
    6. 3.1997 16:22:15
    Type
    a
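    The abstract's idea, learning extraction patterns from an annotated training corpus, can be illustrated with a toy sketch (not AutoSlog's actual heuristics): target noun phrases are marked with <np>…</np>, and the first non-determiner word to the left of each phrase becomes the trigger of a candidate pattern. The sentences and pattern syntax are invented for illustration.

    ```python
    import re

    corpus = [
        "guerrillas bombed the <np>police station</np> yesterday",
        "rebels bombed the <np>embassy annex</np>",
        "terrorists attacked a <np>power plant</np> on monday",
    ]

    def candidate_patterns(sentences):
        """Count '<trigger> <x>' patterns, where the trigger is the first
        non-determiner token to the left of the annotated noun phrase."""
        determiners = {"the", "a", "an"}
        counts = {}
        for s in sentences:
            m = re.search(r"<np>.+?</np>", s)
            if not m:
                continue
            left = s[:m.start()].split()
            while left and left[-1] in determiners:
                left.pop()                     # skip "the", "a", "an"
            if left:
                pattern = f"{left[-1]} <x>"
                counts[pattern] = counts.get(pattern, 0) + 1
        return counts

    print(candidate_patterns(corpus))
    ```

    On this corpus the sketch yields "bombed <x>" (seen twice) and "attacked <x>" (seen once); a real system would additionally filter candidates by frequency and by the semantics of the extracted slot, which is the part evaluated across the three domains in the paper.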

Types

  • a 629
  • el 76
  • m 45
  • s 23
  • x 9
  • p 7
  • b 1
  • d 1
  • pat 1
  • r 1
