Search (196 results, page 1 of 10)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.104570225 = sum of:
      0.08326228 = product of:
        0.24978682 = sum of:
          0.24978682 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24978682 = score(doc=562,freq=2.0), product of:
              0.44444627 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05242341 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.021307945 = product of:
        0.04261589 = sum of:
          0.04261589 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.04261589 = score(doc=562,freq=2.0), product of:
              0.18357785 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05242341 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
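The score breakdown above is Lucene's ClassicSimilarity "explain" output. A minimal sketch of one term branch, assuming the standard tf-idf form (tf = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm); the function and variable names are illustrative, not Lucene API:

```python
import math

def classic_tfidf_branch(freq, idf, query_norm, field_norm):
    # One term clause of Lucene ClassicSimilarity:
    #   tf          = sqrt(freq)
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm
    #   score       = queryWeight * fieldWeight
    tf = math.sqrt(freq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Numbers from the weight(_text_:3a in 562) branch of result 1.
score = classic_tfidf_branch(freq=2.0, idf=8.478011,
                             query_norm=0.05242341, field_norm=0.046875)
coord = 1 / 3  # coord(1/3): one of three query clauses matched
print(score, score * coord)  # ~0.2497868 and ~0.0832623
```

Multiplying the branch score by the coordination factor reproduces the 0.08326228 contribution shown in the explanation tree.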
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.09
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
  3. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatically generated word hierarchies (1996) 0.07
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  4. Somers, H.: Example-based machine translation : Review article (1999) 0.07
    Date
    31. 7.1996 9:22:19
    Source
    Machine translation. 14(1999) no.2, S.113-157
  5. New tools for human translators (1997) 0.07
    Date
    31. 7.1996 9:22:19
    Source
    Machine translation. 12(1997) nos.1/2, S.1-194
  6. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.06
    Date
    15. 3.2000 10:22:37
    Source
    Journal of information science. 25(1999) no.2, S.113-131
  7. Riloff, E.: ¬An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.05
    Abstract
    AutoSlog is a system that addresses the knowledge engineering bottleneck for information extraction. AutoSlog automatically creates domain specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in terrorism, joint ventures and microelectronics domains. Compares the performance of AutoSlog across the 3 domains, discusses the lessons learned and presents results from 2 experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog
    Date
    6. 3.1997 16:22:15
    Source
    Artificial intelligence. 85(1996) nos.1/2, S.101-134
  8. Basili, R.; Pazienza, M.T.; Velardi, P.: ¬An empirical symbolic approach to natural language processing (1996) 0.04
    Date
    6. 3.1997 16:22:15
    Source
    Artificial intelligence. 85(1996) nos.1/2, S.59-99
  9. Ruge, G.: Sprache und Computer : Wortbedeutung und Termassoziation. Methoden zur automatischen semantischen Klassifikation (1995) 0.04
    Content
    Contains the following chapters: (1) Motivation; (2) Language philosophical foundations; (3) Structural comparison of extensions; (4) Earlier approaches towards term association; (5) Experiments; (6) Spreading-activation networks or memory models; (7) Perspective. Appendices: Heads and modifiers of 'car'. Glossary. Index. Language and computer. Word semantics and term association. Methods towards an automatic semantic classification
    Footnote
    Reviewed in: Knowledge organization 22(1995) no.3/4, S.182-184 (M.T. Rolland)
  10. Morris, V.: Automated language identification of bibliographic resources (2020) 0.04
    Date
    2. 3.2020 19:04:22
  11. Kay, M.: ¬The proper place of men and machines in language translation (1997) 0.04
    Date
    31. 7.1996 9:22:19
    Source
    Machine translation. 12(1997) nos.1/2, S.3-23
  12. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.04
    Date
    22. 9.1997 19:16:05
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  13. Melby, A.: Some notes on 'The proper place of men and machines in language translation' (1997) 0.04
    Date
    31. 7.1996 9:22:19
    Source
    Machine translation. 12(1997) nos.1/2, S.29-34
  14. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.03
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language Systems (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  15. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.03
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
  16. Fóris, A.: Network theory and terminology (2013) 0.03
    Date
    2. 9.2014 19:19:40
    2. 9.2014 21:22:48
  17. Warner, A.J.: Natural language processing (1987) 0.03
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  18. Pinker, S.: Wörter und Regeln : Die Natur der Sprache (2000) 0.03
    Date
    19. 7.2002 14:22:31
    Isbn
    3-8274-0297-2
  19. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    Date
    8.10.2000 11:52:22
  20. Baayen, R.H.; Lieber, R.: Word frequency distributions and lexical semantics (1997) 0.02
    Date
    28. 2.1999 10:48:22

Languages

  • e 123
  • d 60
  • ru 8
  • f 2
  • m 2

Types

  • a 156
  • m 23
  • el 19
  • s 9
  • x 4
  • p 3
  • d 2
  • b 1
  • n 1
  • r 1

Classifications