Search (538 results, page 1 of 27)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
    Source
    Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 1-4 November 2004, Brighton, UK
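    Code sketch
    A minimal sketch of the approach the abstract describes: extending a bag-of-words representation with concept tokens drawn from background knowledge, then boosting weak learners over the combined features. The concept lookup is hypothetical, and scikit-learn's AdaBoost stands in for the paper's actual boosting setup.
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.feature_extraction.text import CountVectorizer

      # Hypothetical background-knowledge lookup: term -> concept label.
      CONCEPTS = {"striker": "c_sport", "goal": "c_sport",
                  "senate": "c_politics", "ballot": "c_politics"}

      def add_concepts(doc):
          # Append a concept token for every term covered by the lookup.
          terms = doc.lower().split()
          return " ".join(terms + [CONCEPTS[t] for t in terms if t in CONCEPTS])

      docs = ["the striker scored a late goal", "the senate passed the ballot reform"]
      labels = ["sports", "politics"]

      vec = CountVectorizer()
      X = vec.fit_transform([add_concepts(d) for d in docs])    # term + concept features
      clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)  # boosted weak learners
      print(clf.predict(vec.transform([add_concepts("a goal in the final minute")])))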
  2. Lu, K.; Cai, X.; Ajiferuke, I.; Wolfram, D.: Vocabulary size and its effect on topic representation (2017) 0.08
    Abstract
    This study investigates how computational overhead for topic model training may be reduced by selectively removing terms from the vocabulary of text corpora being modeled. We compare the impact of removing singly occurring terms, the top 0.5%, 1% and 5% most frequently occurring terms and both top 0.5% most frequent and singly occurring terms, along with changes in the number of topics modeled (10, 20, 30, 40, 50, 100) using three datasets. Four outcome measures are compared. The removal of singly occurring terms has little impact on outcomes for all of the measures tested. Document discriminative capacity, as measured by the document space density, is reduced by the removal of frequently occurring terms, but increases with higher numbers of topics. Vocabulary size does not greatly influence entropy, but entropy is affected by the number of topics. Finally, topic similarity, as measured by pairwise topic similarity and Jensen-Shannon divergence, decreases with the removal of frequent terms. The findings have implications for information science research in information retrieval and informetrics that makes use of topic modeling.
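    Code sketch
    The pruning conditions compared in the study reduce to simple frequency bookkeeping over the corpus vocabulary; a minimal sketch in plain Python (the tokenized toy corpus is illustrative, and the actual topic modeling is left to any LDA implementation).
      from collections import Counter

      def prune_vocabulary(docs, drop_singletons=True, drop_top_fraction=0.005):
          # Drop the top N% most frequent terms and, optionally, singly occurring terms.
          freq = Counter(term for doc in docs for term in doc)
          ranked = [t for t, _ in freq.most_common()]      # most frequent first
          dropped = set(ranked[:int(len(ranked) * drop_top_fraction)])
          if drop_singletons:
              dropped |= {t for t, n in freq.items() if n == 1}
          return [[t for t in doc if t not in dropped] for doc in docs]

      corpus = [["topic", "model", "training", "corpus"],
                ["vocabulary", "size", "affects", "topic", "entropy"],
                ["corpus", "terms", "topic"]]
      print(prune_vocabulary(corpus))    # singletons such as "entropy" are removed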
  3. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.06
    Abstract
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges- summary and question answering- prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
    https://arxiv.org/abs/2212.06721
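    Code sketch
    A sketch of the scoring step the abstract describes, assuming the Hugging Face transformers library and the public 2019 GPT-2 output detector checkpoint; the model name and its "Real"/"Fake" label set are assumptions, not details taken from this record.
      from transformers import pipeline

      # Assumed checkpoint: the RoBERTa-based GPT-2 output detector on the HF hub.
      detector = pipeline("text-classification",
                          model="openai-community/roberta-base-openai-detector")

      for text in ["Turing's original 1950 prose ...",        # human-written sample
                   "A model-generated summary ..."]:          # machine-generated sample
          result = detector(text)[0]
          print(f"{result['label']:>4}  {result['score']:.3f}  {text[:40]}")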
  4. Warner, A.J.: Natural language processing (1987) 0.05
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  5. Ruge, G.: A spreading activation network for automatic generation of thesaurus relationships (1991) 0.04
    Date
    8.10.2000 11:52:22
  6. New tools for human translators (1997) 0.04
    Abstract
    A special issue devoted to the theme of new tools for human translators
    Date
    31. 7.1996 9:22:19
  7. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.04
    Abstract
    Relation between meaning, lexical productivity and frequency of use
    Date
    28. 2.1999 10:48:22
  8. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.04
    Date
    15. 3.2000 10:22:37
    Source
    Journal of information science. 25(1999) no.2, S.113-131
  9. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.04
    Abstract
    Chronicles the early history of applying electronic computers to the task of translating natural languages, from the 1st suggestions by Warren Weaver in Mar 1947 to the 1st demonstration of a working, if limited, program in Jan 1954
    Date
    31. 7.1996 9:22:19
  10. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.04
    Date
    1. 3.2013 14:56:22
  11. Morris, V.: Automated language identification of bibliographic resources (2020) 0.04
    Abstract
    This article describes experiments in the use of machine learning techniques at the British Library to assign language codes to catalog records, in order to provide information about the language of content of the resources described. In the first phase of the project, language codes were assigned to 1.15 million records with 99.7% confidence. The automated language identification tools developed will be used to contribute to future enhancement of over 4 million legacy records.
    Date
    2. 3.2020 19:04:22
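    Code sketch
    The record does not name the classifier used at the British Library; as an illustration of confidence-thresholded language-code assignment, here is a sketch using the langdetect package (its ISO 639-1 codes would still need mapping to the codes used in catalog records).
      from langdetect import DetectorFactory, detect_langs  # pip install langdetect

      DetectorFactory.seed = 0    # make detection deterministic

      def assign_language_code(text, threshold=0.997):
          # Return a code only when confidence clears the bar (cf. the 99.7% above).
          best = detect_langs(text)[0]      # candidates sorted by probability
          return best.lang if best.prob >= threshold else None

      for title in ["Die Katalogisierung mehrsprachiger Bibliotheksbestände",
                    "A short history of early modern printing"]:
          print(assign_language_code(title), "-", title)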
  12. Lezius, W.; Rapp, R.; Wettler, M.: A morphology-system and part-of-speech tagger for German (1996) 0.03
    Date
    22. 3.2015 9:37:18
    Source
    Natural language processing and speech technology: Results of the 3rd KONVENS Conference, Bielefeld, October 1996. Ed.: D. Gibbon
  13. Basili, R.; Pazienza, M.T.; Velardi, P.: An empirical symbolic approach to natural language processing (1996) 0.03
    Abstract
    Describes and evaluates the results of a large scale lexical learning system, ARISTO-LEX, that uses a combination of probabilistic and knowledge-based methods for the acquisition of selectional restrictions of words in sublanguages. Presents experimental data obtained from different corpora in different domains and languages, and shows that the acquired lexical data not only have practical applications in natural language processing but are also useful for a comparative analysis of sublanguages
    Date
    6. 3.1997 16:22:15
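    Code sketch
    ARISTO-LEX itself is not documented in this record; as a toy illustration of the combination the abstract mentions, corpus-derived (verb, object) counts can be projected onto a hand-built semantic-class lexicon (the knowledge-based part) to estimate class probabilities per verb (the probabilistic part). All data below are invented.
      from collections import Counter, defaultdict

      CLASS_OF = {"bread": "FOOD", "soup": "FOOD",          # knowledge-based lexicon
                  "letter": "DOCUMENT", "report": "DOCUMENT"}

      pairs = [("eat", "bread"), ("eat", "soup"), ("eat", "bread"),   # from a parsed corpus
               ("write", "letter"), ("write", "report")]

      counts = defaultdict(Counter)
      for verb, noun in pairs:
          if noun in CLASS_OF:
              counts[verb][CLASS_OF[noun]] += 1

      for verb, classes in counts.items():
          total = sum(classes.values())
          # e.g. eat {'FOOD': 1.0} -- a selectional restriction on eat's object
          print(verb, {c: round(n / total, 2) for c, n in classes.items()})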
  14. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.03
    Abstract
    State of the art review of natural language processing updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly
    Source
    Annual review of information science and technology. 31(1996), S.83-119
  15. Liddy, E.D.: Natural language processing for information retrieval and knowledge discovery (1998) 0.03
    Abstract
    Natural language processing (NLP) is a powerful technology for the vital tasks of information retrieval (IR) and knowledge discovery (KD) which, in turn, feed the visualization systems of the present and future and enable knowledge workers to focus more of their time on the vital tasks of analysis and prediction
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  16. Kay, M.: The proper place of men and machines in language translation (1997) 0.03
    Abstract
    Machine translation stands no chance of filling actual needs for translation because, although there has been progress in relevant areas of computer science, advances in linguistics have not touched the core problems. Cooperative man-machine systems need to be developed. Proposes a translator's amanuensis, incorporating into a word processor some simple facilities peculiar to translation. Gradual enhancements of such a system could lead to the original goal of machine translation
    Content
    Reprint of a Xerox PARC Working Paper which appeared in 1980
    Date
    31. 7.1996 9:22:19
    Footnote
    Contribution to a special issue devoted to the theme of new tools for human translators
  17. Hammwöhner, R.: TransRouter revisited : Decision support in the routing of translation projects (2000) 0.03
    Abstract
    This paper gives an outline of the final results of the TransRouter project. In the scope of this project a decision support system for translation managers has been developed, which will support the selection of appropriate routes for translation projects. In this paper emphasis is put on the decision model, which is based on a stepwise refined assessment of translation routes. The workflow of using this system is considered as well
    Date
    10.12.2000 18:22:35
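    Code sketch
    The abstract only outlines the decision model; one way to read a "stepwise refined assessment of translation routes" is ranking candidate routes by a weighted sum of criterion scores that can be re-run as estimates are refined. Criteria, weights, and scores below are entirely hypothetical.
      WEIGHTS = {"cost": -0.4, "turnaround": -0.3, "expected_quality": 0.3}

      routes = {
          "machine translation + post-editing": {"cost": 0.2, "turnaround": 0.1, "expected_quality": 0.6},
          "human translation, in-house":        {"cost": 0.8, "turnaround": 0.7, "expected_quality": 0.9},
          "translation memory + revision":      {"cost": 0.4, "turnaround": 0.3, "expected_quality": 0.8},
      }

      def assess(scores):
          # Weighted sum over normalized criterion scores; higher is better.
          return sum(WEIGHTS[c] * v for c, v in scores.items())

      for name, scores in sorted(routes.items(), key=lambda r: assess(r[1]), reverse=True):
          print(f"{assess(scores):+.2f}  {name}")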
  18. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.03
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
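    Code sketch
    AZdict and ChemSpell are not specified beyond the description above; the core mechanism, ranking dictionary entries by string similarity to a misspelled query, can be sketched with the Python standard library (the vocabulary below is a tiny invented stand-in).
      from difflib import get_close_matches

      VOCABULARY = ["acetaminophen", "ibuprofen", "toxicology",
                    "benzene", "formaldehyde", "naphthalene"]

      def suggest(query, n=3):
          # Spelling suggestions for misspelled or variant search terms.
          return get_close_matches(query.lower(), VOCABULARY, n=n, cutoff=0.6)

      print(suggest("acetominophen"))   # -> ['acetaminophen']
      print(suggest("benzine"))         # -> ['benzene']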
  19. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.03
    Abstract
    The present study investigates the ability of a bibliometric based semi-automatic method to select candidate thesaurus terms from citation contexts. The method consists of document co-citation analysis, citation context analysis, and noun phrase parsing. The investigation is carried out within the specialty area of periodontology. The results clearly demonstrate that the method is able to select important candidate thesaurus terms within the chosen specialty area.
    Date
    8. 3.2007 19:55:22
    Source
    Context: nature, impact and role. 5th International Conference an Conceptions of Library and Information Sciences, CoLIS 2005 Glasgow, UK, June 2005. Ed. by F. Crestani u. I. Ruthven
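    Code sketch
    The study's parsing and filtering pipeline is not reproduced in this record; extracting candidate noun phrases from citation contexts and keeping those that recur can be sketched with spaCy (assuming the en_core_web_sm model is installed; the periodontology snippets are invented).
      from collections import Counter
      import spacy

      nlp = spacy.load("en_core_web_sm")

      citation_contexts = [
          "Guided tissue regeneration improves clinical attachment levels.",
          "Clinical attachment levels respond well to guided tissue regeneration.",
      ]

      phrases = Counter()
      for context in citation_contexts:
          for np in nlp(context).noun_chunks:       # parsed noun phrases
              phrases[np.lemma_.lower()] += 1

      # Candidate thesaurus terms: noun phrases recurring across contexts.
      print([p for p, n in phrases.most_common() if n > 1])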
  20. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.03
    Abstract
    Presents the state of the art in lexical choice research in text generation and machine translation. Discusses the existing implementations with respect to: the place of lexical choice in the overall generation rates; the information flow within the generation process and the consequences thereof for lexical choice; the internal organization of the lexical choice process; and the phenomena covered by lexical choice. Identifies possible future directions in lexical choice research
    Date
    31. 7.1996 9:22:19

Types

  • a 457
  • el 56
  • m 38
  • s 20
  • x 12
  • p 7
  • b 1
  • d 1
  • n 1
  • r 1
