Search (37 results, page 1 of 2)

  • language_ss:"e"
  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.10034672 = sum of:
      0.079899386 = product of:
        0.23969816 = sum of:
          0.23969816 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23969816 = score(doc=562,freq=2.0), product of:
              0.4264955 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.050306078 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020447336 = product of:
        0.040894672 = sum of:
          0.040894672 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.040894672 = score(doc=562,freq=2.0), product of:
              0.17616332 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050306078 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
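The score breakdown shown for the top result is Lucene's ClassicSimilarity (TF-IDF) explain output: each term weight is queryWeight × fieldWeight, and partially matched boolean clauses are down-weighted by coord(). A minimal sketch reproducing the arithmetic of the tree above (all constants are copied from the explain output, not computed here):

```python
import math

def classic_similarity_weight(freq, idf, query_norm, field_norm):
    """Lucene ClassicSimilarity term weight: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Constants taken from the explain tree of result 1 (doc 562)
w_3a = classic_similarity_weight(freq=2.0, idf=8.478011,
                                 query_norm=0.050306078, field_norm=0.046875)
w_22 = classic_similarity_weight(freq=2.0, idf=3.5018296,
                                 query_norm=0.050306078, field_norm=0.046875)

# coord() scales each clause: 1 of 3 subclauses matched, then 1 of 2
score = w_3a * (1 / 3) + w_22 * (1 / 2)
print(score)  # close to the 0.10034672 headline score of result 1
```

The steep idf difference (8.48 vs. 3.50) is why the rare "3a" token dominates the score despite the heavier coord() penalty on its clause.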
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.04
    
    Source
    https://arxiv.org/abs/2212.06721
  3. Warner, A.J.: Natural language processing (1987) 0.03
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  4. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.02
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  5. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.02
    
    Date
    8.10.2000 11:52:22
  6. Somers, H.: Example-based machine translation : Review article (1999) 0.02
    
    Date
    31. 7.1996 9:22:19
  7. New tools for human translators (1997) 0.02
    
    Date
    31. 7.1996 9:22:19
  8. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.02
    
    Date
    28. 2.1999 10:48:22
  9. Kim, W.; Wilbur, W.J.: Corpus-based statistical screening for content-bearing terms (2001) 0.02
    
    Abstract
     Kim and Wilbur present three techniques for the algorithmic identification in text of content-bearing terms and phrases intended for human use as entry points or hyperlinks. Using a set of 1,075 terms from MEDLINE evaluated on a zero-to-four scale (stop word to definite content word), they evaluate the ranked lists of their three methods based on their placement of content words in the top ranks. Data consist of the natural language elements of 304,057 MEDLINE records from 1996, and 173,252 Wall Street Journal records from the TIPSTER collection. Phrases are extracted by breaking at punctuation marks and stop words, normalized by lowercasing, replacement of non-alphanumerics with spaces, and the reduction of multiple spaces. In the "strength of context" approach each document is a vector of binary values for each word or word pair. The words or word pairs are removed from all documents, and the Robertson-Sparck Jones relevance weight for each term computed, negative weights replaced with zero, those below a randomness threshold ignored, and the remainder summed for each document, to yield a score for the document and finally to assign to the term the average document score for documents in which it occurred. The average of these word scores is assigned to the original phrase. The "frequency clumping" approach defines a random phrase as one whose distribution among documents is Poisson in character. A p-value, the probability that a phrase frequency of occurrence would be equal to, or less than, Poisson expectations, is computed, and a score assigned which is the negative log of that value. In the "database comparison" approach, if a phrase occurring in a document allows prediction that the document is in MEDLINE rather than in the Wall Street Journal, it is considered to be content-bearing for MEDLINE. The score is computed by dividing the number of occurrences of the term in MEDLINE by occurrences in the Journal, and taking the product of all these values.
 The one hundred top- and bottom-ranked phrases that occurred in at least 500 documents were collected for each method. The union set had 476 phrases. A second selection was made of two-word phrases occurring each in only three documents, with a union of 599 phrases. A judge then ranked the two sets of terms as to subject specificity on a 0 to 4 scale. Precision was the average subject specificity of the first r ranks, recall the fraction of the subject-specific phrases in the first r ranks, and eleven-point average precision was used as a summary measure. The three methods all move content-bearing terms forward in the lists, as does the use of the sum of the logs of the three methods.
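Of the three methods the abstract describes, "frequency clumping" lends itself to a short sketch. The function names below are illustrative, not from the paper; it assumes a plain Poisson null model with a caller-supplied expectation, and scores a phrase as the negative log of the Poisson lower-tail probability:

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam), computed term by term."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

def clumping_score(observed: int, expected: float) -> float:
    """Negative log of the probability that the observed document
    frequency is at or below the Poisson expectation; a small
    p-value (heavy clumping) yields a high score."""
    p = poisson_cdf(observed, expected)
    return -math.log(p)

# A phrase found in only 2 documents where a random (Poisson)
# spread of its occurrences would predict about 10
print(clumping_score(2, 10.0))
```

A phrase whose occurrences are concentrated in far fewer documents than the random model predicts gets a large score, which is exactly the "non-random distribution signals content" intuition of the method.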
  10. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    
    Date
    15. 3.2000 10:22:37
  11. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.02
    
    Date
    1. 3.2013 14:56:22
  12. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.02
    
    Date
    31. 7.1996 9:22:19
  13. Wanner, L.: Lexical choice in text generation and machine translation (1996) 0.01
    
    Date
    31. 7.1996 9:22:19
  14. Riloff, E.: ¬An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.01
    
    Date
    6. 3.1997 16:22:15
  15. Basili, R.; Pazienza, M.T.; Velardi, P.: ¬An empirical symbolic approach to natural language processing (1996) 0.01
    
    Date
    6. 3.1997 16:22:15
  16. Haas, S.W.: Natural language processing : toward large-scale, robust systems (1996) 0.01
    
    Abstract
     State-of-the-art review of natural language processing, updating an earlier review published in ARIST 22(1987). Discusses important developments that have allowed for significant advances in the field of natural language processing: materials and resources; knowledge-based systems and statistical approaches; and a strong emphasis on evaluation. Reviews some natural language processing applications and common problems still awaiting solution. Considers closely related applications such as language generation and the generation phase of machine translation, which face the same problems as natural language processing. Covers natural language methodologies for information retrieval only briefly.
  17. Way, E.C.: Knowledge representation and metaphor (oder: meaning) (1994) 0.01
    
    Footnote
     Already published by Kluwer in 1991 // Review in: Knowledge organization 22(1995) no.1, S.48-49 (O. Sechser)
  18. Morris, V.: Automated language identification of bibliographic resources (2020) 0.01
    
    Date
    2. 3.2020 19:04:22
  19. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.01
    
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
  20. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.01
    
    Date
    6. 1.1999 10:22:07