Search (4 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  1. Pimenov, E.N.: Normativnost' i nekotorye problemy razrabotki tezaurusov i drugikh lingvisticheskikh sredstv IPS [Normativity and some problems of developing thesauri and other linguistic tools for information retrieval systems] (2000) 0.02
    
    Source
    Nauchno-Tekhnicheskaya Informatsiya. Series 1, 2000, no.5, pp.7-16
    Year
    2000
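The relevance score shown for each result is a Lucene ClassicSimilarity tf-idf value; the first entry's score can be reproduced with a short sketch (a minimal illustration, with the constants and the two coord(1/3) factors, meaning one of three query clauses matched, taken from the search engine's explain output for this entry):

```python
import math

# values from the explain output for entry 1 (term "2000", doc 3281)
freq, doc_freq, max_docs = 5.0, 2088, 44218
query_norm, field_norm = 0.04716428, 0.078125

tf = math.sqrt(freq)                                # 2.236068
idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # 4.0524464
query_weight = idf * query_norm                     # 0.19113071
field_weight = tf * idf * field_norm                # 0.70793325
weight = query_weight * field_weight                # 0.13530779
coord = 1.0 / 3.0                                   # 1 of 3 clauses matched
score = weight * coord * coord                      # 0.015034199
```

Rounded to two decimals this gives the 0.02 displayed beside the entry.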
  2. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.01
    
    Date
    15. 3.2000 10:22:37
  3. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.00
    
    Date
    8. 3.2007 19:55:22
  4. Tseng, Y.-H.: Automatic thesaurus generation for Chinese documents (2002) 0.00
    
    Abstract
    Tseng constructs a word co-occurrence based thesaurus through automatic analysis of Chinese text. Words are identified by longest dictionary match, supplemented by a keyword extraction algorithm that merges nearby tokens back together and accepts shorter character strings if they occur more often than the longest string. Single-character auxiliary words are a major source of error, but this can be greatly reduced with a 70-character, 2680-word stop list. Extracted terms, with their associated document weights, are sorted by decreasing frequency, and term pairs from the top of this list are associated using a Dice coefficient modified to account for longer documents in the weights of term pairs. Co-occurrence is computed not over the document as a whole but within paragraph- or sentence-sized sections in order to reduce computation time; a window of 29 characters or 11 words was found to be sufficient. A thesaurus was produced from 25,230 Chinese news articles, and judges were asked to review the top 50 terms associated with each of 30 single-word query terms; they determined 69% to be relevant.
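    The association step described above can be sketched as a plain Dice coefficient over section-level co-occurrence counts (a minimal illustration, not Tseng's exact method: the abstract only hints at the length-modified variant, so the unmodified Dice form is used here, and the toy English sections stand in for segmented Chinese text):

    ```python
    from collections import Counter
    from itertools import combinations

    def dice_associations(sections):
        """Associate term pairs by the Dice coefficient, counting
        co-occurrence within paragraph/sentence-sized sections
        rather than whole documents."""
        term_freq = Counter()   # number of sections containing each term
        pair_freq = Counter()   # number of sections containing both terms
        for section in sections:
            terms = set(section)
            term_freq.update(terms)
            for a, b in combinations(sorted(terms), 2):
                pair_freq[(a, b)] += 1
        # Dice(a, b) = 2 * co(a, b) / (freq(a) + freq(b))
        return {
            (a, b): 2 * co / (term_freq[a] + term_freq[b])
            for (a, b), co in pair_freq.items()
        }

    # toy sections standing in for segmented text
    sections = [["thesaurus", "term"],
                ["thesaurus", "term", "query"],
                ["query", "term"]]
    scores = dice_associations(sections)
    ```

    In a real pipeline the sections would come from splitting each article at paragraph or sentence boundaries, which is what keeps the pair-counting step tractable.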