Search (3 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • theme_ss:"Konzeption und Anwendung des Prinzips Thesaurus"
  1. Byrne, C.C.; McCracken, S.A.: An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.02
    0.020708349 = product of:
      0.041416697 = sum of:
        0.041416697 = product of:
          0.082833394 = sum of:
            0.082833394 = weight(_text_:22 in 4483) [ClassicSimilarity], result of:
              0.082833394 = score(doc=4483,freq=2.0), product of:
                0.17841205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05094824 = queryNorm
                0.46428138 = fieldWeight in 4483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4483)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  2. Tseng, Y.-H.: Automatic thesaurus generation for Chinese documents (2002) 0.02
    0.01933244 = product of:
      0.03866488 = sum of:
        0.03866488 = product of:
          0.07732976 = sum of:
            0.07732976 = weight(_text_:news in 5226) [ClassicSimilarity], result of:
              0.07732976 = score(doc=5226,freq=2.0), product of:
                0.26705483 = queryWeight, product of:
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.05094824 = queryNorm
                0.28956512 = fieldWeight in 5226, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2416887 = idf(docFreq=635, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5226)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Tseng constructs a word co-occurrence based thesaurus by means of automatic analysis of Chinese text. Words are identified by a longest dictionary match, supplemented by a keyword extraction algorithm that merges nearby tokens back together and accepts shorter character strings if they occur more often than the longest string. Single-character auxiliary words are a major source of error, but this can be greatly reduced by using a 70-character, 2,680-word stop list. Extracted terms with their associated document weights are sorted by decreasing frequency, and the top of this list is associated using a Dice coefficient modified to account for the effect of longer documents on the weights of term pairs. Co-occurrence is counted not in the document as a whole but in paragraph- or sentence-sized sections in order to reduce computation time; a window of 29 characters or 11 words was found to be sufficient. A thesaurus was produced from 25,230 Chinese news articles, and judges were asked to review the top 50 terms associated with each of 30 single-word query terms; they determined 69% to be relevant. (A minimal sketch of the co-occurrence step appears after the result list.)
  3. Schneider, J.W.; Borlund, P.: A bibliometric-based semiautomatic approach to identification of candidate thesaurus terms : parsing and filtering of noun phrases from citation contexts (2005) 0.01
    0.01207987 = product of:
      0.02415974 = sum of:
        0.02415974 = product of:
          0.04831948 = sum of:
            0.04831948 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
              0.04831948 = score(doc=156,freq=2.0), product of:
                0.17841205 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05094824 = queryNorm
                0.2708308 = fieldWeight in 156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8. 3.2007 19:55:22
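
The score breakdowns above are Lucene ClassicSimilarity (TF-IDF) explain trees. As a minimal sketch of how the listed factors combine (assuming the classic formula in which tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the coord(1/2) factors reflect that only one of two query clauses matched), the figures for result 1 can be reproduced in Python. The constants are copied from the breakdown rather than recomputed from the index.

  import math

  # Factors copied from the explain tree of result 1 (doc 4483, term "22").
  freq       = 2.0         # termFreq of "22" in the matched field
  idf        = 3.5018296   # roughly 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 3623)
  query_norm = 0.05094824  # queryNorm
  field_norm = 0.09375     # fieldNorm(doc=4483)

  tf           = math.sqrt(freq)              # 1.4142135
  query_weight = idf * query_norm             # 0.17841205
  field_weight = tf * idf * field_norm        # 0.46428138
  term_score   = query_weight * field_weight  # 0.082833394

  # Two nested coord(1/2) factors: only one of two query clauses
  # matched at each level of the Boolean query.
  final_score = term_score * 0.5 * 0.5        # 0.020708349
  print(round(term_score, 9), round(final_score, 9))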
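
The Tseng (2002) abstract above describes a co-occurrence thesaurus built over sentence- or paragraph-sized windows, with term pairs ranked by a modified Dice coefficient. The following is a minimal sketch of that general idea, not Tseng's implementation: word segmentation, the length-sensitive modification of the Dice coefficient, and the document-weighting step are omitted, and all names are illustrative.

  from collections import Counter
  from itertools import combinations

  def cooccurrence_thesaurus(windows, top_k=50):
      """Rank related-term candidates by the plain Dice coefficient.

      `windows` is an iterable of token lists, each standing for one
      already-segmented paragraph- or sentence-sized window.
      """
      term_freq = Counter()  # number of windows containing each term
      pair_freq = Counter()  # number of windows containing each term pair

      for window in windows:
          terms = set(window)
          term_freq.update(terms)
          pair_freq.update(frozenset(p) for p in combinations(sorted(terms), 2))

      thesaurus = {}
      for pair, co in pair_freq.items():
          a, b = tuple(pair)
          dice = 2.0 * co / (term_freq[a] + term_freq[b])
          thesaurus.setdefault(a, []).append((b, dice))
          thesaurus.setdefault(b, []).append((a, dice))

      # Keep only the strongest associations for each term.
      return {t: sorted(rel, key=lambda x: -x[1])[:top_k]
              for t, rel in thesaurus.items()}

  # Toy usage: three hand-made windows instead of 25,230 news articles.
  windows = [
      ["thesaurus", "automatic", "generation"],
      ["thesaurus", "co-occurrence", "generation"],
      ["chinese", "news", "articles"],
  ]
  print(cooccurrence_thesaurus(windows)["thesaurus"])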