Search (3 results, page 1 of 1)

  • author_ss:"Kishida, K."
  1. Kishida, K.: High-speed rough clustering for very large document collections (2010) 0.00
    0.0019582848 = product of:
      0.0039165695 = sum of:
        0.0039165695 = product of:
          0.007833139 = sum of:
            0.007833139 = weight(_text_:a in 3463) [ClassicSimilarity], result of:
              0.007833139 = score(doc=3463,freq=16.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.18016359 = fieldWeight in 3463, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3463)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
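    The scores above are Lucene ClassicSimilarity explain traces for the single matching query term. Below is a minimal Python sketch that reproduces the arithmetic of this first trace; the factor names are taken from the trace itself, and the helper function is illustrative rather than part of any catalogue API. The traces of the second and third records follow the same pattern with different term frequencies.

```python
import math

def classic_similarity_contribution(freq, idf, field_norm, query_norm, coord_factors):
    """Reproduce the arithmetic of a Lucene ClassicSimilarity explain trace
    for a single matching query term (here _text_:a in doc 3463)."""
    tf = math.sqrt(freq)                   # 4.0 = tf(freq=16.0)
    field_weight = tf * idf * field_norm   # 0.18016359 = fieldWeight
    query_weight = idf * query_norm        # 0.043477926 = queryWeight
    score = query_weight * field_weight    # 0.007833139 = weight(_text_:a in 3463)
    for coord in coord_factors:            # the two 0.5 = coord(1/2) factors
        score *= coord
    return score

score = classic_similarity_contribution(
    freq=16.0, idf=1.153047,
    field_norm=0.0390625, query_norm=0.037706986,
    coord_factors=[0.5, 0.5],
)
print(f"{score:.10f}")  # ~0.0019582848, the score shown for record 1
```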
    
    Abstract
    Document clustering is an important tool, but it is not yet widely used in practice, probably because of its high computational complexity. This article explores techniques for high-speed rough clustering of documents, assuming that it is sometimes necessary to obtain a clustering result in a shorter time, even if that result is only an approximate outline of the document clusters. A promising approach to such clustering is to reduce the number of documents that have to be checked when generating cluster vectors in the leader-follower clustering algorithm. Based on this idea, the present article proposes a modified Crouch algorithm and an incomplete single-pass leader-follower algorithm. In addition, a two-stage grouping technique is developed, in which the first stage attempts to decrease the number of documents to be processed in the second stage by applying a quick merging technique. An experiment using a part of the Reuters corpus RCV1 showed empirically that both the modified Crouch and the incomplete single-pass leader-follower algorithms produce clustering results more efficiently than the original methods, and also improve the effectiveness of the clustering results. On the other hand, the two-stage grouping technique did not reduce the processing time in this experiment.
    Type
    a
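    The abstract above builds on the leader-follower (single-pass) clustering algorithm. The following is a minimal sketch of the generic single-pass leader-follower baseline with cosine similarity on toy data; it is not Kishida's modified Crouch or incomplete single-pass variant, whose point is precisely to limit how many documents contribute to the cluster vectors.

```python
import numpy as np

def single_pass_leader_follower(doc_vectors, threshold=0.3):
    """Generic single-pass leader-follower clustering: each document joins the
    most similar existing cluster (cosine similarity to the cluster vector) if
    that similarity exceeds the threshold; otherwise it becomes a new leader."""
    cluster_vectors, assignments = [], []
    for vec in doc_vectors:
        vec = vec / (np.linalg.norm(vec) or 1.0)
        if cluster_vectors:
            sims = [float(vec @ c / (np.linalg.norm(c) or 1.0)) for c in cluster_vectors]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                # follower: update the leader's cluster vector and record the assignment
                cluster_vectors[best] = cluster_vectors[best] + vec
                assignments.append(best)
                continue
        # no sufficiently similar cluster: this document starts a new cluster
        cluster_vectors.append(vec.copy())
        assignments.append(len(cluster_vectors) - 1)
    return assignments

# Toy term-frequency vectors; real input would be weighted document vectors.
docs = np.array([[3, 0, 1], [2, 0, 2], [0, 4, 1], [0, 3, 0]], float)
print(single_pass_leader_follower(docs, threshold=0.5))  # [0, 0, 1, 1]
```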
  2. Kishida, K.: Term disambiguation techniques based on target document collection for cross-language information retrieval : an empirical comparison of performance between techniques (2007) 0.00
    0.0016959244 = product of:
      0.0033918489 = sum of:
        0.0033918489 = product of:
          0.0067836978 = sum of:
            0.0067836978 = weight(_text_:a in 897) [ClassicSimilarity], result of:
              0.0067836978 = score(doc=897,freq=12.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.15602624 = fieldWeight in 897, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=897)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Dictionary-based query translation for cross-language information retrieval often yields various translation candidates having different meanings for a source term in the query. This paper examines methods for resolving the ambiguity of translations based only on the target document collection. First, we discuss two kinds of disambiguation technique: (1) a method using term co-occurrence statistics in the collection, and (2) a technique based on pseudo-relevance feedback (PRF). Next, these techniques are empirically compared using the CLEF 2003 test collection for German to Italian bilingual searches, which are executed using English as a pivot language. The experiments showed that a variation of the term co-occurrence based techniques, in which the best sequence algorithm for selecting translations is used with the Cosine coefficient, is dominant, and that the PRF method shows comparably high search performance, although statistical tests did not sufficiently support these conclusions. Furthermore, we repeat the same experiments for French to Italian (pivot) and English to Italian (non-pivot) searches on the same CLEF 2003 test collection in order to verify our findings. Again, similar results were observed, except that the Dice coefficient slightly outperforms the Cosine coefficient for disambiguation based on term co-occurrence in English to Italian searches.
    Type
    a
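    The abstract above compares disambiguation by term co-occurrence statistics (Cosine or Dice coefficient) with pseudo-relevance feedback. The sketch below illustrates only the co-occurrence idea: choose the combination of translation candidates whose terms co-occur most strongly in the target collection. The exhaustive search over combinations stands in for the "best sequence" selection algorithm mentioned in the abstract, and all data and function names are illustrative.

```python
from itertools import product
from math import sqrt

def cosine_cooccurrence(t1, t2, doc_sets):
    """Cosine coefficient between two target-language terms, computed from
    the sets of target-collection documents in which each term occurs."""
    d1, d2 = doc_sets.get(t1, set()), doc_sets.get(t2, set())
    if not d1 or not d2:
        return 0.0
    return len(d1 & d2) / sqrt(len(d1) * len(d2))

def disambiguate(candidates, doc_sets):
    """Pick one translation per source term by maximising the total pairwise
    co-occurrence of the selected combination (exhaustive stand-in for the
    best sequence selection mentioned in the abstract)."""
    best_combo, best_score = None, -1.0
    for combo in product(*candidates):
        score = sum(cosine_cooccurrence(a, b, doc_sets)
                    for i, a in enumerate(combo) for b in combo[i + 1:])
        if score > best_score:
            best_combo, best_score = combo, score
    return best_combo

# Illustrative data: candidate Italian translations for two query terms, and
# the target-collection documents in which each candidate occurs.
candidates = [["banca", "riva"], ["conto", "fiume"]]
doc_sets = {"banca": {1, 2, 3}, "riva": {7, 8}, "conto": {2, 3, 4}, "fiume": {8, 9}}
print(disambiguate(candidates, doc_sets))  # ('banca', 'conto')
```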
  3. Kishida, K.: Technical issues of cross-language information retrieval : a review (2005) 0.00
    0.0015666279 = product of:
      0.0031332558 = sum of:
        0.0031332558 = product of:
          0.0062665115 = sum of:
            0.0062665115 = weight(_text_:a in 1019) [ClassicSimilarity], result of:
              0.0062665115 = score(doc=1019,freq=4.0), product of:
                0.043477926 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.037706986 = queryNorm
                0.14413087 = fieldWeight in 1019, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1019)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a