Search (5 results, page 1 of 1)

  • author_ss:"Ko, Y."
  • type_ss:"a"
  1. Ko, Y.: A new term-weighting scheme for text classification using the odds of positive and negative class probabilities (2015) 0.01
    0.007581752 = product of:
      0.05307226 = sum of:
        0.01712272 = weight(_text_:information in 2339) [ClassicSimilarity], result of:
          0.01712272 = score(doc=2339,freq=16.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3291521 = fieldWeight in 2339, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2339)
        0.03594954 = weight(_text_:retrieval in 2339) [ClassicSimilarity], result of:
          0.03594954 = score(doc=2339,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 2339, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2339)
      0.14285715 = coord(2/14)
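    The explain tree above is Lucene ClassicSimilarity debug output, and every leaf can be reproduced from the printed constants: each term contributes queryWeight × fieldWeight, with tf = sqrt(freq), and the sum is scaled by the coordination factor (2 of 14 query clauses matched). A minimal Python sketch that recomputes the score for doc 2339 from the values shown above:

    ```python
    import math

    # Constants taken directly from the explain output above
    query_norm = 0.029633347
    field_norm = 0.046875          # same for both terms in doc 2339
    idf_information = 1.7554779    # idf(docFreq=20772, maxDocs=44218)
    idf_retrieval = 3.024915       # idf(docFreq=5836, maxDocs=44218)

    # ClassicSimilarity defines idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    assert abs(idf_information - (1 + math.log(44218 / (20772 + 1)))) < 1e-4

    def term_weight(freq, idf):
        """weight = queryWeight * fieldWeight, with tf = sqrt(freq)."""
        query_weight = idf * query_norm
        field_weight = math.sqrt(freq) * idf * field_norm
        return query_weight * field_weight

    w_information = term_weight(16.0, idf_information)  # ~0.01712272
    w_retrieval = term_weight(8.0, idf_retrieval)       # ~0.03594954

    # coord(2/14): 2 of the 14 query clauses matched this document
    score = (w_information + w_retrieval) * (2 / 14)    # ~0.007581752
    ```

    The same arithmetic, with the per-document freq, idf, and fieldNorm values, reproduces the scores of the other four results below.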
    
    Abstract
    Text classification (TC) is a core technique for text mining and information retrieval, and it has been applied in many research and industrial areas. Term-weighting schemes assign an appropriate weight to each term to obtain high TC performance. Although term weighting is an important module for TC, and TC has peculiarities that differ from those of information retrieval, many term-weighting schemes from information retrieval, such as term frequency-inverse document frequency (tf-idf), have been carried over to TC unchanged. The peculiarity of TC that differs most from information retrieval is the existence of class information. This article proposes a new term-weighting scheme that exploits class information via the positive and negative class distributions. The proposed scheme, log tf-TRR, consistently outperforms both other schemes that use class information and traditional schemes such as tf-idf.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.12, S.2553-2565
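    The abstract describes weighting a term by the odds of its positive and negative class probabilities. As an illustrative sketch only (the exact log tf-TRR definition is given in the paper; the formula below is an assumed reading of the abstract, combining a log-scaled term frequency with a smoothed positive/negative probability ratio):

    ```python
    import math

    def log_tf_trr(tf, p_term_pos, p_term_neg, eps=1e-6):
        """Hypothetical class-based weight: log-scaled term frequency times
        the (smoothed) ratio of positive to negative class probabilities.
        NOT the paper's exact formula; an illustration of the idea only."""
        odds = (p_term_pos + eps) / (p_term_neg + eps)
        return math.log(1 + tf) * math.log(2 + odds)

    # A term concentrated in positive documents gets a higher weight than
    # one with the same frequency concentrated in negative documents.
    discriminative = log_tf_trr(tf=5, p_term_pos=0.30, p_term_neg=0.02)
    uninformative = log_tf_trr(tf=5, p_term_pos=0.02, p_term_neg=0.30)
    ```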
  2. Kim, S.; Ko, Y.; Oard, D.W.: Combining lexical and statistical translation evidence for cross-language information retrieval (2015) 0.01
    0.005361108 = product of:
      0.037527755 = sum of:
        0.012107591 = weight(_text_:information in 1606) [ClassicSimilarity], result of:
          0.012107591 = score(doc=1606,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 1606, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1606)
        0.025420163 = weight(_text_:retrieval in 1606) [ClassicSimilarity], result of:
          0.025420163 = score(doc=1606,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.2835858 = fieldWeight in 1606, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1606)
      0.14285715 = coord(2/14)
    
    Abstract
    This article explores how best to use lexical and statistical translation evidence together for cross-language information retrieval (CLIR). Lexical translation evidence is assembled from Wikipedia and from a large machine-readable dictionary, statistical translation evidence is drawn from parallel corpora, and evidence from co-occurrence in the document language provides a basis for limiting the adverse effect of translation ambiguity. Coverage statistics for NII Testbeds and Community for Information Access Research (NTCIR) queries confirm that these resources have complementary strengths. Experiments with translation evidence from a small parallel corpus indicate that even rather rough estimates of translation probabilities can yield further improvements over a strong technique for translation weighting based on using Jensen-Shannon divergence as a term-association measure. Finally, a novel approach to posttranslation query expansion using a random walk over the Wikipedia concept link graph is shown to yield further improvements over alternative techniques for posttranslation query expansion. Evaluation results on the NTCIR-5 English-Korean test collection show statistically significant improvements over strong baselines.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.23-39
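    The translation-weighting baseline in the abstract above uses Jensen-Shannon divergence as a term-association measure. A minimal sketch of JSD between two discrete distributions (the distributions here are made-up toy values, not data from the paper):

    ```python
    import math

    def kl(p, q):
        """Kullback-Leibler divergence, summed only over entries with p_i > 0."""
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def jsd(p, q):
        """Jensen-Shannon divergence: symmetric and, with log base 2,
        bounded in [0, 1]."""
        m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Identical distributions diverge by 0; disjoint ones reach the maximum of 1.
    same = jsd([0.5, 0.5], [0.5, 0.5])
    disjoint = jsd([1.0, 0.0], [0.0, 1.0])
    ```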
  3. Bae, K.; Ko, Y.: Improving question retrieval in community question answering service using dependency relations and question classification (2019) 0.01
    0.0052989167 = product of:
      0.037092414 = sum of:
        0.0071344664 = weight(_text_:information in 5412) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=5412,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 5412, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5412)
        0.029957948 = weight(_text_:retrieval in 5412) [ClassicSimilarity], result of:
          0.029957948 = score(doc=5412,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.33420905 = fieldWeight in 5412, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5412)
      0.14285715 = coord(2/14)
    
    Abstract
    To build an effective community question answering (cQA) service, determining how to retrieve questions similar to an input query question is a significant research issue. The major challenges for question retrieval in cQA are solving the lexical gap problem and estimating the relevance between questions. In this study, we first address the lexical gap problem with a translation-based language model (TRLM). We then determine features and methods suitable for estimating the relevance between two questions. For this purpose, we explore ways to use the results of a dependency parser and of question classification for category information. Head-dependent pairs are extracted as bigram features, called dependency bigrams, from the output of the dependency parser. The probability of each category is estimated by applying softmax to the scores of the classification results. We then propose two retrieval models, the dependency-based model (DM) and the category-based model (CM), and apply them to the previous model, TRLM. The experimental results demonstrate that the proposed methods significantly improve the performance of question retrieval in cQA services.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.11, S.1194-1209
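    The abstract above estimates a probability for each category by applying softmax to the question classifier's scores. A minimal sketch of that step (the scores are toy values, not from the paper):

    ```python
    import math

    def softmax(scores):
        """Convert raw classification scores into a probability distribution.
        Subtracting the max first keeps exp() numerically stable."""
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # Toy classifier scores for three candidate question categories.
    probs = softmax([2.0, 1.0, 0.1])
    ```

    The resulting probabilities sum to 1, and the ordering of the scores is preserved.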
  4. Ko, Y.; Park, J.; Seo, J.: Improving text categorization using the importance of sentences (2004) 0.00
    4.32414E-4 = product of:
      0.0060537956 = sum of:
        0.0060537956 = weight(_text_:information in 2557) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=2557,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 2557, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2557)
      0.071428575 = coord(1/14)
    
    Source
    Information Processing and Management. 40(2004) no.1, S.65-79
  5. Ko, Y.; Seo, J.: Text classification from unlabeled documents with bootstrapping and feature projection techniques (2009) 0.00
    4.32414E-4 = product of:
      0.0060537956 = sum of:
        0.0060537956 = weight(_text_:information in 2452) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=2452,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 2452, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2452)
      0.071428575 = coord(1/14)
    
    Source
    Information Processing and Management. 45(2009) no.1, S.70-83