Search (4 results, page 1 of 1)

  • author_ss:"Li, W."
  • year_i:[2010 TO 2020}
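A note on the second filter: Solr range queries may mix brackets, with `[` / `]` inclusive and `{` / `}` exclusive, so `year_i:[2010 TO 2020}` keeps 2010 but drops 2020. A minimal illustrative parser for this syntax (not Solr's actual implementation):

```python
import re

def parse_range(expr):
    """Parse a Solr-style numeric range like '[2010 TO 2020}'."""
    m = re.fullmatch(r'([\[{])(\d+) TO (\d+)([\]}])', expr)
    lo, hi = int(m.group(2)), int(m.group(3))
    lo_inclusive = m.group(1) == '['   # '[' includes the lower bound
    hi_inclusive = m.group(4) == ']'   # '}' excludes the upper bound
    return lo, lo_inclusive, hi, hi_inclusive

def year_matches(year, expr):
    lo, lo_inc, hi, hi_inc = parse_range(expr)
    ok_lo = year >= lo if lo_inc else year > lo
    ok_hi = year <= hi if hi_inc else year < hi
    return ok_lo and ok_hi
```

Under this reading, all four hits (2010, 2010, 2011, 2015) fall inside the filter.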
  1. Wei, F.; Li, W.; Liu, S.: iRANK: a rank-learn-combine framework for unsupervised ensemble ranking (2010) 0.03
    0.03068387 = sum of:
      0.0138822645 = product of:
        0.08329359 = sum of:
          0.08329359 = weight(_text_:authors in 3472) [ClassicSimilarity], result of:
            0.08329359 = score(doc=3472,freq=6.0), product of:
              0.1909519 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04188631 = queryNorm
              0.43620193 = fieldWeight in 3472, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3472)
        0.16666667 = coord(1/6)
      0.016801605 = product of:
        0.03360321 = sum of:
          0.03360321 = weight(_text_:w in 3472) [ClassicSimilarity], result of:
            0.03360321 = score(doc=3472,freq=2.0), product of:
              0.1596206 = queryWeight, product of:
                3.8108058 = idf(docFreq=2659, maxDocs=44218)
                0.04188631 = queryNorm
              0.21051927 = fieldWeight in 3472, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8108058 = idf(docFreq=2659, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3472)
        0.5 = coord(1/2)
    
    Abstract
The authors address the problem of unsupervised ensemble ranking. Traditional approaches either combine multiple ranking criteria into a unified representation to obtain an overall ranking score or utilize rank fusion or aggregation techniques to combine the ranking results. Beyond these combine-then-rank and rank-then-combine approaches, the authors propose a novel rank-learn-combine ranking framework, called Interactive Ranking (iRANK), which allows two base rankers to teach each other before combination by providing their own ranking results as feedback to the other to boost ranking performance. This mutual ranking refinement process continues until the two base rankers can no longer learn from each other. The overall performance is improved by the enhancement of the base rankers through the mutual learning mechanism. The authors further design two ranking refinement strategies to efficiently and effectively use the feedback, based on reasonable assumptions and rational analysis. Although iRANK is applicable to many applications, as a case study, they apply this framework to the sentence ranking problem in query-focused summarization and evaluate its effectiveness on the DUC 2005 and 2006 data sets. The results are encouraging, with consistent and promising improvements.
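The score breakdown above result 1 is Lucene's classic TF-IDF (ClassicSimilarity) explain tree, and it can be recomputed from the standard formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the norms below are read off the page, the formulas are standard Lucene:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity term frequency
    return math.sqrt(freq)

query_norm = 0.04188631   # normalization across query terms (from the page)
field_norm = 0.0390625    # encoded length norm of the indexed field

def term_score(freq, doc_freq, max_docs=44218):
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm              # e.g. 0.1909519 for "authors"
    field_weight = tf(freq) * i * field_norm   # e.g. 0.43620193 for "authors"
    return query_weight * field_weight

# _text_:authors (freq=6, docFreq=1258), scaled by coord(1/6)
authors = term_score(6, 1258) * (1 / 6)
# _text_:w (freq=2, docFreq=2659), scaled by coord(1/2)
w = term_score(2, 2659) * (1 / 2)

total = authors + w   # the displayed document score, 0.03068387
```

The coord factors (1/6 and 1/2) penalize each clause for the fraction of its sibling subclauses that did not match; the two surviving contributions are simply summed to give the 0.03 shown beside the title.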
  2. Ouyang, Y.; Li, W.; Li, S.; Lu, Q.: Intertopic information mining for query-based summarization (2010) 0.03
    0.028136425 = sum of:
      0.011334821 = product of:
        0.06800892 = sum of:
          0.06800892 = weight(_text_:authors in 3459) [ClassicSimilarity], result of:
            0.06800892 = score(doc=3459,freq=4.0), product of:
              0.1909519 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04188631 = queryNorm
              0.35615736 = fieldWeight in 3459, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3459)
        0.16666667 = coord(1/6)
      0.016801605 = product of:
        0.03360321 = sum of:
          0.03360321 = weight(_text_:w in 3459) [ClassicSimilarity], result of:
            0.03360321 = score(doc=3459,freq=2.0), product of:
              0.1596206 = queryWeight, product of:
                3.8108058 = idf(docFreq=2659, maxDocs=44218)
                0.04188631 = queryNorm
              0.21051927 = fieldWeight in 3459, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8108058 = idf(docFreq=2659, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3459)
        0.5 = coord(1/2)
    
    Abstract
In this article, the authors address the problem of sentence ranking in summarization. Although most existing summarization approaches are concerned only with the information embodied in a particular topic (a set of documents and an associated query) for sentence ranking, the authors propose a novel ranking approach that incorporates intertopic information mining. Intertopic information, in contrast to intratopic information, is able to reveal pairwise topic relationships and thus can be considered a bridge across different topics. Here, intertopic information is used for transferring word importance learned from known topics to unknown topics under a learning-based summarization framework. To mine this information, the authors model the topic relationship by clustering all the words in both known and unknown topics according to various kinds of word conceptual labels, which indicate the roles of the words in the topic. Based on the mined relationships, the authors develop a probabilistic model using manually generated summaries provided for known topics to predict ranking scores for sentences in unknown topics. A series of experiments was conducted on the Document Understanding Conference (DUC) 2006 data set. The evaluation results show that intertopic information is indeed effective for sentence ranking and that the resulting summarization system performs comparably to the best-performing DUC participating systems on the same data set.
  3. Cai, X.; Li, W.: Enhancing sentence-level clustering with integrated and interactive frameworks for theme-based summarization (2011) 0.01
    0.0084008025 = product of:
      0.016801605 = sum of:
        0.016801605 = product of:
          0.03360321 = sum of:
            0.03360321 = weight(_text_:w in 4770) [ClassicSimilarity], result of:
              0.03360321 = score(doc=4770,freq=2.0), product of:
                0.1596206 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.04188631 = queryNorm
                0.21051927 = fieldWeight in 4770, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4770)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Liu, Y.; Li, W.; Huang, Z.; Fang, Q.: A fast method based on multiple clustering for name disambiguation in bibliographic citations (2015) 0.01
    0.0084008025 = product of:
      0.016801605 = sum of:
        0.016801605 = product of:
          0.03360321 = sum of:
            0.03360321 = weight(_text_:w in 1672) [ClassicSimilarity], result of:
              0.03360321 = score(doc=1672,freq=2.0), product of:
                0.1596206 = queryWeight, product of:
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.04188631 = queryNorm
                0.21051927 = fieldWeight in 1672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8108058 = idf(docFreq=2659, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
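Results 3 and 4 share an identical, much smaller score because only the `_text_:w` clause matches in those documents: the same term weight seen in results 1 and 2 (0.03360321) is scaled by two nested coord(1/2) factors rather than being summed with an `authors` contribution. A one-line check of that arithmetic:

```python
# Term weight for _text_:w (identical across all four hits, since freq,
# docFreq, and the norms are the same), scaled by the two nested
# coord(1/2) factors shown in the explain trees for results 3 and 4:
w_weight = 0.03360321
score_results_3_and_4 = w_weight * 0.5 * 0.5   # 0.0084008025, shown as 0.01
```

This is why a single low-IDF matching term ("w" appears in 2,659 of 44,218 documents) leaves these hits well below the two results that also match "authors".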