Search (6 results, page 1 of 1)

  • author_ss:"Li, W."
  1. Ouyang, Y.; Li, W.; Li, S.; Lu, Q.: Intertopic information mining for query-based summarization (2010) 0.08
    0.0798402 = sum of:
      0.020373322 = product of:
        0.08149329 = sum of:
          0.08149329 = weight(_text_:authors in 3459) [ClassicSimilarity], result of:
            0.08149329 = score(doc=3459,freq=4.0), product of:
              0.22881259 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.050191253 = queryNorm
              0.35615736 = fieldWeight in 3459, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3459)
        0.25 = coord(1/4)
      0.059466876 = product of:
        0.11893375 = sum of:
          0.11893375 = weight(_text_:q in 3459) [ClassicSimilarity], result of:
            0.11893375 = score(doc=3459,freq=2.0), product of:
              0.32872224 = queryWeight, product of:
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.050191253 = queryNorm
              0.3618062 = fieldWeight in 3459, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5493927 = idf(docFreq=171, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3459)
        0.5 = coord(1/2)
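    The tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output: each matching clause contributes queryWeight × fieldWeight × coord, with tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)). As a minimal sketch, the arithmetic of result 1 can be reproduced as follows (the helper name is illustrative; only the formula comes from ClassicSimilarity):

      import math

      def clause_score(freq, doc_freq, max_docs, query_norm, field_norm, coord):
          # ClassicSimilarity building blocks, mirroring the explain tree above
          tf = math.sqrt(freq)                             # 2.0 for freq=4.0
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 4.558814 for docFreq=1258
          query_weight = idf * query_norm                  # 0.22881259
          field_weight = tf * idf * field_norm             # 0.35615736
          return query_weight * field_weight * coord       # clause contribution

      # The two clauses of result 1 (doc 3459); values match to float precision
      authors = clause_score(4.0, 1258, 44218, 0.050191253, 0.0390625, 0.25)
      q_term  = clause_score(2.0,  171, 44218, 0.050191253, 0.0390625, 0.5)
      print(authors, q_term, authors + q_term)  # ~0.020373, ~0.059467, ~0.0798402

    The single-clause results 2 through 5 below are the same q-term clause with coord(1/2) applied at both nesting levels, hence the identical totals of 0.11893375 × 0.5 × 0.5 = 0.029733438.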
    
    Abstract
    In this article, the authors address the problem of sentence ranking in summarization. Whereas most existing summarization approaches rank sentences using only the information embodied in a single topic (a set of documents and an associated query), the authors propose a novel ranking approach that incorporates intertopic information mining. Intertopic information, in contrast to intratopic information, reveals pairwise topic relationships and can thus serve as a bridge across different topics. Here it is used to transfer word importance learned from known topics to unknown topics under a learning-based summarization framework. To mine this information, the authors model topic relationships by clustering all the words in both known and unknown topics according to various kinds of word conceptual labels, which indicate the roles the words play in a topic. Based on the mined relationships, they develop a probabilistic model that uses the manually generated summaries provided for known topics to predict ranking scores for sentences in unknown topics. A series of experiments was conducted on the Document Understanding Conference (DUC) 2006 data set. The evaluation results show that intertopic information is indeed effective for sentence ranking, and the resulting summarization system performs comparably to the best-performing DUC participating systems on the same data set.
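    The transfer step described in the abstract can be pictured with a minimal sketch (the function names, the clustering input, and the per-cluster averaging are assumptions for illustration, not the paper's actual model): word importance learned on known topics is pooled over cross-topic word clusters and handed down to the words of unknown topics.

      from collections import defaultdict

      def transfer_importance(clusters, known_importance):
          # clusters: {word: cluster_id} over known + unknown topic vocabulary
          # known_importance: {word: weight} learned from manual summaries
          totals, counts = defaultdict(float), defaultdict(int)
          for word, weight in known_importance.items():
              if word in clusters:
                  totals[clusters[word]] += weight
                  counts[clusters[word]] += 1
          cluster_weight = {cid: totals[cid] / counts[cid] for cid in totals}
          # Every word, seen or unseen, inherits its cluster's pooled weight
          return {w: cluster_weight.get(cid, 0.0) for w, cid in clusters.items()}

      def rank_sentences(sentences, word_weight):
          # Score unknown-topic sentences by their transferred word weights
          return sorted(sentences, reverse=True,
                        key=lambda s: sum(word_weight.get(w, 0.0) for w in s.split()))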
  2. Li, W.; Wong, K.-F.; Yuan, C.: Toward automatic Chinese temporal information extraction (2001) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 6029) [ClassicSimilarity], result of:
              0.11893375 = score(doc=6029,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 6029, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6029)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Over the past few years, temporal information processing and temporal database management have increasingly become hot topics. Nevertheless, only a few researchers have investigated these areas in the Chinese language. This sets the objective of our research: to exploit Chinese language processing techniques for temporal information extraction and concept reasoning. In this article, we first study the mechanisms for expressing time in Chinese. On the basis of that study, we design a general frame structure for maintaining the extracted temporal concepts and propose a system for extracting time-dependent information from Hong Kong financial news. In the system, temporal knowledge is represented by different types of temporal concepts (TTC) and different temporal relations, including absolute and relative relations, which correlate action times with reference times. In analyzing a sentence, the algorithm first determines the situation related to the verb, which in turn identifies the type of temporal concept associated with the verb. The relevant temporal information is then extracted and the temporal relations derived. These relations link the relevant concept frames together in chronological order, providing the knowledge needed to fulfill users' queries, e.g., in question-answering (Q&A) applications.
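    A minimal sketch of the kind of frame structure the abstract describes (the field names, the relation set, and the chronological link are assumptions for illustration):

      from dataclasses import dataclass
      from enum import Enum
      from typing import Optional

      class TemporalRelation(Enum):
          # Relative relations correlate an action time with a reference time;
          # absolute relations anchor directly to a calendar date.
          BEFORE = "before"
          AFTER = "after"
          SAME = "same"
          ABSOLUTE = "absolute"

      @dataclass
      class TemporalConceptFrame:
          verb: str                     # situation-bearing verb of the sentence
          concept_type: str             # type of temporal concept (TTC)
          action_time: str              # extracted time expression
          relation: TemporalRelation
          reference_time: Optional[str] = None
          next_frame: Optional["TemporalConceptFrame"] = None  # chronological link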
  3. Wei, F.; Li, W.; Lu, Q.; He, Y.: Applying two-level reinforcement ranking in query-oriented multidocument summarization (2009) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 3120) [ClassicSimilarity], result of:
              0.11893375 = score(doc=3120,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 3120, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3120)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Liu, Y.; Li, W.; Huang, Z.; Fang, Q.: A fast method based on multiple clustering for name disambiguation in bibliographic citations (2015) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 1672) [ClassicSimilarity], result of:
              0.11893375 = score(doc=1672,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 1672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1672)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Xiang, R.; Chersoni, E.; Lu, Q.; Huang, C.-R.; Li, W.; Long, Y.: Lexical data augmentation for sentiment analysis (2021) 0.03
    0.029733438 = product of:
      0.059466876 = sum of:
        0.059466876 = product of:
          0.11893375 = sum of:
            0.11893375 = weight(_text_:q in 392) [ClassicSimilarity], result of:
              0.11893375 = score(doc=392,freq=2.0), product of:
                0.32872224 = queryWeight, product of:
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.050191253 = queryNorm
                0.3618062 = fieldWeight in 392, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5493927 = idf(docFreq=171, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=392)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Wei, F.; Li, W.; Liu, S.: iRANK: a rank-learn-combine framework for unsupervised ensemble ranking (2010) 0.01
    0.0124760615 = product of:
      0.024952123 = sum of:
        0.024952123 = product of:
          0.09980849 = sum of:
            0.09980849 = weight(_text_:authors in 3472) [ClassicSimilarity], result of:
              0.09980849 = score(doc=3472,freq=6.0), product of:
                0.22881259 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.050191253 = queryNorm
                0.43620193 = fieldWeight in 3472, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3472)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    The authors address the problem of unsupervised ensemble ranking. Traditional approaches either combine multiple ranking criteria into a unified representation to obtain an overall ranking score or utilize rank fusion or aggregation techniques to combine the ranking results. Beyond these combine-then-rank and rank-then-combine approaches, the authors propose a novel rank-learn-combine ranking framework, called Interactive Ranking (iRANK), which allows two base rankers to teach each other before combination by providing their own ranking results as feedback to the other, boosting the ranking performance. This mutual ranking refinement continues until the two base rankers can no longer learn from each other. The overall performance is improved by the enhancement of the base rankers through the mutual learning mechanism. The authors further design two ranking refinement strategies to use the feedback efficiently and effectively, based on reasonable assumptions and rational analysis. Although iRANK is applicable to many applications, as a case study the authors apply the framework to the sentence ranking problem in query-focused summarization and evaluate its effectiveness on the DUC 2005 and 2006 data sets. The results are encouraging, with consistent and promising improvements.
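    The rank-learn-combine loop reads naturally as pseudocode. The sketch below is an assumption-laden illustration, not the paper's algorithm: it takes score-based rankers that accept the other ranker's scores as feedback, iterates until the scores stop changing, and combines by simple interpolation (the paper's two refinement strategies are more specific).

      def irank(items, ranker_a, ranker_b, alpha=0.5, max_rounds=10, tol=1e-6):
          # ranker_*: callable(item, feedback_scores) -> score, where
          # feedback_scores is the other ranker's current scores (None initially)
          scores_a = {x: ranker_a(x, None) for x in items}
          scores_b = {x: ranker_b(x, None) for x in items}
          for _ in range(max_rounds):
              new_a = {x: ranker_a(x, scores_b) for x in items}   # learn from B
              new_b = {x: ranker_b(x, scores_a) for x in items}   # learn from A
              delta = max(abs(new_a[x] - scores_a[x]) +
                          abs(new_b[x] - scores_b[x]) for x in items)
              scores_a, scores_b = new_a, new_b
              if delta < tol:  # the rankers can no longer learn from each other
                  break
          # Combine the refined base rankers into the final ranking
          combined = {x: alpha * scores_a[x] + (1 - alpha) * scores_b[x]
                      for x in items}
          return sorted(items, key=combined.__getitem__, reverse=True)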