Search (5 results, page 1 of 1)

  • author_ss:"Wan, X."
  1. Wan, X.; Liu, F.: Are all literature citations equally important? : automatic citation strength estimation and its applications (2014) 0.02
    0.024089992 = product of:
      0.048179984 = sum of:
        0.048179984 = sum of:
          0.010739701 = weight(_text_:a in 1350) [ClassicSimilarity], result of:
            0.010739701 = score(doc=1350,freq=14.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.20223314 = fieldWeight in 1350, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=1350)
          0.037440285 = weight(_text_:22 in 1350) [ClassicSimilarity], result of:
            0.037440285 = score(doc=1350,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 1350, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1350)
      0.5 = coord(1/2)
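    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) broken down term by term: tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), each clause scores queryWeight * fieldWeight, and the final value is scaled by coord(1/2) because one of two top-level clauses matched. A minimal Python sketch that reproduces the numbers shown, taking queryNorm and fieldNorm as the engine reports them:

        import math

        def tf(freq):
            return math.sqrt(freq)

        def idf(doc_freq, max_docs):
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        QUERY_NORM = 0.046056706   # reported by the engine
        FIELD_NORM = 0.046875      # per-field length normalization, as reported

        def clause_score(freq, doc_freq, max_docs=44218):
            query_weight = idf(doc_freq, max_docs) * QUERY_NORM
            field_weight = tf(freq) * idf(doc_freq, max_docs) * FIELD_NORM
            return query_weight * field_weight

        score_a  = clause_score(freq=14, doc_freq=37942)  # ~0.0107397 for _text_:a
        score_22 = clause_score(freq=2,  doc_freq=3622)   # ~0.0374403 for _text_:22
        print(0.5 * (score_a + score_22))                 # coord(1/2) * sum ~ 0.0240900

    The same arithmetic accounts for the remaining result scores below; only freq, docFreq, and fieldNorm change per entry.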
    
    Abstract
    Literature citation analysis plays a very important role in bibliometrics and scientometrics, underpinning measures such as the Science Citation Index (SCI) impact factor and the h-index. Existing citation analysis methods assume that all citations in a paper are equally important and simply count the number of citations. Here we argue that the citations in a paper are not equally important: some citations matter more than others. We use a strength value to assess the importance of each citation and propose a regression method with a few useful features for automatically estimating the strength value of each citation. Evaluation results on a manually labeled data set in the computer science field show that the estimated values correlate well with human-labeled values. We further apply the estimated citation strength values to evaluating paper influence and author influence, and the preliminary evaluation results demonstrate the usefulness of the citation strength values.
    Date
    22. 8.2014 17:12:35
    Type
    a
  2. Wan, X.; Yang, J.; Xiao, J.: Incorporating cross-document relationships between sentences for single document summarizations (2006) 0.02
    0.022235535 = product of:
      0.04447107 = sum of:
        0.04447107 = sum of:
          0.007030784 = weight(_text_:a in 2421) [ClassicSimilarity], result of:
            0.007030784 = score(doc=2421,freq=6.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.13239266 = fieldWeight in 2421, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2421)
          0.037440285 = weight(_text_:22 in 2421) [ClassicSimilarity], result of:
            0.037440285 = score(doc=2421,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.23214069 = fieldWeight in 2421, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2421)
      0.5 = coord(1/2)
    
    Abstract
    Graph-based ranking algorithms have recently been proposed for single-document summarization; such algorithms evaluate the importance of a sentence by recursively exploiting the relationships between sentences in the document. In this paper, we investigate using other related or relevant documents to improve the summarization of a single document with a graph-based ranking algorithm. In addition to the within-document relationships between sentences of the specified document, the proposed approach also takes into account the cross-document relationships between sentences in different documents. We evaluate the performance of the proposed approach on DUC 2002 data with the ROUGE metric, and the results demonstrate that cross-document relationships between sentences in different but related documents can significantly improve the performance of single-document summarization.
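    The abstract describes a PageRank-style iteration over a sentence graph that mixes within-document and cross-document edges. A minimal sketch of that general idea (not the authors' exact model), assuming cosine similarity over bag-of-words sentence vectors:

        from collections import Counter
        import math

        def cosine(a, b):
            ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
            dot = sum(ca[w] * cb[w] for w in ca)
            na = math.sqrt(sum(v * v for v in ca.values()))
            nb = math.sqrt(sum(v * v for v in cb.values()))
            return dot / (na * nb) if na and nb else 0.0

        def rank_sentences(target, related, damping=0.85, iters=50):
            sents = target + related          # one graph over both sentence sets
            n = len(sents)
            w = [[cosine(s, t) if i != j else 0.0
                  for j, t in enumerate(sents)] for i, s in enumerate(sents)]
            row = [sum(r) or 1.0 for r in w]  # row sums for normalization
            score = [1.0 / n] * n
            for _ in range(iters):
                score = [(1 - damping) / n + damping *
                         sum(score[j] * w[j][i] / row[j] for j in range(n))
                         for i in range(n)]
            # keep only the target document's sentences, best first
            return sorted(range(len(target)), key=lambda i: -score[i])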
    Source
    Research and advanced technology for digital libraries : 10th European conference, proceedings / ECDL 2006, Alicante, Spain, September 17 - 22, 2006
    Type
    a
  3. Wan, X.; Yang, J.; Xiao, J.: Towards a unified approach to document similarity search using manifold-ranking of blocks (2008) 0.00
    0.0031642143 = product of:
      0.0063284286 = sum of:
        0.0063284286 = product of:
          0.012656857 = sum of:
            0.012656857 = weight(_text_:a in 2081) [ClassicSimilarity], result of:
              0.012656857 = score(doc=2081,freq=28.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.23833402 = fieldWeight in 2081, product of:
                  5.2915025 = tf(freq=28.0), with freq of:
                    28.0 = termFreq=28.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2081)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Document similarity search (i.e. query by example) aims to retrieve a ranked list of documents similar to a query document in a text corpus or on the Web. Most existing approaches to similarity search first compute the pairwise similarity score between each document and the query using a retrieval function or similarity measure (e.g. cosine), and then rank the documents by those scores. In this paper, we propose a novel retrieval approach based on manifold-ranking of document blocks (i.e. blocks of coherent text about a subtopic) to re-rank a small set of documents initially retrieved by some existing retrieval function. The proposed approach makes full use of the intrinsic global manifold structure of the document blocks by propagating ranking scores between the blocks on a weighted graph. First, the TextTiling algorithm and the VIPS algorithm are employed to segment text documents and web pages, respectively, into blocks. Then, each block is assigned a ranking score by the manifold-ranking algorithm. Lastly, a document obtains its final ranking score by fusing the scores of its blocks. Experimental results on the TDT and ODP data sets demonstrate that the proposed approach significantly improves retrieval performance over baseline approaches. The document block is validated as a better unit than the whole document in the manifold-ranking process.
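    Behind the abstract is the standard manifold-ranking iteration: score propagation on a normalized similarity graph. A minimal sketch assuming a precomputed block-similarity matrix; the segmentation step (TextTiling/VIPS) and the block-to-document fusion step are assumed to happen outside this function:

        import numpy as np

        def manifold_rank(W, y, alpha=0.6, iters=100):
            # W: symmetric block-similarity matrix with a zero diagonal
            # y: initial relevance of each block to the query document
            d = W.sum(axis=1)
            d[d == 0] = 1.0
            S = W / np.sqrt(np.outer(d, d))           # D^(-1/2) W D^(-1/2)
            f = y.astype(float).copy()
            for _ in range(iters):
                f = alpha * (S @ f) + (1 - alpha) * y # propagate, anchored to y
            return f  # per-block scores; a document's score fuses its blocks' scores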
    Type
    a
  4. Guo, L.; Wan, X.: Exploiting syntactic and semantic relationships between terms for opinion retrieval (2012) 0.00
    0.0028703054 = product of:
      0.005740611 = sum of:
        0.005740611 = product of:
          0.011481222 = sum of:
            0.011481222 = weight(_text_:a in 492) [ClassicSimilarity], result of:
              0.011481222 = score(doc=492,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2161963 = fieldWeight in 492, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=492)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Opinion retrieval is the task of finding documents that express an opinion about a given query. A key challenge in opinion retrieval is to capture the query-related opinion score of a document. Existing methods rely mainly on proximity information between opinion terms and query terms to address this challenge. In this study, we propose incorporating the syntactic and semantic information of terms into a probabilistic model to capture the query-related opinion score more accurately. The syntactic tree structure of a sentence is used to evaluate, with a tree kernel method, the probability that an opinion term modifies a noun within the sentence. Moreover, WordNet and a probabilistic topic model are used to evaluate the semantic relatedness between any noun and the given query. The experimental results over standard TREC baselines on the benchmark BLOG06 collection demonstrate the effectiveness of our proposed method in comparison with the proximity-based method and other baselines.
    Type
    a
  5. Wan, X.; Liu, F.: WL-index : leveraging citation mention number to quantify an individual's scientific impact (2014) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 1549) [ClassicSimilarity], result of:
              0.00894975 = score(doc=1549,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 1549, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1549)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A number of bibliometric indices have been developed to evaluate an individual's scientific impact; the most popular are the h-index and its variants. However, existing bibliometric indices are computed from the number of citations received by each article and do not consider the frequency with which individual citations are mentioned within an article. We use "citation mention" to denote a unique occurrence of a cited reference mentioned in the citing article; thus, some citations may have more than one mention in an article. According to our analysis of the ACL Anthology Network corpus in the natural language processing field, more than 40% of cited references are mentioned twice or more in the corresponding citing articles. We argue that citation mention is preferable for representing the citation relationships between articles; that is, a reference article mentioned m times in the citing article is considered to have received m citations rather than one. Based on this assumption, we revise the h-index and propose a new bibliometric index, the WL-index, to evaluate an individual's scientific impact. According to our empirical analysis, the proposed WL-index more accurately discriminates between program committee chairs of reputable conferences and ordinary authors.
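    The abstract states the key assumption (a reference mentioned m times counts as m citations) but not the exact formula; a plausible sketch that plugs mention-weighted counts into the standard h-index definition, not necessarily the authors' exact index:

        def h_index(citation_counts):
            counts = sorted(citation_counts, reverse=True)
            return sum(1 for i, c in enumerate(counts) if c >= i + 1)

        def wl_index(papers):
            # papers: one list per paper of per-citing-article mention counts,
            # e.g. [2, 1, 3] = three citing articles, six citation mentions
            return h_index([sum(mentions) for mentions in papers])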
    Type
    a
