Search (6 results, page 1 of 1)

  • author_ss:"Chen, Z."
  1. Shen, D.; Yang, Q.; Chen, Z.: Noise reduction through summarization for Web-page classification (2007) 0.05
    0.045006108 = product of:
      0.13501832 = sum of:
        0.13501832 = product of:
          0.27003664 = sum of:
            0.27003664 = weight(_text_:page in 953) [ClassicSimilarity], result of:
              0.27003664 = score(doc=953,freq=14.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.97962785 = fieldWeight in 953, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.046875 = fieldNorm(doc=953)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
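
    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain tree. As a quick sanity check of the arithmetic, here is a minimal sketch that recomputes the listed score from the leaf values; the variable names are mine, and the formulas are just the products visible in the tree itself (fieldWeight = tf x idf x fieldNorm, queryWeight = idf x queryNorm, times the two coord factors).

      import math

      # Leaf values copied from the explain tree for result 1 (doc 953).
      freq = 14.0              # termFreq of "page" in the document
      idf = 5.5854197          # idf(docFreq=450, maxDocs=44218)
      query_norm = 0.049352113
      field_norm = 0.046875    # index-time length normalization

      tf = math.sqrt(freq)                  # ClassicSimilarity: tf = sqrt(freq)
      field_weight = tf * idf * field_norm  # 0.97962785 in the tree
      query_weight = idf * query_norm       # 0.27565226 in the tree
      raw = field_weight * query_weight     # 0.27003664 = weight(_text_:page)
      score = raw * (1 / 2) * (1 / 3)       # coord(1/2) and coord(1/3)
      print(score)                          # ~0.045006108, the listed score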
    
    Abstract
    Due to the large variety of noisy information embedded in Web pages, Web-page classification is much more difficult than pure-text classification. In this paper, we propose to improve Web-page classification performance by removing the noise through summarization techniques. We first give empirical evidence that ideal Web-page summaries generated by human editors can indeed improve the performance of Web-page classification algorithms. We then put forward a new Web-page summarization algorithm based on Web-page layout and evaluate it, along with several other state-of-the-art text summarization algorithms, on the LookSmart Web directory. Experimental results show that classification algorithms (NB or SVM) augmented by any of the summarization approaches achieve an improvement of more than 5.0% over pure-text-based classification algorithms. We further introduce an ensemble method to combine the different summarization algorithms; the ensemble achieves an improvement of more than 12.0% over pure-text-based methods.
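
    As a rough illustration of the pipeline this abstract describes (summarize first, then classify), here is a minimal sketch using scikit-learn. The lead-sentence summarizer and the toy corpus are placeholders of mine; they stand in for the paper's layout-based summarizer and its LookSmart data.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      def lead_summary(page_text: str, n_sentences: int = 3) -> str:
          """Placeholder summarizer: keep the first few sentences.
          The paper instead picks salient content from Web-page layout."""
          return ". ".join(page_text.split(". ")[:n_sentences])

      # Placeholder corpus: page texts with category labels.
      pages = ["Sports news page. Latest scores. Ad banner. Login box.",
               "Cooking recipes page. How to bake bread. Ad banner. Login box."]
      labels = ["sports", "food"]

      # NB over TF-IDF features, trained on summaries rather than full pages
      # to cut boilerplate noise, mirroring the paper's NB baseline.
      clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
      clf.fit([lead_summary(p) for p in pages], labels)
      print(clf.predict([lead_summary("Championship match report. Latest scores.")]))
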
  2. Ren, P.; Chen, Z.; Ma, J.; Zhang, Z.; Si, L.; Wang, S.: Detecting temporal patterns of user queries (2017) 0.04
    0.040800452 = product of:
      0.12240136 = sum of:
        0.12240136 = weight(_text_:query in 3315) [ClassicSimilarity], result of:
          0.12240136 = score(doc=3315,freq=6.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.5336404 = fieldWeight in 3315, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.046875 = fieldNorm(doc=3315)
      0.33333334 = coord(1/3)
    
    Abstract
    Query classification is an important part of exploring the characteristics of web queries. Existing studies are mainly based on Broder's classification scheme and classify user queries into navigational, informational, and transactional categories according to users' information needs. In this article, we present a novel classification scheme from the perspective of queries' temporal patterns: inherent time-series patterns in a query's search volume that reflect how its popularity evolves over time. By analyzing these temporal patterns, search engines can understand users' search intents more deeply and thus improve performance. We extract three groups of features from the queries' search-volume time series and use a support vector machine (SVM) to automatically detect the temporal patterns of user queries. Extensive experiments on the Million Query Track data sets of the Text REtrieval Conference (TREC) demonstrate the effectiveness of our approach.
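
    As a sketch of the detection setup this abstract describes, the snippet below derives a few hand-crafted features from a query's search-volume time series and feeds them to an SVM. The three features here (trend, weekly autocorrelation, burstiness) and the synthetic series are illustrative assumptions, not the paper's feature groups or data.

      import numpy as np
      from sklearn.svm import SVC

      def volume_features(series: np.ndarray) -> list[float]:
          """Toy features over a search-volume series: linear trend,
          week-scale autocorrelation, and a burstiness (spike) ratio."""
          t = np.arange(len(series))
          trend = np.polyfit(t, series, 1)[0]              # slope
          ac = np.corrcoef(series[:-7], series[7:])[0, 1]  # weekly cycle
          burst = series.max() / (series.mean() + 1e-9)    # spike ratio
          return [trend, ac, burst]

      rng = np.random.default_rng(0)
      t = np.arange(56)
      spiky = np.where(t == 28, 100.0, 1.0) + rng.random(56)          # one-off event
      seasonal = 10 + 5 * np.sin(2 * np.pi * t / 7) + rng.random(56)  # weekly pattern

      X = [volume_features(s) for s in (spiky, seasonal, spiky * 2, seasonal + 5)]
      y = ["event", "periodic", "event", "periodic"]
      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict([volume_features(seasonal * 1.5)]))  # expect 'periodic'
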
  3. Shen, D.; Chen, Z.; Yang, Q.; Zeng, H.J.; Zhang, B.; Lu, Y.; Ma, W.Y.: Web page classification through summarization (2004) 0.03
    0.02835118 = product of:
      0.08505354 = sum of:
        0.08505354 = product of:
          0.17010708 = sum of:
            0.17010708 = weight(_text_:page in 4132) [ClassicSimilarity], result of:
              0.17010708 = score(doc=4132,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.6171075 = fieldWeight in 4132, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4132)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  4. Chen, Z.; Meng, X.; Fowler, R.H.; Zhu, B.: Real-time adaptive feature and document learning for Web search (2001) 0.03
    0.027761191 = product of:
      0.08328357 = sum of:
        0.08328357 = weight(_text_:query in 5209) [ClassicSimilarity], result of:
          0.08328357 = score(doc=5209,freq=4.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.3630963 = fieldWeight in 5209, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5209)
      0.33333334 = coord(1/3)
    
    Abstract
    Chen et al. report on the design of FEATURES, a Web search engine with adaptive features based on minimal relevance feedback. Rather than developing user profiles from previous searcher activity at either the server or the client, or updating indexes after a search completes, FEATURES updates its index and user-characterization files during query modification, on retrieval from a general-purpose search engine. Indexing terms relevant to a query are defined as the union of all terms assigned to documents retrieved by the initial search run and are used to build a vector space model over this retrieved set. The top ten weighted terms are presented to the user for a relevant/non-relevant choice, which is used to modify the term weights. Documents are chosen if their summed term weights exceed a threshold. A user's evaluation of a top-ten-ranked document as non-relevant decreases these term weights; a positive judgement increases them. A new ordering of the retrieved set then generates new display lists of terms and documents. Precision improved in a test on AltaVista searches.
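
    A minimal sketch of the feedback loop described above, under simplifying assumptions of mine: term weights start as counts over the retrieved set, the user's relevant/non-relevant choices scale those weights, and documents are kept when their summed weights clear a threshold. The names, the update factor, and the toy documents are not from the FEATURES implementation.

      from collections import Counter

      docs = {  # placeholder retrieved set: doc id -> indexing terms
          "d1": ["adaptive", "search", "feedback", "web"],
          "d2": ["image", "viewer", "web"],
          "d3": ["adaptive", "feedback", "learning"],
      }

      # Term weights over the union of all terms in the retrieved set.
      weights = Counter(t for terms in docs.values() for t in terms)

      def apply_feedback(judgements: dict[str, bool], factor: float = 1.5) -> None:
          """Scale a term's weight up if judged relevant, down otherwise."""
          for term, relevant in judgements.items():
              weights[term] *= factor if relevant else 1 / factor

      def rank(threshold: float = 2.0) -> list[str]:
          """Keep documents whose summed term weights clear the threshold."""
          scores = {d: sum(weights[t] for t in terms) for d, terms in docs.items()}
          return [d for d, s in sorted(scores.items(), key=lambda kv: -kv[1])
                  if s >= threshold]

      # The top weighted terms would be shown to the user; simulate a choice:
      apply_feedback({"adaptive": True, "feedback": True, "viewer": False})
      print(rank())  # retrieved set re-ordered under the updated weights
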
  5. Chen, Z.; Fu, B.: On the complexity of Rocchio's similarity-based relevance feedback algorithm (2007) 0.02
    0.019630127 = product of:
      0.05889038 = sum of:
        0.05889038 = weight(_text_:query in 578) [ClassicSimilarity], result of:
          0.05889038 = score(doc=578,freq=2.0), product of:
            0.22937049 = queryWeight, product of:
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.049352113 = queryNorm
            0.25674784 = fieldWeight in 578, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6476326 = idf(docFreq=1151, maxDocs=44218)
              0.0390625 = fieldNorm(doc=578)
      0.33333334 = coord(1/3)
    
    Abstract
    Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformulation methods in information retrieval, is essentially an adaptive algorithm that learns from examples to search for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d**2 (log d + log n)) over the discretized vector space {0, ..., n-1}**d when the inner product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier (q, 0) over {0, ..., n-1}**d can be improved to at most 1 + 2k(n-1)(log d + log(n-1)), where k is the number of nonzero components in q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound of Omega((d choose 2) log n) on its learning complexity over the Boolean vector space {0,1}**d.
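
    For orientation, this is the update rule the paper analyzes, in its textbook form: the query vector moves toward the centroid of relevant examples and away from non-relevant ones. The sketch shows only the standard Rocchio formula, not the paper's complexity results, and the alpha/beta/gamma values are conventional defaults rather than anything from the source.

      import numpy as np

      def rocchio_update(q: np.ndarray,
                         relevant: list[np.ndarray],
                         nonrelevant: list[np.ndarray],
                         alpha: float = 1.0,
                         beta: float = 0.75,
                         gamma: float = 0.15) -> np.ndarray:
          """Textbook Rocchio: q' = alpha*q + beta*centroid(R) - gamma*centroid(N)."""
          q_new = alpha * q
          if relevant:
              q_new = q_new + beta * np.mean(relevant, axis=0)
          if nonrelevant:
              q_new = q_new - gamma * np.mean(nonrelevant, axis=0)
          return q_new

      # Toy run over a 4-term vector space.
      q = np.array([1.0, 0.0, 0.0, 0.0])
      q = rocchio_update(q,
                         relevant=[np.array([1.0, 1.0, 0.0, 0.0])],
                         nonrelevant=[np.array([0.0, 0.0, 1.0, 1.0])])
      print(q)  # weight shifts toward relevant terms, away from non-relevant
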
  6. Chen, Z.; Wenyin, L.; Zhang, F.; Li, M.; Zhang, H.: Web mining for Web image retrieval (2001) 0.01
    0.01417559 = product of:
      0.04252677 = sum of:
        0.04252677 = product of:
          0.08505354 = sum of:
            0.08505354 = weight(_text_:page in 6521) [ClassicSimilarity], result of:
              0.08505354 = score(doc=6521,freq=2.0), product of:
                0.27565226 = queryWeight, product of:
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.049352113 = queryNorm
                0.30855376 = fieldWeight in 6521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.5854197 = idf(docFreq=450, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6521)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The popularity of digital images is increasing rapidly thanks to improving digital imaging technologies and the convenient availability afforded by the Internet. However, finding user-intended images on the Internet is nontrivial, mainly because Web images are usually not annotated with semantic descriptors. In this article, we present an effective approach to, and a prototype system for, image retrieval from the Internet using Web mining. The system can also serve as a Web image search engine. One key idea in the approach is to extract text from the Web pages to semantically describe the images; this text description is then combined with other low-level image features in the image similarity assessment. Another main contribution of this work is that we apply data mining to the log of users' feedback to improve image retrieval performance in three respects. First, the accuracy of the document space model of image representation obtained from the Web pages is improved by removing clutter and irrelevant text. Second, a user space model of users' representation of images is constructed and combined with the document space model to eliminate the mismatch between the page author's expression and the user's understanding and expectation. Third, the relationship between low-level and high-level features is discovered, which is extremely useful for assigning weights to the low-level features in the similarity assessment.
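
    A minimal sketch of the combined similarity idea described above: text extracted from the hosting page describes each image, and the final score mixes text similarity with a low-level visual similarity. The toy index, the color-histogram stand-in, and the mixing weight are assumptions of mine; in the paper the weighting would be informed by mining the feedback logs.

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Placeholder index: surrounding-page text plus a low-level visual
      # feature vector (a toy 3-bin color histogram) per image.
      page_text = ["red sports car on a race track",
                   "sunset over the ocean with orange sky"]
      visual = np.array([[0.8, 0.1, 0.1],
                         [0.5, 0.3, 0.2]])

      vec = TfidfVectorizer().fit(page_text)
      text_matrix = vec.transform(page_text)

      def search(query: str, query_hist: np.ndarray, w_text: float = 0.7):
          """Score = w_text * text similarity + (1 - w_text) * visual similarity."""
          s_text = cosine_similarity(vec.transform([query]), text_matrix)[0]
          s_vis = cosine_similarity(query_hist.reshape(1, -1), visual)[0]
          return w_text * s_text + (1 - w_text) * s_vis

      print(search("red car", np.array([0.9, 0.05, 0.05])))  # image 0 should score higher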