Search (8 results, page 1 of 1)

  • author_ss:"Chen, Z."
  1. Chen, Z.; Meng, X.; Fowler, R.H.; Zhu, B.: Real-time adaptive feature and document learning for Web search (2001) 0.15
    0.15007861 = product of:
      0.20010482 = sum of:
        0.10904834 = weight(_text_:vector in 5209) [ClassicSimilarity], result of:
          0.10904834 = score(doc=5209,freq=2.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.3557295 = fieldWeight in 5209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5209)
        0.07161439 = weight(_text_:space in 5209) [ClassicSimilarity], result of:
          0.07161439 = score(doc=5209,freq=2.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.28827736 = fieldWeight in 5209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5209)
        0.019442094 = product of:
          0.03888419 = sum of:
            0.03888419 = weight(_text_:model in 5209) [ClassicSimilarity], result of:
              0.03888419 = score(doc=5209,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.21242073 = fieldWeight in 5209, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5209)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
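    Reading the tree: each leaf contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm with tf = sqrt(termFreq); the leaf contributions are summed and scaled by the coord factor. A minimal Python sketch reproducing the score above, using only the constants the tree displays:

    import math

    # All constants below are copied from the explanation tree for doc 5209.
    query_norm = 0.047605187

    def term_score(freq, idf, field_norm):
        # ClassicSimilarity leaf: queryWeight * fieldWeight
        tf = math.sqrt(freq)                  # tf(freq=2.0) = 1.4142135
        query_weight = idf * query_norm       # idf * queryNorm
        field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
        return query_weight * field_weight

    vector = term_score(2.0, 6.439392, 0.0390625)       # ~0.10904834
    space = term_score(2.0, 5.2183776, 0.0390625)       # ~0.07161439
    model = term_score(2.0, 3.845226, 0.0390625) * 0.5  # inner coord(1/2): ~0.01944209
    print(0.75 * (vector + space + model))              # coord(3/4): ~0.15007861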
    
    Abstract
    Chen et al. report on the design of FEATURES, a web search engine with adaptive features based on minimal relevance feedback. Rather than developing user profiles from previous searcher activity at the server or the client, or updating indexes after search completion, FEATURES updates its index and user-characterization files during query modification, on retrieval from a general-purpose search engine. Indexing terms relevant to a query are defined as the union of all terms assigned to documents retrieved by the initial search run and are used to build a vector space model on this retrieved set. The top ten weighted terms are presented to the user for a relevant/non-relevant judgment, which is used to modify the term weights: judging the top ten ranked documents non-relevant decreases these term weights, and a positive judgment increases them. Documents are selected if their summed term weights exceed a threshold. A new ordering of the retrieved set then generates new display lists of terms and documents. Precision improved in a test on AltaVista searches.
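    As a rough illustration of the loop the abstract describes (judge terms and documents, adjust weights, re-threshold, re-rank), a minimal Python sketch; the weight increment, the threshold, and all names are assumptions for illustration, not details from the paper:

    def feedback_round(term_weights, doc_terms, judgments, delta=0.1, threshold=1.0):
        # One FEATURES-style round: adjust term weights from the user's
        # relevant/non-relevant judgments, then keep documents whose summed
        # term weights clear the threshold. delta and threshold are invented.
        for doc, relevant in judgments.items():       # judged top-ten documents
            for term in doc_terms[doc]:
                term_weights[term] += delta if relevant else -delta
        selected = [doc for doc, terms in doc_terms.items()
                    if sum(term_weights.get(t, 0.0) for t in terms) > threshold]
        # Re-rank by summed term weight to build the new display list.
        return sorted(selected, key=lambda d: -sum(term_weights.get(t, 0.0)
                                                   for t in doc_terms[d]))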
  2. Chen, Z.; Fu, B.: On the complexity of Rocchio's similarity-based relevance feedback algorithm (2007) 0.13
    0.12774785 = product of:
      0.2554957 = sum of:
        0.15421765 = weight(_text_:vector in 578) [ClassicSimilarity], result of:
          0.15421765 = score(doc=578,freq=4.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.5030775 = fieldWeight in 578, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.0390625 = fieldNorm(doc=578)
        0.101278044 = weight(_text_:space in 578) [ClassicSimilarity], result of:
          0.101278044 = score(doc=578,freq=4.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.40768576 = fieldWeight in 578, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0390625 = fieldNorm(doc=578)
      0.5 = coord(2/4)
    
    Abstract
    Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformulation methods in information retrieval, is essentially an adaptive algorithm that learns from examples to search for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d**2(log d + log n)) over the discretized vector space {0, ..., n-1}**d when the inner product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier (q, θ) over {0, ..., n-1}**d can be improved to, at most, 1 + 2k(n-1)(log d + log(n-1)), where k is the number of nonzero components in q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound of Omega((d choose 2) log n) on its learning complexity over the Boolean vector space {0,1}**d.
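    For orientation, the classic Rocchio update that underlies the algorithm analyzed here, in a minimal sketch; the alpha/beta/gamma weighting is the textbook formulation, shown for context only and not taken from this article:

    import numpy as np

    def rocchio_update(q, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        # Textbook Rocchio relevance feedback: move the query vector toward
        # the centroid of relevant documents and away from non-relevant ones.
        q_new = alpha * np.asarray(q, dtype=float)
        if len(relevant):
            q_new += beta * np.mean(relevant, axis=0)
        if len(nonrelevant):
            q_new -= gamma * np.mean(nonrelevant, axis=0)
        return q_new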
  3. Chen, Z.; Wenyin, L.; Zhang, F.; Li, M.; Zhang, H.: Web mining for Web image retrieval (2001) 0.08
    0.078857236 = product of:
      0.15771447 = sum of:
        0.12403977 = weight(_text_:space in 6521) [ClassicSimilarity], result of:
          0.12403977 = score(doc=6521,freq=6.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.49931106 = fieldWeight in 6521, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6521)
        0.0336747 = product of:
          0.0673494 = sum of:
            0.0673494 = weight(_text_:model in 6521) [ClassicSimilarity], result of:
              0.0673494 = score(doc=6521,freq=6.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.36792353 = fieldWeight in 6521, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6521)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The popularity of digital images is rapidly increasing due to improving digital imaging technologies and the convenient availability facilitated by the Internet. However, finding user-intended images on the Internet is nontrivial, chiefly because Web images are usually not annotated with semantic descriptors. In this article, we present an effective approach to, and a prototype system for, image retrieval from the Internet using Web mining. The system can also serve as a Web image search engine. One of the key ideas in the approach is to extract the text on the Web pages to semantically describe the images. The text description is then combined with other low-level image features in the image similarity assessment. Another main contribution of this work is that we apply data mining to the log of users' feedback to improve image retrieval performance in three aspects. First, the accuracy of the document space model of image representation obtained from the Web pages is improved by removing clutter and irrelevant text information. Second, the user space model of users' representation of images is constructed and combined with the document space model to eliminate the mismatch between the page author's expression and the user's understanding and expectation. Third, the relationship between low-level and high-level features is discovered, which is extremely useful for assigning the low-level features' weights in the similarity assessment.
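    A minimal sketch of the kind of weighted text-plus-visual similarity the abstract describes; the cosine measure and the fixed weights here are assumptions, since the paper derives its weights by mining user-feedback logs:

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def image_similarity(text_q, text_d, visual_q, visual_d,
                         w_text=0.6, w_visual=0.4):
        # Hypothetical weighted combination of text-based and low-level
        # visual similarity; the weights are illustrative, not learned.
        return (w_text * cosine(text_q, text_d)
                + w_visual * cosine(visual_q, visual_d))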
  4. Ren, P.; Chen, Z.; Ma, J.; Zhang, Z.; Si, L.; Wang, S.: Detecting temporal patterns of user queries (2017) 0.03
    0.0327145 = product of:
      0.130858 = sum of:
        0.130858 = weight(_text_:vector in 3315) [ClassicSimilarity], result of:
          0.130858 = score(doc=3315,freq=2.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.4268754 = fieldWeight in 3315, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.046875 = fieldNorm(doc=3315)
      0.25 = coord(1/4)
    
    Abstract
    Query classification is an important part of exploring the characteristics of web queries. Existing studies are mainly based on Broder's classification scheme and classify user queries into navigational, informational, and transactional categories according to users' information needs. In this article, we present a novel classification scheme from the perspective of queries' temporal patterns. A query's temporal pattern is the inherent time-series pattern of its search volume, reflecting how the query's popularity evolves over time. By analyzing the temporal patterns of queries, search engines can understand users' search intents more deeply and thus improve performance. Furthermore, we extract three groups of features from the queries' search-volume time series and use a support vector machine (SVM) to automatically detect the temporal patterns of user queries. Extensive experiments on the Million Query Track data sets of the Text REtrieval Conference (TREC) demonstrate the effectiveness of our approach.
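    A minimal sketch of the detection step, assuming scikit-learn and toy stand-in features; the article's actual three feature groups are derived from each query's search-volume time series, but the features, data, and labels below are invented for illustration:

    import numpy as np
    from sklearn.svm import SVC

    def volume_features(series):
        # Toy stand-ins for the article's feature groups: overall level,
        # linear trend, and burstiness of a query's search volume.
        s = np.asarray(series, dtype=float)
        trend = np.polyfit(np.arange(len(s)), s, 1)[0]
        return [s.mean(), trend, s.std() / (s.mean() + 1e-9)]

    volume_series = [[3, 4, 5, 9, 30, 8, 4], [5, 5, 6, 5, 6, 5, 6]]  # toy data
    labels = ["bursty", "stable"]
    X = np.array([volume_features(s) for s in volume_series])
    clf = SVC(kernel="rbf").fit(X, labels)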
  5. Chen, Z.: ¬A conceptual model for storage and retrieval of short scientific texts (1993) 0.01
    0.013747636 = product of:
      0.054990545 = sum of:
        0.054990545 = product of:
          0.10998109 = sum of:
            0.10998109 = weight(_text_:model in 2715) [ClassicSimilarity], result of:
              0.10998109 = score(doc=2715,freq=4.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.60081655 = fieldWeight in 2715, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2715)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    A conceptual model for integrating short scientific texts is described, which extends classical text storage and retrieval. A brief comparison with related approaches (such as case-based reasoning and classification-based reasoning) is also provided
  6. Lee, M.K.O.; Cheung, C.M.K.; Chen, Z.: Understanding user acceptance of multimedia messaging services : an empirical study (2007) 0.01
    0.0082485825 = product of:
      0.03299433 = sum of:
        0.03299433 = product of:
          0.06598866 = sum of:
            0.06598866 = weight(_text_:model in 622) [ClassicSimilarity], result of:
              0.06598866 = score(doc=622,freq=4.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.36048993 = fieldWeight in 622, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.046875 = fieldNorm(doc=622)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Multimedia Messaging Services (MMS) is a new medium that enriches people's personal communication with their business partners, friends, or family. Following the success of Short Message Services, MMS has the potential to be the next mobile-commerce killer application, useful and popular among consumers; however, little is known about why people intend to accept and use it. Building upon motivational theory and media richness theory, the research model captures extrinsic motivators (e.g., perceived usefulness and perceived ease of use), intrinsic motivators (e.g., perceived enjoyment), and perceived media richness to explain users' intention to use MMS. An online survey was conducted and 207 completed questionnaires were collected. By integrating the motivation and media richness perspectives, the research model explains 65% of the variance. In addition, the results provide strong support for the existing theoretical links as well as for those newly hypothesized in this study. Implications of the current investigation for research and practice are provided.
  7. Chen, Z.; Huang, Y.; Tian, J.; Liu, X.; Fu, K.; Huang, T.: Joint model for subsentence-level sentiment analysis with Markov logic (2015) 0.01
    0.006873818 = product of:
      0.027495272 = sum of:
        0.027495272 = product of:
          0.054990545 = sum of:
            0.054990545 = weight(_text_:model in 2210) [ClassicSimilarity], result of:
              0.054990545 = score(doc=2210,freq=4.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.30040827 = fieldWeight in 2210, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2210)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Sentiment analysis mainly focuses on the study of one's opinions that express positive or negative sentiments. With the explosive growth of web documents, sentiment analysis is becoming a hot topic in both academic research and system design. Fine-grained sentiment analysis is traditionally solved with a 2-step strategy, which results in cascade errors. Although joint models, such as joint sentiment/topic and maximum entropy (MaxEnt)/latent Dirichlet allocation, have been proposed to tackle this problem, they focus on the joint learning of aspects and sentiments and are thus not appropriate for avoiding the cascade errors of sentiment analysis at the sentence or subsentence level. In this article, we present a novel joint fine-grained sentiment analysis framework at the subsentence level with Markov logic. First, we divide the task into 2 separate stages (subjectivity classification and polarity classification). Then, the 2 stages are processed with different feature sets, implemented by local formulas in Markov logic. Finally, global formulas in Markov logic are adopted to realize the interactions of the 2 stages. The joint inference of subjectivity and polarity helps prevent cascade errors. Experiments on a Chinese sentiment data set show that our joint model brings significant improvements.
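    As a toy illustration of why joint inference avoids cascade errors, a sketch that scores (subjectivity, polarity) pairs jointly rather than cascading two classifiers; the scores and the compatibility bonus are invented, and this plain-Python stand-in is not the article's Markov logic implementation:

    import itertools

    def joint_label(subj_scores, pol_scores, bonus=1.0):
        # Pick the (subjectivity, polarity) pair maximizing local scores plus
        # a global compatibility term, analogous to a global formula.
        best, best_score = None, float("-inf")
        for subj, pol in itertools.product(subj_scores, pol_scores):
            score = subj_scores[subj] + pol_scores[pol]
            if subj == "objective" and pol == "neutral":
                score += bonus  # global constraint: objective text carries no polarity
            if score > best_score:
                best, best_score = (subj, pol), score
        return best

    print(joint_label({"subjective": 0.4, "objective": 0.6},
                      {"positive": 0.5, "negative": 0.1, "neutral": 0.4}))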
  8. Xu, Y.C.; Chen, Z.: Relevance judgment : what do information users consider beyond topicality? (2006) 0.01
    0.006804733 = product of:
      0.027218932 = sum of:
        0.027218932 = product of:
          0.054437865 = sum of:
            0.054437865 = weight(_text_:model in 5073) [ClassicSimilarity], result of:
              0.054437865 = score(doc=5073,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.29738903 = fieldWeight in 5073, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5073)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    How does an information user perceive a document as relevant? The literature on relevance has identified numerous factors affecting such a judgment. Taking a cognitive approach, this study focuses on the criteria users employ in making relevance judgments beyond topicality. On the basis of Grice's theory of communication, we propose a five-factor model of relevance: topicality, novelty, reliability, understandability, and scope. Data are collected through a semicontrolled survey and analyzed following a psychometric procedure. Topicality and novelty are found to be the two essential relevance criteria. Understandability and reliability are also significant, but scope is not. The theoretical and practical implications of this study are discussed.