Search (8 results, page 1 of 1)

  • author_ss:"Li, H."
  1. Li, H.; Bhowmick, S.S.; Sun, A.: AffRank: affinity-driven ranking of products in online social rating networks (2011) 0.00
    
    Abstract
     Large online social rating networks (e.g., Epinions, Blippr), containing information on various types of products, have recently come into being. Typically, each product in these networks is associated with a group of members who have provided ratings and comments on it. These people form a product community. A potential member can join a product community by giving a new rating to the product. We refer to this ability of a product community to "attract" new members as product affinity. A ranked list of products based on product affinity is of great importance for policy making, marketing research, online advertisement, and other applications. In this article, we identify and analyze an array of features that affect product affinity and propose a novel model, called AffRank, that uses these features to predict the future rank of products according to their affinities. Evaluated on two real-world datasets, we demonstrate the effectiveness and superior prediction quality of AffRank compared with baseline methods. Our experiments show that features such as affinity rank history, affinity evolution distance, and average rating are the most important factors affecting the future rank of products. At the same time, interestingly, traditional community features (e.g., community size, member connectivity, and social context) have negligible influence on product affinities.
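     As a rough illustration of the feature-driven rank prediction the abstract describes, the following sketch scores products by a weighted combination of the three features the experiments found most important. The linear form, the weights, and all feature values are illustrative assumptions, not the actual AffRank model:

       import numpy as np

       # Hypothetical feature rows per product:
       # [affinity rank history, affinity evolution distance, average rating]
       features = {
           "camera": np.array([0.80, 0.10, 4.5]),
           "phone":  np.array([0.30, 0.60, 3.9]),
           "laptop": np.array([0.50, 0.20, 4.2]),
       }
       # Assumed importances, loosely reflecting the abstract's finding that
       # rank history matters most; these are not learned weights.
       weights = np.array([0.5, 0.3, 0.2])

       def predict_future_rank(features, weights):
           # Rank products by a weighted feature score, highest affinity first.
           scores = {p: float(v @ weights) for p, v in features.items()}
           return sorted(scores, key=scores.get, reverse=True)

       print(predict_future_rank(features, weights))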
    Type
    a
  2. Lee, K.C.; Lee, N.; Li, H.: A particle swarm optimization-driven cognitive map approach to analyzing information systems project risk (2009) 0.00
    
    Abstract
     Project risks encompass both internal and external factors that are interrelated and influence one another causally. Identifying those factors and their causal relationships is essential to reducing project risk. In the past, most IT companies evaluated project risk by roughly measuring the related factors while ignoring the complicated causal relationships among them. There is thus a strong need for more effective mechanisms that systematically judge all factors related to project risk and identify the causal relationships among those factors. To accomplish this research objective, our study adopts a cognitive map (CM)-based mechanism called MACOM (Multi-Agent COgnitive Map), in which the CM is represented by a set of agents, each embedded with basic intelligence to determine its causal relationships with other agents. CMs have proven especially useful in solving unstructured problems with many variables and causal relationships; however, simply applying a CM to project risk management is not enough, because most causal relationships are hard to identify and measure exactly. To overcome this problem, we borrow a multi-agent metaphor in which project risk is explained through the interaction of the agents. Such an approach provides a new computational capability for resolving complicated decision problems. Using the MACOM framework, we show that IS project risk management can be addressed systematically and intelligently, giving IS project managers robust decision support.
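     The cognitive-map idea can be illustrated as a signed, weighted digraph of risk factors whose activations propagate along causal edges. The sketch below makes that concrete under assumed factor names and weights; MACOM's multi-agent representation and its particle swarm optimization component are not reproduced here:

       import numpy as np

       factors = ["schedule_pressure", "requirement_change", "team_skill", "project_risk"]
       # W[i, j]: assumed causal influence of factor i on factor j
       # (positive = increases, negative = decreases).
       W = np.array([
           [0.0,  0.4, -0.2,  0.6],
           [0.0,  0.0,  0.0,  0.5],
           [0.0, -0.3,  0.0, -0.4],
           [0.0,  0.0,  0.0,  0.0],
       ])

       state = np.array([1.0, 0.5, 0.8, 0.0])   # initial factor activations
       for _ in range(20):                      # propagate causal influence
           state = np.tanh(state + state @ W)   # squash to keep activations bounded

       print(dict(zip(factors, state.round(3))))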
    Type
    a
  3. Qin, T.; Zhang, X.-D.; Tsai, M.-F.; Wang, D.-S.; Liu, T.-Y.; Li, H.: Query-level loss functions for information retrieval (2008) 0.00
    
    Abstract
     Many machine learning technologies, such as support vector machines, boosting, and neural networks, have been applied to the ranking problem in information retrieval. However, since these methods were not originally developed for this task, their loss functions do not directly link to the criteria used in the evaluation of ranking. Specifically, the loss functions are defined at the level of documents or document pairs, whereas the evaluation criteria are defined at the level of queries. Therefore, minimizing the loss functions does not necessarily imply improving ranking performance. To solve this problem, we propose using query-level loss functions in learning ranking functions. We discuss the basic properties that a query-level loss function should have and propose a query-level loss function based on the cosine similarity between a ranking list and the corresponding ground truth. We further design a coordinate descent algorithm, referred to as RankCosine, which uses the proposed loss function to create a generalized additive ranking model. We also discuss whether the loss functions of existing ranking algorithms can be extended to the query level. Experimental results on TREC web track datasets, OHSUMED, and a commercial web search engine dataset show that the proposed query-level loss function significantly improves ranking accuracy. Furthermore, we found it difficult to extend document-level loss functions to query-level loss functions.
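     The proposed query-level loss is stated directly in the abstract: one minus the cosine similarity between the predicted score vector and the ground-truth vector over a single query's documents. Below is a minimal rendering of that definition; the paper's exact normalization and the RankCosine coordinate descent steps are omitted:

       import numpy as np

       def query_level_cosine_loss(predicted, ground_truth):
           # Loss for ONE query: 1 - cos(predicted score vector, ground-truth
           # score vector) over that query's candidate documents.
           p = np.asarray(predicted, dtype=float)
           g = np.asarray(ground_truth, dtype=float)
           return 1.0 - float(p @ g) / (np.linalg.norm(p) * np.linalg.norm(g))

       # One query with four candidate documents (illustrative scores).
       print(query_level_cosine_loss([2.1, 0.3, 1.5, 0.1], [2.0, 0.0, 1.0, 0.0]))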
    Type
    a
  4. Li, M.; Li, H.; Zhou, Z.-H.: Semi-supervised document retrieval (2009) 0.00
    
    Abstract
     This paper proposes a new machine learning method for constructing ranking models in document retrieval. The method, referred to as SSRank, aims to combine the advantages of traditional Information Retrieval (IR) methods and the recently proposed supervised learning methods for IR: the use of a limited amount of labeled data and rich model representation. To do so, the method adopts a semi-supervised learning framework for ranking model construction. Specifically, given a small number of documents labeled with respect to some queries, the method effectively labels the unlabeled documents for those queries and then uses all of the labeled data to train a machine learning model (in our case, a neural network). In the data labeling, the method also makes use of a traditional IR model (in our case, BM25), and a stopping criterion based on machine learning theory is given for the labeling process. Experimental results on three benchmark datasets and one web search dataset indicate that, given the same amount of labeled data, SSRank consistently, and almost always significantly, outperforms the baseline unsupervised and supervised learning methods, because it can effectively leverage unlabeled data in learning.
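     Schematically, SSRank alternates between pseudo-labeling unlabeled documents with a traditional IR scorer and retraining the ranker on the enlarged labeled set. The sketch below follows that loop with stand-in components; the paper's neural network, its BM25 implementation, and its theoretically grounded stopping criterion are replaced here by placeholder callables:

       def ssrank_train(labeled, unlabeled, bm25_score, train_ranker, confident,
                        max_rounds=10):
           # labeled: (query, doc, label) triples; unlabeled: (query, doc) pairs.
           ranker = train_ranker(labeled)           # start from labeled data alone
           for _ in range(max_rounds):
               # 1. Pseudo-label unlabeled documents with the traditional IR model.
               pseudo = [(q, d, bm25_score(q, d)) for (q, d) in unlabeled]
               # 2. Keep only confident pseudo-labels (stand-in for the paper's
               #    stopping criterion).
               kept = [x for x in pseudo if confident(x[2])]
               if not kept:
                   break
               labeled = labeled + kept
               taken = {(q, d) for (q, d, _) in kept}
               unlabeled = [(q, d) for (q, d) in unlabeled if (q, d) not in taken]
               ranker = train_ranker(labeled)       # 3. retrain on the enlarged set
           return ranker

       # Tiny demo with stand-in components (not the paper's NN or BM25).
       ranker = ssrank_train(
           labeled=[("q1", "d1", 1.0)],
           unlabeled=[("q1", "d2"), ("q1", "d3")],
           bm25_score=lambda q, d: 0.9 if d == "d2" else 0.2,
           train_ranker=lambda data: data,          # "model" = its training data
           confident=lambda s: s > 0.8,
       )
       print(ranker)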
    Type
    a
  5. Kuo, J.-S.; Li, H.; Yang, Y.-K.: Active learning for constructing transliteration lexicons from the Web (2008) 0.00
    
    Abstract
     This article presents an adaptive learning framework for Phonetic Similarity Modeling (PSM) that supports the automatic construction of transliteration lexicons. The learning algorithm starts with minimal prior knowledge about machine transliteration and acquires knowledge iteratively from the Web. We study unsupervised learning and active learning strategies that minimize human supervision in terms of data labeling. The learning process refines the PSM and constructs a transliteration lexicon at the same time. We evaluate the proposed PSM and its learning algorithm through a series of systematic experiments, which show that the proposed framework is reliably effective on two independent databases.
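     A typical active-learning round, sketched: have the current model score candidate transliteration pairs, send the pairs it is least certain about to a human for labeling, and keep the rest for later rounds. The confidence values and the uncertainty-sampling strategy here are assumptions for illustration, not the paper's exact selection criterion:

       def active_learning_round(candidates, psm_confidence, oracle_label, batch_size=5):
           # Pick the candidate pairs the model is least certain about
           # (confidence closest to 0.5) and ask a human to label them.
           by_uncertainty = sorted(candidates,
                                   key=lambda pair: abs(psm_confidence(pair) - 0.5))
           batch = by_uncertainty[:batch_size]
           labels = [(pair, oracle_label(pair)) for pair in batch]
           remaining = [p for p in candidates if p not in set(batch)]
           return labels, remaining

       pairs = [("Li", "李"), ("London", "伦敦"), ("Smith", "史密斯")]
       conf = {"Li": 0.55, "London": 0.95, "Smith": 0.40}   # assumed model scores
       labels, remaining = active_learning_round(
           pairs,
           psm_confidence=lambda p: conf[p[0]],
           oracle_label=lambda p: True,                     # human stand-in
           batch_size=2,
       )
       print(labels, remaining)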
    Type
    a
  6. Li, H.; Wu, H.; Li, D.; Lin, S.; Su, Z.; Luo, X.: PSI: A probabilistic semantic interpretable framework for fine-grained image ranking (2018) 0.00
    
    Abstract
     Image ranking is one of the key problems in information science research. However, most current methods focus on increasing performance, leaving intact the semantic gap problem: learned ranking models are hard to understand. Therefore, in this article, we aim to learn an interpretable ranking model that tackles the semantic gap in fine-grained image ranking. We propose combining attribute-based representations with ranking models based on online passive-aggressive (PA) learning to achieve this goal. In addition, considering the highly localized instances in fine-grained image ranking, we introduce a supervised constrained clustering method to gather class-balanced training instances for local PA-based models, and we incorporate the learned local models into a unified probabilistic framework. Extensive experiments on the benchmark demonstrate that the proposed framework outperforms state-of-the-art methods in terms of accuracy and speed.
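     The online passive-aggressive ingredient can be sketched as a pairwise margin update: when two images are ranked in the wrong order (or with too small a margin), take the smallest weight step that fixes the violation. The update below is the standard PA form on assumed attribute-vector inputs, not PSI's full probabilistic framework:

       import numpy as np

       def pa_rank_update(w, x_higher, x_lower):
           # One passive-aggressive update for the pairwise constraint that
           # x_higher should score at least a margin of 1 above x_lower.
           diff = np.asarray(x_higher, dtype=float) - np.asarray(x_lower, dtype=float)
           loss = max(0.0, 1.0 - float(w @ diff))   # hinge loss on the margin
           sq = float(diff @ diff)
           if loss > 0.0 and sq > 0.0:
               w = w + (loss / sq) * diff           # smallest step fixing the violation
           return w

       w = np.zeros(4)                              # assumed attribute-vector length
       w = pa_rank_update(w, [1.0, 0.2, 0.0, 0.5], [0.1, 0.9, 0.3, 0.2])
       print(w)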
    Type
    a
  7. Su, Z.; Li, D.; Li, H.; Luo, X.: Boosting attribute recognition with latent topics by matrix factorization (2017) 0.00
    
    Abstract
     Attribute-based approaches have recently attracted much attention in visual recognition tasks. These approaches describe images using semantic attributes as a mid-level feature. However, low recognition accuracy remains the biggest barrier limiting their practical application. In this paper, we propose a novel framework termed Boosting Attribute Recognition (BAR) for the image recognition task. Our framework stems from matrix factorization and explores latent relationships from the attribute and image perspectives simultaneously. Furthermore, to apply our framework to large-scale visual recognition tasks, we present both offline and online learning implementations of the proposed framework. Extensive experiments on three datasets demonstrate that our framework achieves sound attribute-recognition accuracy.
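     The matrix-factorization idea can be illustrated by factorizing a binary image-attribute matrix into a low-rank product, so that latent topics smooth and complete noisy attribute scores. Plain truncated SVD stands in here for BAR's specific objective, and the toy matrix and topic count are assumptions:

       import numpy as np

       rng = np.random.default_rng(0)
       A = (rng.random((6, 4)) > 0.5).astype(float)   # toy image x attribute matrix

       k = 2                                          # assumed number of latent topics
       U, s, Vt = np.linalg.svd(A, full_matrices=False)
       A_hat = U[:, :k] * s[:k] @ Vt[:k, :]           # rank-k reconstruction

       print(np.round(A_hat, 2))                      # smoothed attribute scores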
    Type
    a
  8. Jiang, Y.-C.; Li, H.: The theoretical basis and basic principles of knowledge network construction in digital library (2023) 0.00
    
    Abstract
     Knowledge network construction (KNC) is the essence of dynamic knowledge architecture and helps to realize ubiquitous knowledge services in digital libraries (DLs). The authors explore its theoretical foundations and basic rules in order to elucidate the basic principles of KNC in DLs. The results indicate that universal connection, the small-world phenomenon, relevance theory, and the unity and continuity of scientific development constitute the production tools, architectural aims, and scientific foundations of KNC in DLs. By analyzing the characteristics of KNC based on different types of knowledge linking, as well as the relationships between different forms of knowledge and the appropriate ways of linking them, the basic principle of KNC can be summarized as follows: let each form of knowledge linking show its own strengths, let each form of knowledge manifestation serve its intended purpose in practice, and combine the subjective and objective knowledge networks organically. This lays a solid theoretical foundation and provides an action guide for DLs constructing knowledge networks.
    Type
    a