Search (12 results, page 1 of 1)

  • author_ss:"Li, X."
  1. Yan, X.; Li, X.; Song, D.: A correlation analysis on LSA and HAL semantic space models (2004) 0.07
    0.07243208 = product of:
      0.14486416 = sum of:
        0.12153365 = weight(_text_:space in 2152) [ClassicSimilarity], result of:
          0.12153365 = score(doc=2152,freq=4.0), product of:
            0.24842183 = queryWeight, product of:
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.047605187 = queryNorm
            0.48922288 = fieldWeight in 2152, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.2183776 = idf(docFreq=650, maxDocs=44218)
              0.046875 = fieldNorm(doc=2152)
        0.023330513 = product of:
          0.046661027 = sum of:
            0.046661027 = weight(_text_:model in 2152) [ClassicSimilarity], result of:
              0.046661027 = score(doc=2152,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.25490487 = fieldWeight in 2152, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2152)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
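The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: tf(freq) = sqrt(freq), idf(docFreq, maxDocs) = ln(maxDocs/(docFreq+1)) + 1, fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, with coord factors for the fraction of query clauses matched. A minimal sketch reproducing the first result's 0.07243208 score from the quantities shown:

```python
import math

def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """One weight(_text_:term) node of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                            # tf(freq) = sqrt(freq)
    idf = math.log(max_docs / (doc_freq + 1)) + 1   # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                 # queryWeight
    field_weight = tf * idf * field_norm            # fieldWeight
    return query_weight * field_weight

QUERY_NORM = 0.047605187

# "space" in doc 2152: freq=4, docFreq=650
w_space = term_weight(4.0, 650, 44218, QUERY_NORM, 0.046875)
# "model" in doc 2152: freq=2, docFreq=2569, scaled by the inner coord(1/2)
w_model = 0.5 * term_weight(2.0, 2569, 44218, QUERY_NORM, 0.046875)

# outer coord(2/4): 2 of 4 query clauses matched
score = 0.5 * (w_space + w_model)   # ~0.07243208, the score shown above
```

Running this reproduces the intermediate values in the tree (w_space ~0.12153365, w_model ~0.02333051) as well as the final document score.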
    
    Abstract
    In this paper, we compare a well-known semantic space model, Latent Semantic Analysis (LSA), with another model, Hyperspace Analogue to Language (HAL), which is widely used in different areas, especially in automatic query refinement. We conduct this comparative analysis to support our hypothesis that, with respect to the ability to extract lexical information from a corpus of text, LSA is quite similar to HAL. We regard HAL and LSA as black boxes. Through a Pearson's correlation analysis of the outputs of these two black boxes, we conclude that LSA correlates highly with HAL, and thus there is justification that LSA and HAL can potentially play a similar role in facilitating automatic query refinement. This paper evaluates LSA in a new application area and contributes an effective way to compare different semantic space models.
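The black-box comparison the abstract describes reduces to a Pearson correlation over paired outputs of the two models. A minimal sketch, where the `lsa_sims`/`hal_sims` values are hypothetical word-pair similarities for illustration, not data from the paper:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length vectors of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical similarity scores for the same word pairs,
# as produced by the two "black boxes"
lsa_sims = [0.91, 0.30, 0.55, 0.12, 0.78]
hal_sims = [0.88, 0.35, 0.60, 0.10, 0.80]

r = pearson(lsa_sims, hal_sims)  # close to 1.0 -> the spaces agree
```

A value of r near 1 is the kind of evidence the paper takes as justification that the two models extract similar lexical information.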
  2. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.05
    0.051396172 = product of:
      0.102792345 = sum of:
        0.08723867 = weight(_text_:vector in 2671) [ClassicSimilarity], result of:
          0.08723867 = score(doc=2671,freq=2.0), product of:
            0.30654848 = queryWeight, product of:
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.047605187 = queryNorm
            0.2845836 = fieldWeight in 2671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.439392 = idf(docFreq=191, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
        0.015553676 = product of:
          0.031107351 = sum of:
            0.031107351 = weight(_text_:model in 2671) [ClassicSimilarity], result of:
              0.031107351 = score(doc=2671,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.16993658 = fieldWeight in 2671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2671)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In recent years, there has been rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems due to the prevalence of Web 2.0 communities. To effectively assist users in finding their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems, mainly because personalized search and recommendations can be facilitated by measuring the relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions and feelings about resources through tags. Therefore, it is necessary to take sentiment relevance into account in these measurements. In this paper, we present SenticRank, a novel generic framework that incorporates sentiment information into personalized search based on user profiles and resource profiles. In this framework, content-based and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized ranking. To the best of our knowledge, this is the first work to integrate sentiment information into personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines in experiments, the results of which verify the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries and find that SenticNet is the most effective knowledge base for boosting the performance of personalized search in folksonomy.
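The profile matching the abstract refers to, measuring relevance between a tag-vector user profile and a tag-vector resource profile, is commonly a cosine similarity. A minimal sketch with hypothetical tags and weights (this is the conventional sentiment-free measurement, not the paper's SenticRank scoring):

```python
import math

def cosine(u, r):
    """Relevance between two weighted tag vectors, stored as dicts."""
    shared = set(u) & set(r)
    dot = sum(u[t] * r[t] for t in shared)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nr = math.sqrt(sum(w * w for w in r.values()))
    return dot / (nu * nr) if nu and nr else 0.0

# hypothetical tag weights for a user profile and a resource profile
user = {"jazz": 0.8, "relaxing": 0.6, "vinyl": 0.2}
resource = {"jazz": 0.9, "relaxing": 0.3, "live": 0.4}

rel = cosine(user, resource)
```

The paper's point is that weights like these are usually frequency-based; tags such as "relaxing" also carry sentiment that this measurement ignores.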
  3. Li, X.: A new robust relevance model in the language model framework (2008) 0.02
    0.017524866 = product of:
      0.070099466 = sum of:
        0.070099466 = product of:
          0.14019893 = sum of:
            0.14019893 = weight(_text_:model in 2076) [ClassicSimilarity], result of:
              0.14019893 = score(doc=2076,freq=26.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.7658938 = fieldWeight in 2076, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2076)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In this paper, a new robust relevance model is proposed that can be applied to both pseudo and true relevance feedback in the language-modeling framework for document retrieval. There are at least three main differences between our new relevance model and other relevance models. First, the proposed model brings the original query back into the relevance model by treating it as a short, special document, in addition to a number of top-ranked documents returned from the first-round retrieval for pseudo feedback, or a number of relevant documents for true relevance feedback. Second, instead of using a uniform prior as in the original relevance model proposed by Lavrenko and Croft, documents are assigned different priors according to their lengths (in terms) and ranks in the first-round retrieval. Third, the probability of a term in the relevance model is further adjusted by its probability in a background language model. In both the pseudo and true relevance cases, we have compared the performance of our model to that of two baselines: the original relevance model and a linear combination model. Our experimental results show that the proposed new model outperforms both baselines in terms of mean average precision.
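The three ideas the abstract lists (query as a pseudo-document, non-uniform document priors, background-model adjustment) can be sketched as below. The rank-based prior and the form of the background adjustment are assumptions for illustration; the paper's exact estimator also uses document length and differs in detail:

```python
from collections import Counter

def relevance_model(query, feedback_docs, collection, mu=0.1):
    """Sketch: estimate P(w|R) from feedback docs plus the query itself."""
    # the query re-enters estimation as a short pseudo-document,
    # ranked ahead of the feedback documents
    docs = [query] + feedback_docs
    p_rel = Counter()
    for rank, doc in enumerate(docs):
        prior = 1.0 / (rank + 1)                 # assumed rank-based prior
        for w, c in Counter(doc).items():
            p_rel[w] += prior * (c / len(doc))   # P(w|D) * P(D)
    # discount each term by its background-collection probability
    bg, n_bg = Counter(collection), len(collection)
    adjusted = {w: p / (mu + bg[w] / n_bg) for w, p in p_rel.items()}
    z = sum(adjusted.values())
    return {w: p / z for w, p in adjusted.items()}

rm = relevance_model(
    query=["robust", "relevance"],
    feedback_docs=[["relevance", "model", "feedback"], ["language", "model"]],
    collection=["the", "the", "model", "language", "relevance", "feedback", "robust"],
)
```

Because the query is the top-ranked pseudo-document, its terms keep high probability in the estimated model even when the feedback documents drift off-topic, which is the source of the claimed robustness.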
  4. Li, X.; Cox, A.; Ford, N.; Creaser, C.; Fry, J.; Willett, P.: Knowledge construction by users : a content analysis framework and a knowledge construction process model for virtual product user communities (2017) 0.01
    0.010868462 = product of:
      0.043473847 = sum of:
        0.043473847 = product of:
          0.086947694 = sum of:
            0.086947694 = weight(_text_:model in 3574) [ClassicSimilarity], result of:
              0.086947694 = score(doc=3574,freq=10.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.4749872 = fieldWeight in 3574, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3574)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose: The purpose of this paper is to develop a content analysis framework and from that derive a process model of knowledge construction in the context of virtual product user communities: organization-sponsored online forums where product users collaboratively construct knowledge to solve their technical problems.
    Design/methodology/approach: The study is based on a deductive and qualitative content analysis of discussion threads about solving technical problems, selected from a series of virtual product user communities. Data are complemented with a thematic analysis of interviews with forum members.
    Findings: The research develops a content analysis framework for knowledge construction, based on a combination of existing codes derived from frameworks developed for computer-supported collaborative learning and new categories identified from the data. Analysis using this framework allows the authors to propose a knowledge construction process model showing how these elements are organized around a typical "trial and error" knowledge construction strategy.
    Practical implications: The research makes suggestions about organizations' management of knowledge activities in virtual product user communities, including moderators' roles in facilitation.
    Originality/value: The paper outlines a new framework for analysing knowledge activities where there is a low level of critical thinking, and a model of knowledge construction by trial and error. The new framework and model can be applied in other similar contexts.
  5. Li, X.; Rijke, M.de: Characterizing and predicting downloads in academic search (2019) 0.01
    0.006873818 = product of:
      0.027495272 = sum of:
        0.027495272 = product of:
          0.054990545 = sum of:
            0.054990545 = weight(_text_:model in 5103) [ClassicSimilarity], result of:
              0.054990545 = score(doc=5103,freq=4.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.30040827 = fieldWeight in 5103, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5103)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Numerous studies have been conducted on the information interaction behavior of search engine users, but few have considered information interactions in the domain of academic search. We focus on conversion behavior in this domain. Conversions have been widely studied in the e-commerce domain, e.g., for online shopping and hotel booking, but little is known about conversions in academic search. We start with a description of a unique dataset of a particular type of conversion in academic search, viz. users' downloads of scientific papers. Then we move to an observational analysis of users' download actions. We first characterize user actions and show their statistics in sessions. Then we focus on behavioral and topical aspects of downloads, revealing behavioral correlations across download sessions. We discover unique properties that differ from other conversion settings such as online shopping. Using insights gained from these observations, we consider the task of predicting the next download. In particular, we focus on predicting the time until the next download session, and on predicting the number of downloads. We cast these as time series prediction problems and model them using LSTMs. We develop a specialized model built on user segmentation that achieves significant improvements over the state of the art.
  6. Xu, G.; Cao, Y.; Ren, Y.; Li, X.; Feng, Z.: Network security situation awareness based on semantic ontology and user-defined rules for Internet of Things (2017) 0.01
    0.006873818 = product of:
      0.027495272 = sum of:
        0.027495272 = product of:
          0.054990545 = sum of:
            0.054990545 = weight(_text_:model in 306) [ClassicSimilarity], result of:
              0.054990545 = score(doc=306,freq=4.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.30040827 = fieldWeight in 306, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=306)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Internet of Things (IoT) brings the third development wave of the global information industry, making users, networks and perception devices cooperate more closely. However, if IoT has security problems, it may cause various kinds of damage and even threaten human lives and property. To improve the abilities to monitor, provide emergency response and predict the development trend of IoT security, a new paradigm called network security situation awareness (NSSA) is proposed. However, it is limited in its ability to mine and evaluate security situation elements from multi-source heterogeneous network security information. To solve this problem, this paper proposes an IoT network security situation awareness model using a situation reasoning method based on semantic ontology and user-defined rules. Ontology technology can provide a unified and formalized description to solve the problem of semantic heterogeneity in the IoT security domain. In this paper, four key sub-domains are proposed to reflect an IoT security situation: context, attack, vulnerability and network flow. Further, user-defined rules can compensate for the limited descriptive ability of ontology, and hence can enhance the reasoning ability of our proposed ontology model. Examples in real IoT scenarios show that network security situation awareness adopting our situation reasoning method is more comprehensive and has more powerful reasoning abilities than traditional NSSA methods. [http://ieeexplore.ieee.org/abstract/document/7999187/]
  7. Zhang, Y.; Li, X.; Fan, W.: User adoption of physician's replies in an online health community : an empirical study (2020) 0.01
    0.0058326283 = product of:
      0.023330513 = sum of:
        0.023330513 = product of:
          0.046661027 = sum of:
            0.046661027 = weight(_text_:model in 4) [ClassicSimilarity], result of:
              0.046661027 = score(doc=4,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.25490487 = fieldWeight in 4, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Online health question-and-answer consultation with physicians is becoming a common phenomenon. However, it is unclear how users identify the most satisfying reply. Based on the dual-process theory of knowledge adoption, we developed a conceptual model and empirical method to study which factors influence the adoption of a reply. We extracted 6 variables for argument quality (Ease of understanding, Relevance, Completeness, Objectivity, Timeliness, Structure) and 4 for source credibility (Physician's online experience, Physician's offline expertise, Hospital location, Hospital level). The empirical results indicate that both central and peripheral routes affect a user's adoption of a response. A physician's offline expertise negatively affects the user's adoption decision, while the physician's online experience positively affects it; this effect is positively moderated by user involvement.
  8. Su, S.; Li, X.; Cheng, X.; Sun, C.: Location-aware targeted influence maximization in social networks (2018) 0.00
    0.0048605236 = product of:
      0.019442094 = sum of:
        0.019442094 = product of:
          0.03888419 = sum of:
            0.03888419 = weight(_text_:model in 4034) [ClassicSimilarity], result of:
              0.03888419 = score(doc=4034,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.21242073 = fieldWeight in 4034, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4034)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In this paper, we study the location-aware targeted influence maximization problem in social networks, which finds a seed set to maximize the influence spread over the targeted users. In particular, we consider those users who have both topic and geographical preferences on promotion products as targeted users. To efficiently solve this problem, one challenge is how to find the targeted users and compute their preferences efficiently for given requests. To address this challenge, we devise a TR-tree index structure, where each tree node stores users' topic and geographical preferences. By traversing the TR-tree in depth-first order, we can efficiently find the targeted users. Another challenge of the problem is to devise algorithms for efficient seeds selection. We solve this challenge from two complementary directions. In one direction, we adopt the maximum influence arborescence (MIA) model to approximate the influence spread, and propose two efficient approximation algorithms with math formula approximation ratio, which prune some candidate seeds with small influences by precomputing users' initial influences offline and estimating the upper bound of their marginal influences online. In the other direction, we propose a fast heuristic algorithm to improve efficiency. Experiments conducted on real-world data sets demonstrate the effectiveness and efficiency of our proposed algorithms.
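Seed selection by marginal influence gain, which the paper's MIA-based algorithms refine with offline precomputation and online upper-bound pruning, can be sketched generically. The `reach` data is a toy influence function over targeted users, not the paper's setup:

```python
def greedy_seeds(candidates, spread, k):
    """Greedy seed selection: repeatedly add the candidate with the
    largest marginal gain in influence spread (a generic sketch, without
    the paper's TR-tree lookups or MIA pruning)."""
    seeds = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in seeds),
                   key=lambda c: spread(seeds | {c}) - spread(seeds))
        seeds.add(best)
    return seeds

# toy influence function: each seed reaches a fixed set of targeted users
reach = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}

def spread(seed_set):
    """Number of distinct targeted users reached by the seed set."""
    return len(set().union(*(reach[v] for v in seed_set))) if seed_set else 0

picked = greedy_seeds(reach, spread, k=2)
```

The classic submodularity argument gives this greedy scheme its approximation guarantee; the paper's contribution is making each `spread` and targeted-user lookup fast enough for large networks.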
  9. Wang, P.; Li, X.: Assessing the quality of information on Wikipedia : a deep-learning approach (2020) 0.00
    0.0048605236 = product of:
      0.019442094 = sum of:
        0.019442094 = product of:
          0.03888419 = sum of:
            0.03888419 = weight(_text_:model in 5505) [ClassicSimilarity], result of:
              0.03888419 = score(doc=5505,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.21242073 = fieldWeight in 5505, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5505)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Currently, web document repositories are collaboratively created and edited. One of these repositories, Wikipedia, faces an important problem: assessing the quality of its articles. Existing approaches exploit techniques such as statistical models or machine learning algorithms to assess article quality, but they do not provide satisfactory results and fail to adopt a comprehensive feature framework. In this article, we conduct an extensive survey of previous studies and summarize a comprehensive feature framework, including text statistics, writing style, readability, article structure, network, and editing history. Selected state-of-the-art deep-learning models, including the convolutional neural network (CNN), deep neural network (DNN), long short-term memory (LSTM) networks, CNN-LSTMs, bidirectional LSTMs, and stacked LSTMs, are applied to assess the quality of Wikipedia articles. A detailed comparison of the deep-learning models is conducted with regard to classification performance and training performance. We include an importance analysis of different features and feature sets to determine which features or feature sets are most effective in distinguishing Wikipedia article quality. This extensive experiment validates the effectiveness of the proposed model.
  10. Li, X.: Young people's information practices in library makerspaces (2021) 0.00
    0.0048605236 = product of:
      0.019442094 = sum of:
        0.019442094 = product of:
          0.03888419 = sum of:
            0.03888419 = weight(_text_:model in 245) [ClassicSimilarity], result of:
              0.03888419 = score(doc=245,freq=2.0), product of:
                0.1830527 = queryWeight, product of:
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.047605187 = queryNorm
                0.21242073 = fieldWeight in 245, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.845226 = idf(docFreq=2569, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=245)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    While there have been a growing number of studies on makerspaces in different disciplines, little is known about how young people interact with information in makerspaces. This study aimed to unpack how young people (middle and high schoolers) sought, used, and shared information in voluntary free-choice library makerspace activities. Qualitative methods, including individual interviews, observations, photovoice, and focus groups, were used to elicit 21 participants' experiences at two library makerspaces. The findings showed that young people engaged in dynamic practices of information seeking, use, and sharing, and revealed how the historical, sociocultural, material, and technological contexts embedded in makerspace activities shaped these information practices. Information practices of tinkering, sensing, and imagining in makerspaces were highlighted. Various criteria that young people used in evaluating human sources and online information were identified as well. The study also demonstrated the communicative and collaborative aspects of young people's information practices through information sharing. The findings extended Savolainen's everyday information practices model and addressed the gap in the current literature on young people's information behavior and information practices. Understanding how young people interact with information in makerspaces can help makerspace facilitators and information professionals better support youth services and facilitate makerspace activities.
  11. Li, X.: Designing an interactive Web tutorial with cross-browser dynamic HTML (2000) 0.00
    0.004837384 = product of:
      0.019349536 = sum of:
        0.019349536 = product of:
          0.03869907 = sum of:
            0.03869907 = weight(_text_:22 in 4897) [ClassicSimilarity], result of:
              0.03869907 = score(doc=4897,freq=2.0), product of:
                0.16670525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047605187 = queryNorm
                0.23214069 = fieldWeight in 4897, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4897)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    28. 1.2006 19:21:22
  12. Li, X.; Thelwall, M.; Kousha, K.: The role of arXiv, RePEc, SSRN and PMC in formal scholarly communication (2015) 0.00
    0.0040311534 = product of:
      0.016124614 = sum of:
        0.016124614 = product of:
          0.032249227 = sum of:
            0.032249227 = weight(_text_:22 in 2593) [ClassicSimilarity], result of:
              0.032249227 = score(doc=2593,freq=2.0), product of:
                0.16670525 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047605187 = queryNorm
                0.19345059 = fieldWeight in 2593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2593)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20. 1.2015 18:30:22