Search (9 results, page 1 of 1)

  • author_ss:"Li, X."
  1. Su, S.; Li, X.; Cheng, X.; Sun, C.: Location-aware targeted influence maximization in social networks (2018) 0.00
    0.0022137975 = product of:
      0.01771038 = sum of:
        0.01771038 = product of:
          0.05313114 = sum of:
            0.05313114 = weight(_text_:problem in 4034) [ClassicSimilarity], result of:
              0.05313114 = score(doc=4034,freq=6.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.4061259 = fieldWeight in 4034, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4034)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
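    The breakdown above is Lucene's ClassicSimilarity "explain" output, i.e. nested multiplication of the factors it lists. A minimal Python sketch reproducing the top score of result 1 from those factors (all numbers are taken directly from the tree; only the variable names are ours):

      # Reproduce the explain tree of result 1 (doc 4034) by straight multiplication.
      tf         = 6.0 ** 0.5          # 2.4494898 = tf(freq=6.0)
      idf        = 4.244485            # idf(docFreq=1723, maxDocs=44218)
      query_norm = 0.030822188         # queryNorm
      field_norm = 0.0390625           # fieldNorm(doc=4034)

      field_weight = tf * idf * field_norm          # 0.4061259  = fieldWeight
      query_weight = idf * query_norm               # 0.13082431 = queryWeight
      term_score   = query_weight * field_weight    # 0.05313114 = weight(_text_:problem)

      score = term_score * (1 / 3) * (1 / 8)        # coord(1/3) * coord(1/8)
      print(score)                                  # ~0.0022137975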
    
    Abstract
    In this paper, we study the location-aware targeted influence maximization problem in social networks, which finds a seed set that maximizes the influence spread over targeted users. In particular, we consider users who have both topic and geographical preferences regarding the promoted products as targeted users. One challenge in solving this problem efficiently is finding the targeted users and computing their preferences for a given request. To address this challenge, we devise a TR-tree index structure in which each tree node stores users' topic and geographical preferences; by traversing the TR-tree in depth-first order, we can efficiently find the targeted users. Another challenge is to devise algorithms for efficient seed selection. We address it from two complementary directions. In one direction, we adopt the maximum influence arborescence (MIA) model to approximate the influence spread and propose two efficient approximation algorithms with a provable approximation ratio, which prune candidate seeds with small influence by precomputing users' initial influences offline and estimating the upper bound of their marginal influences online. In the other direction, we propose a fast heuristic algorithm to further improve efficiency. Experiments on real-world data sets demonstrate the effectiveness and efficiency of the proposed algorithms.
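    To make the indexing idea in this abstract concrete, here is a minimal Python sketch of a TR-tree-style lookup; it is an illustration under our own assumptions (node layout, the topic/region matching test and all field names are invented), not the authors' implementation: each node aggregates the topic set and bounding box of its subtree, and a depth-first traversal prunes subtrees that cannot contain a targeted user for the request.

      from dataclasses import dataclass, field

      @dataclass
      class TRNode:
          topics: set                                    # union of topic preferences in the subtree
          region: tuple                                  # bounding box (min_lat, min_lon, max_lat, max_lon)
          users: list = field(default_factory=list)      # user ids stored at leaf nodes
          children: list = field(default_factory=list)

      def covers(region, point):
          lat, lon = point
          return region[0] <= lat <= region[2] and region[1] <= lon <= region[3]

      def targeted_users(node, req_topic, req_point):
          """Depth-first search that skips subtrees whose aggregated topic set or
          bounding box cannot match the promotion request."""
          if req_topic not in node.topics or not covers(node.region, req_point):
              return []
          if not node.children:          # leaf: users here are assumed to satisfy the request
              return list(node.users)
          hits = []
          for child in node.children:
              hits.extend(targeted_users(child, req_topic, req_point))
          return hits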
  2. Xu, G.; Cao, Y.; Ren, Y.; Li, X.; Feng, Z.: Network security situation awareness based on semantic ontology and user-defined rules for Internet of Things (2017) 0.00
    0.0018075579 = product of:
      0.014460463 = sum of:
        0.014460463 = product of:
          0.04338139 = sum of:
            0.04338139 = weight(_text_:problem in 306) [ClassicSimilarity], result of:
              0.04338139 = score(doc=306,freq=4.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.33160037 = fieldWeight in 306, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    The Internet of Things (IoT) drives the third development wave of the global information industry, bringing users, networks and perception devices into closer cooperation. However, security problems in the IoT can cause a variety of damage and even threaten human lives and property. To improve the ability to monitor IoT security, respond to emergencies and predict how the security situation will develop, a paradigm called network security situation awareness (NSSA) has been proposed. Its usefulness is limited, however, by the difficulty of mining and evaluating security situation elements from multi-source, heterogeneous network security information. To solve this problem, this paper proposes an IoT network security situation awareness model that uses a situation reasoning method based on semantic ontology and user-defined rules. Ontology technology provides a unified, formalized description that resolves semantic heterogeneity in the IoT security domain. In this paper, four key sub-domains are proposed to reflect an IoT security situation: context, attack, vulnerability and network flow. Furthermore, user-defined rules compensate for the limited descriptive ability of the ontology and thereby enhance the reasoning ability of the proposed ontology model. Examples from real IoT scenarios show that network security situation awareness based on our situation reasoning method is more comprehensive and offers more powerful reasoning than traditional NSSA methods. [http://ieeexplore.ieee.org/abstract/document/7999187/]
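    To illustrate how user-defined rules can complement an ontology in this setting, the following Python sketch (ours, not the paper's model) represents facts from the context, attack, vulnerability and network-flow sub-domains as triples and applies one invented rule: a device is high-risk if an observed flow targets it with an attack that exploits one of its vulnerabilities. All identifiers and predicate names are hypothetical.

      # Toy knowledge base of (subject, predicate, object) triples.
      facts = {
          ("sensor-17", "hasVulnerability", "CVE-2017-0001"),
          ("flow-42", "targetsDevice", "sensor-17"),
          ("flow-42", "matchesAttack", "buffer-overflow"),
          ("buffer-overflow", "exploits", "CVE-2017-0001"),
      }

      def high_risk_devices(kb):
          """User-defined rule (illustrative): flag a device when some flow targets it
          with an attack that exploits a vulnerability the device is known to have."""
          risky = set()
          for (dev, p, vuln) in kb:
              if p != "hasVulnerability":
                  continue
              for (flow, q, target) in kb:
                  if q == "targetsDevice" and target == dev:
                      attacks = {o for (s, r, o) in kb if s == flow and r == "matchesAttack"}
                      if any((a, "exploits", vuln) in kb for a in attacks):
                          risky.add(dev)
          return risky

      print(high_risk_devices(facts))    # {'sensor-17'}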
  3. Li, X.; Schijvenaars, B.J.A.; Rijke, M.de: Investigating queries and search failures in academic search (2017) 0.00
    0.0014460464 = product of:
      0.011568371 = sum of:
        0.011568371 = product of:
          0.034705114 = sum of:
            0.034705114 = weight(_text_:problem in 5033) [ClassicSimilarity], result of:
              0.034705114 = score(doc=5033,freq=4.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.2652803 = fieldWeight in 5033, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5033)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    Academic search concerns the retrieval and profiling of information objects in the domain of academic research. In this paper we report important observations about academic search queries and provide an algorithmic solution to one type of failure during search sessions: null queries. We start with a general characterization of academic search queries based on a large-scale transaction log of a leading academic search engine. Unlike previous small-scale analyses of academic search queries, we find important differences from the query characteristics known from web search: for example, academic search has a substantially larger proportion of entity queries and a heavier tail in the query length distribution. We then focus on search failures, in particular on null queries that lead to an empty search engine result page, on null sessions that contain such null queries, and on users who are prone to issuing null queries. In academic search approximately 1 in 10 queries is a null query, and 25% of sessions contain one; null queries appear in different types of search sessions and prevent users from achieving their search goals. To address the high rate of null queries in academic search, we consider the task of providing query suggestions, focusing on a highly frequent query type: non-boolean informational queries. To do so we need to overcome query sparsity and make effective use of session information. We find that using entities helps to surface more relevant query suggestions in the face of query sparsity, and that query suggestions are more effective when conditioned on the type of session in which they are offered. After casting session classification as a multi-label classification problem, we generate session-conditional query suggestions based on the predicted session type. This session-conditional method leads to significant improvements over a generic query suggestion method, while personalization yields very little further improvement beyond it.
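    A rough Python sketch of the session-conditional idea (a sketch under our own assumptions: the stub classifier, the per-type candidate pools and the naive ranking are stand-ins, not the paper's models): predict the session type(s) first, then draw suggestion candidates only from pools matching those types.

      # Hypothetical per-session-type suggestion pools.
      POOLS = {
          "exploratory": ["influence maximization survey", "social network seeding methods"],
          "lookup": ["targeted influence maximization", "location-aware influence maximization"],
      }

      def classify_session(queries):
          """Stub multi-label session classifier; a real system would use query,
          click and dwell-time features of the whole session."""
          labels = set()
          if any(len(q.split()) <= 3 for q in queries):
              labels.add("lookup")
          if len(queries) >= 3:
              labels.add("exploratory")
          return labels or {"lookup"}

      def suggest(null_query, session_queries, k=3):
          """Suggestions for a null query, conditioned on the predicted session type."""
          candidates = []
          for label in classify_session(session_queries):
              candidates.extend(POOLS.get(label, []))
          terms = set(null_query.lower().split())
          candidates.sort(key=lambda c: -len(terms & set(c.lower().split())))
          return candidates[:k]

      print(suggest("influence maximization", ["seed selection", "influence maximization"]))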
  4. Lu, W.; Li, X.; Liu, Z.; Cheng, Q.: How do author-selected keywords function semantically in scientific manuscripts? (2019) 0.00
    0.0012781365 = product of:
      0.010225092 = sum of:
        0.010225092 = product of:
          0.030675275 = sum of:
            0.030675275 = weight(_text_:problem in 5453) [ClassicSimilarity], result of:
              0.030675275 = score(doc=5453,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.23447686 = fieldWeight in 5453, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5453)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    Author-selected keywords have been widely utilized for indexing, information retrieval, bibliometrics and knowledge organization in previous studies. However, few studies exist concerning how author-selected keywords function semantically in scientific manuscripts. In this paper, we investigated this problem from the perspective of term function (TF) by devising indicators of the diversity and symmetry of keyword term functions in papers, as well as the intensity of individual term functions in papers. The data, obtained from the whole Journal of Informetrics (JOI), were manually processed with an annotation scheme of keyword term functions, including "research topic," "research method," "research object," "research area," "data" and "others," based on empirical work in content analysis. The results show, quantitatively, that the diversity of keyword term functions decreases, and the irregularity increases, with the number of author-selected keywords in a paper. Moreover, the distribution of the intensity of individual keyword term functions indicates that the ranking of the five term functions does not change significantly as the number of author-selected keywords increases (i.e., "research topic" > "research method" > "research object" > "research area" > "data"). The findings indicate that precise keyword-related research must take the distinct types of author-selected keywords into account.
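    The abstract does not define the indicators, so the following Python sketch is only one plausible reading (an assumption on our part, not the authors' definitions): diversity as the number of distinct term functions among a paper's keywords, and intensity of a function as its share of the keywords. The example paper and its annotations are invented.

      from collections import Counter

      # Hypothetical annotated keywords of one paper: keyword -> term function.
      keywords = {
          "topic modeling": "research method",
          "citation analysis": "research method",
          "informetrics": "research area",
          "journal papers": "data",
          "term function": "research topic",
      }

      def diversity(kw):
          """Number of distinct term functions covered by the paper's keywords."""
          return len(set(kw.values()))

      def intensity(kw, function):
          """Share of the paper's keywords carrying the given term function."""
          return Counter(kw.values())[function] / len(kw)

      print(diversity(keywords))                      # 4
      print(intensity(keywords, "research method"))   # 0.4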
  5. Wang, P.; Li, X.: Assessing the quality of information on Wikipedia : a deep-learning approach (2020) 0.00
    0.0012781365 = product of:
      0.010225092 = sum of:
        0.010225092 = product of:
          0.030675275 = sum of:
            0.030675275 = weight(_text_:problem in 5505) [ClassicSimilarity], result of:
              0.030675275 = score(doc=5505,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.23447686 = fieldWeight in 5505, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5505)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    Web document repositories such as Wikipedia are now created and edited collaboratively, and Wikipedia faces an important problem: assessing the quality of its articles. Existing approaches exploit techniques such as statistical models or machine learning algorithms to assess Wikipedia article quality, but they do not provide satisfactory results and fail to adopt a comprehensive feature framework. In this article, we conduct an extensive survey of previous studies and summarize a comprehensive feature framework covering text statistics, writing style, readability, article structure, network, and editing history. Selected state-of-the-art deep-learning models, including the convolutional neural network (CNN), deep neural network (DNN), long short-term memory (LSTM) network, CNN-LSTM, bidirectional LSTM, and stacked LSTM, are applied to assess the quality of Wikipedia articles. A detailed comparison of the deep-learning models is conducted with regard to classification performance and training performance. We include an importance analysis of different features and feature sets to determine which are most effective in distinguishing Wikipedia article quality. This extensive experiment validates the effectiveness of the proposed model.
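    As a minimal illustration of the kind of deep-learning model compared in the article, here is a PyTorch sketch of an LSTM quality classifier (a sketch only: the vocabulary size, dimensions and six-way output are placeholders, and the paper's feature framework goes well beyond raw token sequences).

      import torch
      import torch.nn as nn

      class ArticleQualityLSTM(nn.Module):
          """Token ids -> embedding -> LSTM -> quality-class logits."""
          def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64, n_classes=6):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
              self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
              self.fc = nn.Linear(hidden_dim, n_classes)

          def forward(self, token_ids):                # token_ids: (batch, seq_len)
              x = self.embed(token_ids)
              _, (h_n, _) = self.lstm(x)               # h_n: (1, batch, hidden_dim)
              return self.fc(h_n[-1])                  # (batch, n_classes)

      model = ArticleQualityLSTM()
      dummy = torch.randint(1, 20000, (2, 50))         # two fake token sequences
      print(model(dummy).shape)                        # torch.Size([2, 6])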
  6. Li, X.: Designing an interactive Web tutorial with cross-browser dynamic HTML (2000) 0.00
    0.0010439953 = product of:
      0.008351962 = sum of:
        0.008351962 = product of:
          0.025055885 = sum of:
            0.025055885 = weight(_text_:22 in 4897) [ClassicSimilarity], result of:
              0.025055885 = score(doc=4897,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.23214069 = fieldWeight in 4897, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4897)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    28. 1.2006 19:21:22
  7. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.00
    0.0010225092 = product of:
      0.008180073 = sum of:
        0.008180073 = product of:
          0.02454022 = sum of:
            0.02454022 = weight(_text_:problem in 2671) [ClassicSimilarity], result of:
              0.02454022 = score(doc=2671,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.1875815 = fieldWeight in 2671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2671)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    In recent years, there has been a rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems due to the prevalence of Web 2.0 communities. To effectively assist users in finding their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems, mainly because personalized search and recommendation can be facilitated by measuring the relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions of and feelings about resources through tags. It is therefore necessary to take sentiment relevance into account in these measurements. In this paper, we present SenticRank, a novel generic framework that incorporates various kinds of sentiment information into personalized search based on user profiles and resource profiles. Within this framework, content-based sentiment ranking and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized rankings. To the best of our knowledge, this is the first work to integrate sentiment information into personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines, and the experimental results verify the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries and find that SenticNet is the knowledge base that most boosts the performance of personalized search in folksonomy.
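    A hedged Python sketch of the underlying idea rather than the SenticRank framework itself: user and resource profiles are tag vectors, each tag carries a score from some sentiment lexicon, and relevance is cosine similarity over the sentiment-weighted tag weights. All tags, weights and sentiment values below are invented.

      import math

      # Hypothetical tag-based profiles (tag -> weight) and a toy sentiment lexicon.
      user_profile = {"jazz": 3.0, "relaxing": 2.0, "noisy": 1.0}
      resource_profile = {"jazz": 1.0, "relaxing": 4.0, "live": 2.0}
      sentiment = {"jazz": 0.6, "relaxing": 0.9, "noisy": -0.7, "live": 0.3}

      def sentiment_weighted(profile):
          """Scale each tag weight by its sentiment score (0.1 if the tag is unknown)."""
          return {t: w * sentiment.get(t, 0.1) for t, w in profile.items()}

      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      print(cosine(sentiment_weighted(user_profile), sentiment_weighted(resource_profile)))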
  8. Li, X.; Zhang, A.; Li, C.; Ouyang, J.; Cai, Y.: Exploring coherent topics by topic modeling with term weighting (2018) 0.00
    8.7789324E-4 = product of:
      0.007023146 = sum of:
        0.007023146 = product of:
          0.021069437 = sum of:
            0.021069437 = weight(_text_:29 in 5045) [ClassicSimilarity], result of:
              0.021069437 = score(doc=5045,freq=2.0), product of:
                0.108422816 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.030822188 = queryNorm
                0.19432661 = fieldWeight in 5045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5045)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    15. 3.2019 18:55:29
  9. Li, X.; Thelwall, M.; Kousha, K.: ¬The role of arXiv, RePEc, SSRN and PMC in formal scholarly communication (2015) 0.00
    8.699961E-4 = product of:
      0.0069599687 = sum of:
        0.0069599687 = product of:
          0.020879906 = sum of:
            0.020879906 = weight(_text_:22 in 2593) [ClassicSimilarity], result of:
              0.020879906 = score(doc=2593,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.19345059 = fieldWeight in 2593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2593)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    20. 1.2015 18:30:22