Search (9 results, page 1 of 1)

  • author_ss:"Li, X."
  1. Li, X.: Designing an interactive Web tutorial with cross-browser dynamic HTML (2000) 0.02
    0.019866327 = product of:
      0.06953214 = sum of:
        0.052837856 = weight(_text_:techniques in 4897) [ClassicSimilarity], result of:
          0.052837856 = score(doc=4897,freq=2.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.2920283 = fieldWeight in 4897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.046875 = fieldNorm(doc=4897)
        0.016694285 = product of:
          0.03338857 = sum of:
            0.03338857 = weight(_text_:22 in 4897) [ClassicSimilarity], result of:
              0.03338857 = score(doc=4897,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.23214069 = fieldWeight in 4897, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4897)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
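    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm with tf = sqrt(termFreq), and the coord factors scale the result by the fraction of query clauses that matched. A minimal Python sketch using the values from this listing reproduces the first document's score (up to floating-point rounding):

    ```python
    import math

    def classic_similarity_term(freq, idf, query_norm, field_norm):
        """Recompute one ClassicSimilarity term weight as shown in the explain output."""
        tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
        query_weight = idf * query_norm       # e.g. 4.405231 * 0.04107254 = 0.18093403
        field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 4.405231 * 0.046875 = 0.2920283
        return query_weight * field_weight    # this term's score contribution

    # Term "techniques" in doc 4897
    w_techniques = classic_similarity_term(2.0, 4.405231, 0.04107254, 0.046875)
    # Term "22" in doc 4897; its sub-query matched 1 of 2 clauses, hence coord(1/2)
    w_22 = classic_similarity_term(2.0, 3.5018296, 0.04107254, 0.046875) * 0.5

    # Two of seven top-level query clauses matched, hence coord(2/7)
    print((w_techniques + w_22) * (2.0 / 7.0))  # ~0.019866327, the document score shown above
    ```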
    
    Abstract
    Texas A&M University Libraries developed a Web-based training (WBT) application for LandView III, a federal depository CD-ROM publication, using cross-browser dynamic HTML (DHTML) and other Web technologies. The interactive, self-paced tutorial demonstrates the major features of the CD-ROM and shows how to navigate its programs. The tutorial features dynamic HTML techniques such as hiding, showing and moving layers; dragging objects; and windows-style drop-down menus. It also integrates interactive forms, the common gateway interface (CGI), frames, and animated GIF images in the design of the WBT. After describing the design and implementation of the tutorial project, the article evaluates usage statistics and user feedback, assesses the tutorial's strengths and weaknesses, and compares it with other common types of training methods. It thus presents an innovative approach to CD-ROM training that uses advanced Web technologies such as dynamic HTML to simulate and demonstrate the interactive use of the CD-ROM as well as the actual search process of a database.
    Date
    28. 1.2006 19:21:22
  2. Xie, H.; Li, X.; Wang, T.; Lau, R.Y.K.; Wong, T.-L.; Chen, L.; Wang, F.L.; Li, Q.: Incorporating sentiment into tag-based user profiles and resource profiles for personalized search in folksonomy (2016) 0.02
    0.018563224 = product of:
      0.06497128 = sum of:
        0.02974604 = weight(_text_:processing in 2671) [ClassicSimilarity], result of:
          0.02974604 = score(doc=2671,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.17890452 = fieldWeight in 2671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
        0.03522524 = weight(_text_:techniques in 2671) [ClassicSimilarity], result of:
          0.03522524 = score(doc=2671,freq=2.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.19468555 = fieldWeight in 2671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.03125 = fieldNorm(doc=2671)
      0.2857143 = coord(2/7)
    
    Abstract
    In recent years, there has been rapid growth of user-generated data in collaborative tagging (a.k.a. folksonomy-based) systems, owing to the prevalence of Web 2.0 communities. To effectively assist users in finding their desired resources, it is critical to understand user behaviors and preferences. Tag-based profile techniques, which model users and resources by a vector of relevant tags, are widely employed in folksonomy-based systems, mainly because personalized search and recommendations can be facilitated by measuring the relevance between user profiles and resource profiles. However, conventional measurements neglect the sentiment aspect of user-generated tags. In fact, tags can be very emotional and subjective, as users usually express their perceptions of and feelings about resources through tags. Therefore, it is necessary to take sentiment relevance into account in such measurements. In this paper, we present SenticRank, a novel generic framework that incorporates various kinds of sentiment information into sentiment-based user profiles and resource profiles for personalized search. In this framework, content-based sentiment ranking and collaborative sentiment ranking methods are proposed to obtain sentiment-based personalized rankings. To the best of our knowledge, this is the first work to integrate sentiment information into personalized tag-based search in collaborative tagging systems. Moreover, we compare the proposed sentiment-based personalized search with baselines in experiments, whose results verify the effectiveness of the proposed framework. In addition, we study the influence of popular sentiment dictionaries and find that SenticNet is the most effective knowledge base for boosting the performance of personalized search in folksonomy.
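    As a rough illustration of the tag-based profile matching described above (not the SenticRank framework itself), user and resource profiles can be held as sparse tag-weight vectors and compared with cosine similarity; the tag names and weights below are hypothetical, and a per-tag sentiment weight would be one way to fold sentiment into the same measurement:

    ```python
    import math
    from collections import Counter

    def cosine(profile_a: dict, profile_b: dict) -> float:
        """Cosine similarity between two sparse tag-weight vectors."""
        shared = set(profile_a) & set(profile_b)
        dot = sum(profile_a[t] * profile_b[t] for t in shared)
        norm_a = math.sqrt(sum(w * w for w in profile_a.values()))
        norm_b = math.sqrt(sum(w * w for w in profile_b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    # Hypothetical profiles: tag -> weight (e.g. tag frequency, optionally scaled
    # by a sentiment score as the abstract suggests).
    user_profile = Counter({"folksonomy": 3, "search": 2, "awesome": 1})
    resource_profile = Counter({"folksonomy": 1, "search": 1, "tagging": 2})

    print(round(cosine(user_profile, resource_profile), 3))  # relevance score used for ranking
    ```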
    Source
    Information processing and management. 52(2016) no.1, S.61-72
  3. Thelwall, M.; Li, X.; Barjak, F.; Robinson, S.: Assessing the international web connectivity of research groups (2008) 0.01
    0.008895717 = product of:
      0.062270015 = sum of:
        0.062270015 = weight(_text_:techniques in 1401) [ClassicSimilarity], result of:
          0.062270015 = score(doc=1401,freq=4.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.34415868 = fieldWeight in 1401, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1401)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - This paper argues that it is useful to assess the web connectivity of research groups, describes hyperlink-based techniques for doing so, and presents brief details of European life sciences research groups as a case study. Design/methodology/approach - A commercial search engine was harnessed to deliver hyperlink data via its automatic query submission interface. A special-purpose link analysis tool, LexiURL, then summarised and graphed the link data in appropriate ways. Findings - Webometrics can provide a wide range of descriptive information about the international connectivity of research groups. Research limitations/implications - Only one field was analysed, data were taken from only one search engine, and the results were not validated. Practical implications - Web connectivity seems to be particularly important for attracting overseas job applicants and for promoting research achievements and capabilities, and hence we contend that it can be useful for national and international governments to use webometrics to ensure that the web is being used effectively by research groups. Originality/value - This is the first paper to make a case for the value of using a range of webometric techniques to evaluate the web presences of research groups within a field, and possibly the first "applied" webometrics study produced for an external contract.
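    A toy sketch of the kind of summary such link analysis produces, assuming the hyperlink data has already been harvested; the domains and helper names here are invented for illustration and say nothing about the actual LexiURL implementation:

    ```python
    from collections import Counter
    from urllib.parse import urlparse

    def tld(url: str) -> str:
        """Top-level domain of a URL's host, used as a crude country proxy."""
        host = urlparse(url).hostname or ""
        return host.rsplit(".", 1)[-1]

    def international_link_summary(links):
        """Count hyperlinks between TLD pairs, e.g. ('ch', 'uk'), as a simple
        indicator of a research group's international web connectivity."""
        return Counter((tld(src), tld(dst)) for src, dst in links)

    # Hypothetical harvested link data: (source page, target page)
    links = [
        ("http://bio.example.ch/group/", "http://genomics.example.ac.uk/"),
        ("http://bio.example.ch/group/", "http://proteomics.example.de/"),
        ("http://virology.example.fr/", "http://genomics.example.ac.uk/"),
    ]
    print(international_link_summary(links).most_common())
    ```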
  4. Wang, P.; Li, X.: Assessing the quality of information on Wikipedia : a deep-learning approach (2020) 0.01
    0.006290222 = product of:
      0.044031553 = sum of:
        0.044031553 = weight(_text_:techniques in 5505) [ClassicSimilarity], result of:
          0.044031553 = score(doc=5505,freq=2.0), product of:
            0.18093403 = queryWeight, product of:
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.04107254 = queryNorm
            0.24335694 = fieldWeight in 5505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.405231 = idf(docFreq=1467, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5505)
      0.14285715 = coord(1/7)
    
    Abstract
    Web document repositories such as Wikipedia are now collaboratively created and edited, and Wikipedia faces an important problem: assessing the quality of its articles. Existing approaches exploit techniques such as statistical models or machine learning algorithms to assess Wikipedia article quality, but they do not provide satisfactory results and fail to adopt a comprehensive feature framework. In this article, we conduct an extensive survey of previous studies and summarize a comprehensive feature framework, including text statistics, writing style, readability, article structure, network, and editing history. Selected state-of-the-art deep-learning models, including the convolutional neural network (CNN), deep neural network (DNN), long short-term memory (LSTM) networks, CNN-LSTMs, bidirectional LSTMs, and stacked LSTMs, are applied to assess the quality of Wikipedia articles. A detailed comparison of the deep-learning models is conducted with regard to classification performance and training performance. We also include an importance analysis of individual features and feature sets to determine which are most effective in distinguishing Wikipedia article quality. This extensive experiment validates the effectiveness of the proposed model.
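    For a flavour of the surface features such a framework covers (the paper's actual feature set is richer), here is a small sketch of two text-statistics/readability features; the Flesch reading ease formula is used only as an example readability measure, not as the paper's specific choice:

    ```python
    import re

    def text_features(article: str) -> dict:
        """Illustrative text-statistics and readability features for an article."""
        words = re.findall(r"[A-Za-z']+", article)
        sentences = [s for s in re.split(r"[.!?]+", article) if s.strip()]
        syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
        n_words, n_sents = len(words), max(1, len(sentences))
        return {
            "word_count": n_words,
            "avg_sentence_length": n_words / n_sents,
            # Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
            "flesch_reading_ease": 206.835 - 1.015 * (n_words / n_sents)
                                   - 84.6 * (syllables / max(1, n_words)),
        }

    print(text_features("Wikipedia is a free online encyclopedia. Anyone can edit it."))
    ```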
  5. Li, X.: A new robust relevance model in the language model framework (2008) 0.01
    0.005311793 = product of:
      0.03718255 = sum of:
        0.03718255 = weight(_text_:processing in 2076) [ClassicSimilarity], result of:
          0.03718255 = score(doc=2076,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.22363065 = fieldWeight in 2076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2076)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 44(2008) no.3, S.991-1007
  6. Li, X.; Zhang, A.; Li, C.; Ouyang, J.; Cai, Y.: Exploring coherent topics by topic modeling with term weighting (2018) 0.01
    0.005311793 = product of:
      0.03718255 = sum of:
        0.03718255 = weight(_text_:processing in 5045) [ClassicSimilarity], result of:
          0.03718255 = score(doc=5045,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.22363065 = fieldWeight in 5045, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5045)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 54(2018) no.6, S.1345-1358
  7. Li, X.; Rijke, M.de: Characterizing and predicting downloads in academic search (2019) 0.01
    0.005311793 = product of:
      0.03718255 = sum of:
        0.03718255 = weight(_text_:processing in 5103) [ClassicSimilarity], result of:
          0.03718255 = score(doc=5103,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.22363065 = fieldWeight in 5103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5103)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 56(2019) no.3, S.394-407
  8. Li, X.; Schijvenaars, B.J.A.; Rijke, M.de: Investigating queries and search failures in academic search (2017) 0.00
    0.0042494345 = product of:
      0.02974604 = sum of:
        0.02974604 = weight(_text_:processing in 5033) [ClassicSimilarity], result of:
          0.02974604 = score(doc=5033,freq=2.0), product of:
            0.1662677 = queryWeight, product of:
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.04107254 = queryNorm
            0.17890452 = fieldWeight in 5033, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.048147 = idf(docFreq=2097, maxDocs=44218)
              0.03125 = fieldNorm(doc=5033)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 53(2017) no.3, S.666-683
  9. Li, X.; Thelwall, M.; Kousha, K.: The role of arXiv, RePEc, SSRN and PMC in formal scholarly communication (2015) 0.00
    0.0019874151 = product of:
      0.013911906 = sum of:
        0.013911906 = product of:
          0.027823811 = sum of:
            0.027823811 = weight(_text_:22 in 2593) [ClassicSimilarity], result of:
              0.027823811 = score(doc=2593,freq=2.0), product of:
                0.14382903 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04107254 = queryNorm
                0.19345059 = fieldWeight in 2593, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2593)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    20. 1.2015 18:30:22