Search (10 results, page 1 of 1)

  • author_ss:"Li, X."
  1. Lu, W.; Li, X.; Liu, Z.; Cheng, Q.: How do author-selected keywords function semantically in scientific manuscripts? (2019) 0.02
    0.021718726 = product of:
      0.0868749 = sum of:
        0.049294014 = weight(_text_:studies in 5453) [ClassicSimilarity], result of:
          0.049294014 = score(doc=5453,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.3117402 = fieldWeight in 5453, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5453)
        0.03758089 = product of:
          0.07516178 = sum of:
            0.07516178 = weight(_text_:area in 5453) [ClassicSimilarity], result of:
              0.07516178 = score(doc=5453,freq=4.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.38494104 = fieldWeight in 5453, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5453)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
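    The nested breakdown above is a Lucene "explain" trace for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch of how the 0.0217 arises (function and variable names are my own, assuming Lucene's documented formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq + 1))), the score can be reproduced from the quoted factors:

    ```python
    import math

    def classic_similarity_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        """One term's contribution under Lucene's ClassicSimilarity:
        score = queryWeight * fieldWeight = (idf * queryNorm) * (tf * idf * fieldNorm)."""
        tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf = 1 + ln(maxDocs / (docFreq + 1))
        query_weight = idf * query_norm
        field_weight = tf * idf * field_norm
        return query_weight * field_weight

    # The two legs of result 1 (doc 5453), using the factors quoted in the trace.
    query_norm = 0.03962768
    studies = classic_similarity_term_score(4.0, 2222, 44218, query_norm, 0.0390625)
    area = classic_similarity_term_score(4.0, 870, 44218, query_norm, 0.0390625)

    # "area" sits under a coord(1/2) sub-query; the whole sum is scaled by coord(2/8).
    total = (studies + area * 0.5) * (2.0 / 8.0)
    print(round(studies, 6))  # → 0.049294
    print(round(total, 6))    # → 0.021719
    ```

    The coord factors down-weight documents that match only some of the eight query clauses, which is why even the top-ranked result scores far below 1.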
    
    Abstract
    Author-selected keywords have been widely utilized for indexing, information retrieval, bibliometrics and knowledge organization in previous studies. However, few studies exist concerning how author-selected keywords function semantically in scientific manuscripts. In this paper, we investigated this problem from the perspective of term function (TF) by devising indicators of the diversity and symmetry of keyword term functions in papers, as well as the intensity of individual term functions in papers. The data obtained from the whole Journal of Informetrics (JOI) were manually processed by an annotation scheme of keyword term functions, including "research topic," "research method," "research object," "research area," "data" and "others," based on empirical work in content analysis. The results show, quantitatively, that the diversity of keyword term function decreases, and the irregularity increases with the number of author-selected keywords in a paper. Moreover, the distribution of the intensity of individual keyword term function indicated that no significant difference exists between the ranking of the five term functions with the increase of the number of author-selected keywords (i.e., "research topic" > "research method" > "research object" > "research area" > "data"). The findings indicate that precise keyword-related research must take into account the distinct types of author-selected keywords.
  2. Li, X.: Designing an interactive Web tutorial with cross-browser dynamic HTML (2000) 0.01
    0.01111404 = product of:
      0.04445616 = sum of:
        0.02834915 = weight(_text_:libraries in 4897) [ClassicSimilarity], result of:
          0.02834915 = score(doc=4897,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 4897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=4897)
        0.01610701 = product of:
          0.03221402 = sum of:
            0.03221402 = weight(_text_:22 in 4897) [ClassicSimilarity], result of:
              0.03221402 = score(doc=4897,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.23214069 = fieldWeight in 4897, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4897)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    Texas A&M University Libraries developed a Web-based training (WBT) application for LandView III, a federal depository CD-ROM publication, using cross-browser dynamic HTML (DHTML) and other Web technologies. The interactive and self-paced tutorial demonstrates the major features of the CD-ROM and shows how to navigate the programs. The tutorial features dynamic HTML techniques, such as hiding, showing and moving layers; dragging objects; and windows-style drop-down menus. It also integrates interactive forms, common gateway interface (CGI), frames, and animated GIF images in the design of the WBT. After describing the design and implementation of the tutorial project, the article evaluates usage statistics and user feedback, assesses the tutorial's strengths and weaknesses, and compares it with other common types of training methods. The present article describes an innovative approach to CD-ROM training using advanced Web technologies such as dynamic HTML, which can simulate and demonstrate the interactive use of the CD-ROM, as well as the actual search process of a database.
    Date
    28. 1.2006 19:21:22
  3. Li, X.; Thelwall, M.; Kousha, K.: The role of arXiv, RePEc, SSRN and PMC in formal scholarly communication (2015) 0.01
    0.009999052 = product of:
      0.07999241 = sum of:
        0.07999241 = sum of:
          0.0531474 = weight(_text_:area in 2593) [ClassicSimilarity], result of:
            0.0531474 = score(doc=2593,freq=2.0), product of:
              0.1952553 = queryWeight, product of:
                4.927245 = idf(docFreq=870, maxDocs=44218)
                0.03962768 = queryNorm
              0.27219442 = fieldWeight in 2593, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.927245 = idf(docFreq=870, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2593)
          0.026845016 = weight(_text_:22 in 2593) [ClassicSimilarity], result of:
            0.026845016 = score(doc=2593,freq=2.0), product of:
              0.13876937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03962768 = queryNorm
              0.19345059 = fieldWeight in 2593, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2593)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - The four major Subject Repositories (SRs), arXiv, Research Papers in Economics (RePEc), Social Science Research Network (SSRN) and PubMed Central (PMC), are all important within their disciplines, but no previous study has systematically compared how often they are cited in academic publications. In response, the purpose of this paper is to report an analysis of citations to SRs from Scopus publications, 2000-2013.
    Design/methodology/approach - Scopus searches were used to count the number of documents citing the four SRs in each year. A random sample of 384 documents citing the four SRs was then visited to investigate the nature of the citations.
    Findings - Each SR was most cited within its own subject area but attracted substantial citations from other subject areas, suggesting that they are open to interdisciplinary uses. The proportion of documents citing each SR is continuing to increase rapidly, and the SRs all seem to attract substantial numbers of citations from more than one discipline.
    Research limitations/implications - Scopus does not cover all publications, and most citations to documents found in the four SRs presumably cite the published version, when one exists, rather than the repository version.
    Practical implications - SRs are continuing to grow and do not seem to be threatened by institutional repositories, and so research managers should encourage their continued use within their core disciplines, including for research that aims at an audience in other disciplines.
    Originality/value - This is the first simultaneous analysis of Scopus citations to the four most popular SRs.
    Date
    20. 1.2015 18:30:22
  4. Thelwall, M.; Li, X.; Barjak, F.; Robinson, S.: Assessing the international web connectivity of research groups (2008) 0.01
    0.007479902 = product of:
      0.059839215 = sum of:
        0.059839215 = weight(_text_:case in 1401) [ClassicSimilarity], result of:
          0.059839215 = score(doc=1401,freq=4.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.34346986 = fieldWeight in 1401, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1401)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - The purpose of this paper is to claim that it is useful to assess the web connectivity of research groups, describe hyperlink-based techniques to achieve this, and present brief details of European life sciences research groups as a case study.
    Design/methodology/approach - A commercial search engine was harnessed to deliver hyperlink data via its automatic query submission interface. A special-purpose link analysis tool, LexiURL, then summarised and graphed the link data in appropriate ways.
    Findings - Webometrics can provide a wide range of descriptive information about the international connectivity of research groups.
    Research limitations/implications - Only one field was analysed, data was taken from only one search engine, and the results were not validated.
    Practical implications - Web connectivity seems to be particularly important for attracting overseas job applicants and promoting research achievements and capabilities, and hence we contend that it can be useful for national and international governments to use webometrics to ensure that the web is being used effectively by research groups.
    Originality/value - This is the first paper to make a case for the value of using a range of webometric techniques to evaluate the web presences of research groups within a field, and possibly the first "applied" webometrics study produced for an external contract.
  5. Yan, X.; Li, X.; Song, D.: A correlation analysis on LSA and HAL semantic space models (2004) 0.01
    0.00690405 = product of:
      0.0552324 = sum of:
        0.0552324 = product of:
          0.1104648 = sum of:
            0.1104648 = weight(_text_:area in 2152) [ClassicSimilarity], result of:
              0.1104648 = score(doc=2152,freq=6.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.5657455 = fieldWeight in 2152, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2152)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Abstract
    In this paper, we compare a well-known semantic space model, Latent Semantic Analysis (LSA), with another model, Hyperspace Analogue to Language (HAL), which is widely used in different areas, especially in automatic query refinement. We conduct this comparative analysis to prove our hypothesis that, with respect to the ability of extracting lexical information from a corpus of text, LSA is quite similar to HAL. We regard HAL and LSA as black boxes. Through a Pearson's correlation analysis of the outputs of these two black boxes, we conclude that LSA highly correlates with HAL, and thus there is justification that LSA and HAL can potentially play a similar role in the area of facilitating automatic query refinement. This paper evaluates LSA in a new application area and contributes an effective way to compare different semantic space models.
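    The comparison this abstract describes boils down to Pearson's r between the two models' output scores. A minimal sketch, with invented similarity scores standing in for the LSA and HAL outputs (none of these numbers come from the paper):

    ```python
    import math

    def pearson(xs, ys):
        """Pearson's r between two equal-length numeric sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical similarity scores produced by the two "black boxes"
    # for the same five word pairs (illustrative values only).
    lsa_scores = [0.82, 0.31, 0.65, 0.12, 0.90]
    hal_scores = [0.78, 0.35, 0.60, 0.20, 0.85]
    print(round(pearson(lsa_scores, hal_scores), 3))  # → 0.998
    ```

    A high r on such paired outputs is the kind of evidence the paper uses to argue that the two models capture similar lexical information.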
  6. Li, X.; Rijke, M.de: Characterizing and predicting downloads in academic search (2019) 0.01
    0.0061617517 = product of:
      0.049294014 = sum of:
        0.049294014 = weight(_text_:studies in 5103) [ClassicSimilarity], result of:
          0.049294014 = score(doc=5103,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.3117402 = fieldWeight in 5103, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5103)
      0.125 = coord(1/8)
    
    Abstract
    Numerous studies have been conducted on the information interaction behavior of search engine users. Few studies have considered information interactions in the domain of academic search. We focus on conversion behavior in this domain. Conversions have been widely studied in the e-commerce domain, e.g., for online shopping and hotel booking, but little is known about conversions in academic search. We start with a description of a unique dataset of a particular type of conversion in academic search, viz. users' downloads of scientific papers. Then we move to an observational analysis of users' download actions. We first characterize user actions and show their statistics in sessions. Then we focus on behavioral and topical aspects of downloads, revealing behavioral correlations across download sessions. We discover unique properties that differ from other conversion settings such as online shopping. Using insights gained from these observations, we consider the task of predicting the next download. In particular, we focus on predicting the time until the next download session, and on predicting the number of downloads. We cast these as time series prediction problems and model them using LSTMs. We develop a specialized model built on user segmentations that achieves significant improvements over the state-of-the-art.
  7. Li, J.; Zhang, Z.; Li, X.; Chen, H.: Kernel-based learning for biomedical relation extraction (2008) 0.01
    0.00522842 = product of:
      0.04182736 = sum of:
        0.04182736 = weight(_text_:studies in 1611) [ClassicSimilarity], result of:
          0.04182736 = score(doc=1611,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 1611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=1611)
      0.125 = coord(1/8)
    
    Abstract
    Relation extraction is the process of scanning text for relationships between named entities. Recently, significant studies have focused on automatically extracting relations from biomedical corpora. Most existing biomedical relation extractors require manual creation of biomedical lexicons or parsing templates based on domain knowledge. In this study, we propose to use kernel-based learning methods to automatically extract biomedical relations from literature text. We develop a framework of kernel-based learning for biomedical relation extraction. In particular, we modified the standard tree kernel function by incorporating a trace kernel to capture richer contextual information. In our experiments on a biomedical corpus, we compare different kernel functions for biomedical relation detection and classification. The experimental results show that a tree kernel outperforms word and sequence kernels for relation detection, our trace-tree kernel outperforms the standard tree kernel, and a composite kernel outperforms individual kernels for relation extraction.
  8. Li, X.; Fullerton, J.P.: Create, edit, and manage Web database content using active server pages (2002) 0.01
    0.005011469 = product of:
      0.040091753 = sum of:
        0.040091753 = weight(_text_:libraries in 4793) [ClassicSimilarity], result of:
          0.040091753 = score(doc=4793,freq=4.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.30797386 = fieldWeight in 4793, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=4793)
      0.125 = coord(1/8)
    
    Abstract
    Libraries have been integrating active server pages (ASP) with Web-based databases for searching and retrieving electronic information for the past five years; however, a literature review reveals that a more complete description of modifying data through the Web interface is needed. At the Texas A&M University Libraries, a Web database of Internet links was developed using ASP, Microsoft Access, and Microsoft Internet Information Server (IIS) to facilitate use of online resources. The implementation of the Internet Links database is described with focus on its data management functions. Also described are other library applications of ASP technology. The project explores a more complete approach to library Web database applications than was found in the current literature and should serve to facilitate reference service.
  9. Wang, P.; Li, X.: Assessing the quality of information on Wikipedia : a deep-learning approach (2020) 0.00
    0.0043570166 = product of:
      0.034856133 = sum of:
        0.034856133 = weight(_text_:studies in 5505) [ClassicSimilarity], result of:
          0.034856133 = score(doc=5505,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 5505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5505)
      0.125 = coord(1/8)
    
    Abstract
    Currently, web document repositories are collaboratively created and edited. One of these repositories, Wikipedia, is facing an important problem: assessing the quality of Wikipedia. Existing approaches exploit techniques such as statistical models or machine learning algorithms to assess Wikipedia article quality. However, existing models do not provide satisfactory results. Furthermore, these models fail to adopt a comprehensive feature framework. In this article, we conduct an extensive survey of previous studies and summarize a comprehensive feature framework, including text statistics, writing style, readability, article structure, network, and editing history. Selected state-of-the-art deep-learning models, including the convolutional neural network (CNN), deep neural network (DNN), long short-term memory (LSTM) networks, CNN-LSTMs, bidirectional LSTMs, and stacked LSTMs, are applied to assess the quality of Wikipedia. A detailed comparison of deep-learning models is conducted with regard to different aspects: classification performance and training performance. We include an importance analysis of different features and feature sets to determine which features or feature sets are most effective in distinguishing Wikipedia article quality. This extensive experiment validates the effectiveness of the proposed model.
  10. Li, X.: Young people's information practices in library makerspaces (2021) 0.00
    0.0043570166 = product of:
      0.034856133 = sum of:
        0.034856133 = weight(_text_:studies in 245) [ClassicSimilarity], result of:
          0.034856133 = score(doc=245,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=245)
      0.125 = coord(1/8)
    
    Abstract
    While there have been a growing number of studies on makerspaces in different disciplines, little is known about how young people interact with information in makerspaces. This study aimed to unpack how young people (middle and high schoolers) sought, used, and shared information in voluntary free-choice library makerspace activities. Qualitative methods, including individual interviews, observations, photovoice, and focus groups, were used to elicit 21 participants' experiences at two library makerspaces. The findings showed that young people engaged in dynamic practices of information seeking, use, and sharing, and revealed how the historical, sociocultural, material, and technological contexts embedded in makerspace activities shaped these information practices. Information practices of tinkering, sensing, and imagining in makerspaces were highlighted. Various criteria that young people used in evaluating human sources and online information were identified as well. The study also demonstrated the communicative and collaborative aspects of young people's information practices through information sharing. The findings extended Savolainen's everyday information practices model and addressed the gap in the current literature on young people's information behavior and information practices. Understanding how young people interact with information in makerspaces can help makerspace facilitators and information professionals better support youth services and facilitate makerspace activities.