Search (12 results, page 1 of 1)

  • author_ss:"Liu, X."
  1. Chen, M.; Liu, X.; Qin, J.: Semantic relation extraction from socially-generated tags : a methodology for metadata generation (2008) 0.03
    Abstract
    The growing predominance of social semantics in the form of tagging presents the metadata community with both opportunities and challenges in leveraging this new form of information content representation for retrieval. One key challenge is the absence of contextual information associated with these tags. This paper presents an experiment working with Flickr tags as an example of utilizing social semantics sources for enriching subject metadata. The procedure included four steps: 1) collecting a sample of Flickr tags, 2) calculating co-occurrences between tags through mutual information, 3) tracing contextual information of tag pairs via Google search results, and 4) applying natural language processing and machine learning techniques to extract semantic relations between tags. The experiment helped us build a context sentence collection from the Google search results, which was then processed by natural language processing and machine learning algorithms. This new approach achieved a reasonably good rate of accuracy in assigning semantic relations to tag pairs. This paper also explores the implications of this approach for using social semantics to enrich subject metadata.
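Step 2 of the procedure above (scoring tag pairs by mutual information) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name and the photo-as-tag-set input format are assumptions.

```python
from collections import Counter
from itertools import combinations
from math import log2

def tag_pair_pmi(photos):
    """Pointwise mutual information for each co-occurring tag pair.

    `photos` is a list of tag sets, one per Flickr photo; returns
    {(tag_a, tag_b): pmi}, where a high PMI marks pairs that co-occur
    more often than their individual frequencies would predict.
    """
    n = len(photos)
    tag_counts = Counter(t for tags in photos for t in set(tags))
    pair_counts = Counter(
        pair
        for tags in photos
        for pair in combinations(sorted(set(tags)), 2)
    )
    return {
        (a, b): log2((c / n) / ((tag_counts[a] / n) * (tag_counts[b] / n)))
        for (a, b), c in pair_counts.items()
    }
```

High-PMI pairs (e.g. tags that nearly always appear together) are the candidates whose context sentences would then be traced via web search in step 3.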
    Date
    20. 2.2009 10:29:07
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Liu, X.: The standardization of Chinese library classification (1993) 0.03
    Abstract
    The standardization of Chinese materials classification was first proposed in the late 1970s in China. In December 1980, the CCDST, the Chinese Library Association, and the Chinese Society for Information Science proposed that the Chinese Library Classification system be adopted as a national standard. This marked the beginning of the standardization of Chinese materials classification. Later, many conferences and workshops were held, and four draft national standards were discussed: those for the Chinese Library Classification system, the Materials Classification System, the Rules for Thesaurus and Subject Headings, and the Rules for Materials Classifying Color Recognition. This article gives a brief review of the historical development of the standardization of Chinese library classification. It also discusses its effects on automation, networking, and resource sharing, and the feasibility of adopting the Chinese Library Classification as a national standard. In addition, the article covers the main content of the standardization of materials classification, the use of the national standard classification system, and variations under the standard system.
    Date
    8.10.2000 14:29:26
  3. Zhang, C.; Liu, X.; Xu, Y.(C.); Wang, Y.: Quality-structure index : a new metric to measure scientific journal influence (2011) 0.02
    Abstract
    An innovative model to measure the influence among scientific journals is developed in this study. This model is based on the path analysis of a journal citation network, and its output is a journal influence matrix that describes the directed influence among all journals. Based on this model, an index of journals' overall influence, the quality-structure index (QSI), is derived. Journal ranking based on QSI has the advantage of accounting for both intrinsic journal quality and the structural position of a journal in a citation network. The QSI also integrates the characteristics of two prevailing streams of journal-assessment measures: those based on bibliometric statistics to approximate intrinsic journal quality, such as the Journal Impact Factor, and those using a journal's structural position based on the PageRank-type of algorithm, such as the Eigenfactor score. Empirical results support our finding that the new index is significantly closer to scholars' subjective perception of journal influence than are the two aforementioned measures. In addition, the journal influence matrix offers a new way to measure two-way influences between any two academic journals, hence establishing a theoretical basis for future scientometrics studies to investigate the knowledge flow within and across research disciplines.
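The influence matrix described above rests on path analysis of the citation network: a journal's influence travels both directly and through intermediate journals. A minimal sketch of that path-summation idea (Katz-style attenuation; the function name, the damping factor, and the truncated-series approach are assumptions, not the paper's QSI formula):

```python
def influence_matrix(C, alpha=0.5, iters=50):
    """Accumulate journal-to-journal influence over citation paths.

    C[i][j] is a normalized citation weight from journal i to journal j.
    Returns M ~= sum over k >= 1 of (alpha*C)^k, so M[i][j] combines
    direct influence (length-1 paths) with indirect influence through
    intermediate journals (longer paths), damped by alpha per step.
    """
    n = len(C)
    A = [[alpha * C[i][j] for j in range(n)] for i in range(n)]
    M = [row[:] for row in A]            # the k = 1 term
    P = [row[:] for row in A]            # current power (alpha*C)^k
    for _ in range(iters - 1):
        P = [[sum(P[i][m] * A[m][j] for m in range(n)) for j in range(n)]
             for i in range(n)]
        M = [[M[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return M
```

In a two-step citation chain 0 -> 1 -> 2, journal 0 accrues indirect influence on journal 2 even though it never cites it directly, which is exactly what per-journal statistics such as raw citation counts miss.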
  4. Liu, X.; Bu, Y.; Li, M.; Li, J.: Monodisciplinary collaboration disrupts science more than multidisciplinary collaboration (2024) 0.02
    Abstract
    Collaboration across disciplines is a critical form of scientific collaboration to solve complex problems and make innovative contributions. This study focuses on the association between multidisciplinary collaboration measured by coauthorship in publications and the disruption of publications measured by the Disruption (D) index. We used authors' affiliations as a proxy of the disciplines to which they belong and categorized an article into multidisciplinary collaboration or monodisciplinary collaboration. The D index quantifies the extent to which a study disrupts its predecessors. We selected 13 journals that publish articles in six disciplines from the Microsoft Academic Graph (MAG) database and then constructed regression models with fixed effects and estimated the relationship between the variables. The findings show that articles with monodisciplinary collaboration are more disruptive than those with multidisciplinary collaboration. Furthermore, we uncovered the mechanism of how monodisciplinary collaboration disrupts science more than multidisciplinary collaboration by exploring the references of the sampled publications.
  5. Liu, X.; Zheng, W.; Fang, H.: An exploration of ranking models and feedback method for related entity finding (2013) 0.01
    Abstract
    Most existing search engines focus on document retrieval. However, information needs are certainly not limited to finding relevant documents. Instead, a user may want to find relevant entities such as persons and organizations. In this paper, we study the problem of related entity finding. Our goal is to rank entities based on their relevance to a structured query, which specifies an input entity, the type of related entities and the relation between the input and related entities. We first discuss a general probabilistic framework, derive six possible retrieval models to rank the related entities, and then compare these models both analytically and empirically. To further improve performance, we study the problem of feedback in the context of related entity finding. Specifically, we propose a mixture model based feedback method that can utilize the pseudo feedback entities to estimate an enriched model for the relation between the input and related entities. Experimental results over two standard TREC collections show that the derived relation generation model combined with a relation feedback method performs better than other models.
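The probabilistic framework above derives several retrieval models; a common building block for such language-model rankers is a Dirichlet-smoothed query likelihood. The sketch below shows that building block only, not the paper's six entity-ranking models or its relation feedback method; all names and the token-list interface are assumptions.

```python
from collections import Counter
from math import log

def query_likelihood(query, doc, collection, mu=2000):
    """Dirichlet-smoothed log P(query | doc), a standard LM retrieval score.

    `query`, `doc`, and `collection` are token lists; terms unseen in the
    document fall back on their collection frequency, weighted by the
    smoothing prior `mu`.
    """
    d, c = Counter(doc), Counter(collection)
    dlen, clen = len(doc), len(collection)
    score = 0.0
    for t in query:
        p = (d[t] + mu * c[t] / clen) / (dlen + mu)
        if p == 0.0:
            return float("-inf")     # term absent from doc and collection
        score += log(p)
    return score
```

Candidate entities (represented by their context documents) would be ranked by this score, and pseudo-feedback entities could then re-estimate the relation model, as in the mixture-model method the abstract describes.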
  6. Liu, X.; Guo, C.; Zhang, L.: Scholar metadata and knowledge generation with human and artificial intelligence (2014) 0.01
    Abstract
    Scholar metadata have traditionally centered on descriptive representations, which have been used as a foundation for scholarly publication repositories and academic information retrieval systems. In this article, we propose innovative and economic methods of generating knowledge-based structural metadata (structural keywords) using a combination of natural language processing-based machine-learning techniques and human intelligence. By allowing low-barrier participation through a social media system, scholars (both as authors and users) can participate in the metadata editing and enhancing process and benefit from more accurate and effective information retrieval. Our experimental web system ScholarWiki uses machine learning techniques, which automatically produce increasingly refined metadata by learning from the structural metadata contributed by scholars. The accumulated structural metadata add intelligence and recursively enhance and update the quality of the metadata, the wiki pages, and the machine-learning model.
  7. Liu, X.; Qin, J.: An interactive metadata model for structural, descriptive, and referential representation of scholarly output (2014) 0.01
    Abstract
    The scientific metadata model proposed in this article encompasses both classical descriptive metadata such as those defined in the Dublin Core Metadata Element Set (DC) and the innovative structural and referential metadata properties that go beyond the classical model. Structural metadata capture the structural vocabulary in research publications; referential metadata include not only citations but also data about other types of scholarly output that is based on or related to the same publication. The article describes the structural, descriptive, and referential (SDR) elements of the metadata model and explains the underlying assumptions and justifications for each major component in the model. ScholarWiki, an experimental system developed as a proof of concept, was built over the wiki platform to allow user interaction with the metadata and the editing, deleting, and adding of metadata. By allowing and encouraging scholars (both as authors and as users) to participate in the knowledge and metadata editing and enhancing process, the larger community will benefit from more accurate and effective information retrieval. The ScholarWiki system utilizes machine-learning techniques that can automatically produce self-enhanced metadata by learning from the structural metadata that scholars contribute, which will add intelligence to automatically enhance and update the publication metadata wiki pages.
  8. Liu, X.; Jia, H.: Answering academic questions for education by recommending cyberlearning resources (2013) 0.01
    Abstract
    In this study, we design an innovative method for answering students' or scholars' academic questions (for a specific scientific publication) by automatically recommending e-learning resources in a cyber-infrastructure-enabled learning environment to enhance the learning experiences of students and scholars. By using information retrieval and metasearch methodologies, different types of referential metadata (related Wikipedia pages, data sets, source code, video lectures, presentation slides, and online tutorials) for an assortment of publications and scientific topics will be automatically retrieved, associated, and ranked (via the language model and the inference network model) to provide easily understandable cyberlearning resources to answer students' questions. We also designed an experimental system to automatically answer students' questions for a specific academic publication and then evaluated the quality of the answers (the recommended resources) using mean reciprocal rank and normalized discounted cumulative gain. After examining preliminary evaluation results and student feedback, we found that cyberlearning resources can provide high-quality and straightforward answers for students' and scholars' questions concerning the content of academic publications.
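The two evaluation measures named above, mean reciprocal rank and normalized discounted cumulative gain, have standard definitions that can be sketched briefly (the function signatures and input formats here are illustrative assumptions):

```python
from math import log2

def mean_reciprocal_rank(runs):
    """MRR: each run is a list of 0/1 relevance flags in rank order."""
    total = 0.0
    for flags in runs:
        rank = next((i + 1 for i, rel in enumerate(flags) if rel), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(runs)

def ndcg(gains, k=None):
    """Normalized discounted cumulative gain at rank k (graded gains)."""
    k = k or len(gains)
    dcg = sum(g / log2(i + 2) for i, g in enumerate(gains[:k]))
    ideal = sum(g / log2(i + 2)
                for i, g in enumerate(sorted(gains, reverse=True)[:k]))
    return dcg / ideal if ideal else 0.0
```

MRR rewards placing the first relevant resource near the top of the answer list, while nDCG credits the graded quality of the whole ranking, which is why the two are commonly reported together for recommendation tasks like this one.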
  9. Chen, Z.; Huang, Y.; Tian, J.; Liu, X.; Fu, K.; Huang, T.: Joint model for subsentence-level sentiment analysis with Markov logic (2015) 0.00
    Abstract
    Sentiment analysis mainly focuses on the study of one's opinions that express positive or negative sentiments. With the explosive growth of web documents, sentiment analysis is becoming a hot topic in both academic research and system design. Fine-grained sentiment analysis is traditionally solved as a 2-step strategy, which results in cascade errors. Although joint models, such as joint sentiment/topic and maximum entropy (MaxEnt)/latent Dirichlet allocation, are proposed to tackle this problem of sentiment analysis, they focus on the joint learning of both aspects and sentiments. Thus, they are not appropriate to solve the cascade errors for sentiment analysis at the sentence or subsentence level. In this article, we present a novel jointly fine-grained sentiment analysis framework at the subsentence level with Markov logic. First, we divide the task into 2 separate stages (subjectivity classification and polarity classification). Then, the 2 separate stages are processed, respectively, with different feature sets, which are implemented by local formulas in Markov logic. Finally, global formulas in Markov logic are adopted to realize the interactions of the 2 separate stages. The joint inference of subjectivity and polarity helps prevent cascade errors. Experiments on a Chinese sentiment data set demonstrate that our joint model brings significant improvements.
  10. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.00
    Date
    1. 6.2010 9:29:57
  11. Clewley, N.; Chen, S.Y.; Liu, X.: Cognitive styles and search engine preferences : field dependence/independence vs holism/serialism (2010) 0.00
    Date
    29. 8.2010 13:11:47
  12. Jiang, Z.; Liu, X.; Chen, Y.: Recovering uncaptured citations in a scholarly network : a two-step citation analysis to estimate publication importance (2016) 0.00
    Date
    12. 6.2016 20:31:29