Search (8 results, page 1 of 1)

  • author_ss:"Li, J."
  • language_ss:"e"
  • type_ss:"a"
  1. Zhu, Q.; Kong, X.; Hong, S.; Li, J.; He, Z.: Global ontology research progress : a bibliometric analysis (2015) 0.02
    0.022750946 = product of:
      0.1592566 = sum of:
        0.1592566 = sum of:
          0.12017425 = weight(_text_:asia in 2590) [ClassicSimilarity], result of:
            0.12017425 = score(doc=2590,freq=2.0), product of:
              0.29789865 = queryWeight, product of:
                7.3024383 = idf(docFreq=80, maxDocs=44218)
                0.04079441 = queryNorm
              0.4034065 = fieldWeight in 2590, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.3024383 = idf(docFreq=80, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2590)
          0.039082356 = weight(_text_:22 in 2590) [ClassicSimilarity], result of:
            0.039082356 = score(doc=2590,freq=4.0), product of:
              0.14285508 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04079441 = queryNorm
              0.27358043 = fieldWeight in 2590, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2590)
      0.14285715 = coord(1/7)
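
The explain tree above is Lucene's ClassicSimilarity breakdown: each matching term contributes queryWeight (idf × queryNorm) times fieldWeight (sqrt(freq) × idf × fieldNorm), and the per-term scores are summed and scaled by the coord factor. A minimal sketch reproducing the displayed numbers (all values are copied from the tree; nothing else is assumed). The same structure applies to the score breakdowns of the remaining results.

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    query_weight = idf * query_norm        # queryWeight = idf * queryNorm
    tf = math.sqrt(freq)                   # ClassicSimilarity: tf = sqrt(freq)
    field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

asia = term_score(freq=2.0, idf=7.3024383, query_norm=0.04079441, field_norm=0.0390625)
t22  = term_score(freq=4.0, idf=3.5018296, query_norm=0.04079441, field_norm=0.0390625)
print(asia, t22, (asia + t22) * (1 / 7))   # ~0.1202, ~0.0391, ~0.02275 = coord(1/7) * sum
```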
    
    Abstract
    Purpose - The purpose of this paper is to analyse the global scientific output of ontology research, an important emerging discipline with huge potential to improve the understanding, organization, and management of information. Design/methodology/approach - This study collected literature published between 1900 and 2012 from the Web of Science database. The bibliometric analysis was performed along authorial, institutional, national, spatiotemporal, and topical dimensions. Basic statistical analysis, visualization of geographic distribution, co-word analysis, and a new index were applied to the selected data. Findings - The characteristics of publication output suggest that ontology research has entered a stage of rapid growth, with increasing participation and collaboration. The authors identified the leading authors, institutions, nations, and articles in ontology research. Authors came predominantly from North America, Europe, and East Asia; the USA took the lead, while China grew fastest. Four major categories of frequently used keywords were identified: applications in the Semantic Web, applications in bioinformatics, philosophical theories, and common supporting technologies. Semantic Web research played a core role, and gene ontology research was well developed. The focus of ontology research has shifted from philosophy to information science. Originality/value - This is the first study to quantify global research patterns and trends in ontology, which may provide a guide for future research. The new index offers an alternative way to evaluate the multidisciplinary influence of researchers.
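
The co-word analysis mentioned in the abstract comes down to counting how often keyword pairs appear together in the same record. A small illustrative sketch; the sample records and keywords below are invented, and a real analysis would read the keyword fields from Web of Science exports.

```python
from itertools import combinations
from collections import Counter

# Toy records: each is the keyword set of one publication.
records = [
    {"ontology", "semantic web", "OWL"},
    {"ontology", "gene ontology", "bioinformatics"},
    {"ontology", "semantic web", "reasoning"},
]

cooc = Counter()
for keywords in records:
    for a, b in combinations(sorted(keywords), 2):
        cooc[(a, b)] += 1            # count each co-occurring keyword pair

print(cooc.most_common(3))           # most frequent keyword pairs
```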
    Date
    20. 1.2015 18:30:22
    17. 9.2018 18:22:23
  2. Li, J.; Zhang, P.; Song, D.; Wu, Y.: Understanding an enriched multidimensional user relevance model by analyzing query logs (2017) 0.01
    0.008699203 = product of:
      0.06089442 = sum of:
        0.06089442 = weight(_text_:studies in 3961) [ClassicSimilarity], result of:
          0.06089442 = score(doc=3961,freq=4.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.37408823 = fieldWeight in 3961, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=3961)
      0.14285715 = coord(1/7)
    
    Abstract
    Modeling multidimensional relevance in information retrieval (IR) has attracted much attention in recent years. However, most existing studies are conducted through relatively small-scale user studies, which may not reflect a real-world, natural search scenario. In this article, we propose to study the multidimensional user relevance model (MURM) on large-scale query logs, which record users' various search behaviors (e.g., query reformulations, clicks, and dwelling time) in natural search settings. We extend an existing MURM (comprising five dimensions: topicality, novelty, reliability, understandability, and scope) with two additional dimensions, interest and habit, which represent personalized relevance judgments on retrieved documents. Further, for each dimension in the enriched MURM, a set of computable features is formulated. By conducting extensive document ranking experiments on Bing's query logs and TREC Session Track data, we systematically investigate the impact of each dimension on retrieval performance and obtain a series of findings that may benefit the design of future IR systems.
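
As an illustration of how a multidimensional relevance model can be turned into a ranking score, the sketch below combines per-dimension feature values with a weighted sum. The dimension names come from the abstract; the weights, feature values, and the linear combination itself are assumptions for demonstration, not the authors' formulation.

```python
# Dimension names from the enriched MURM; weights and values are invented.
WEIGHTS = {
    "topicality": 0.30, "novelty": 0.10, "reliability": 0.15,
    "understandability": 0.10, "scope": 0.05, "interest": 0.20, "habit": 0.10,
}

def murm_score(features):
    """features: dict mapping dimension name -> normalized value in [0, 1]."""
    return sum(WEIGHTS[dim] * features.get(dim, 0.0) for dim in WEIGHTS)

docs = {
    "doc_a": {"topicality": 0.9, "novelty": 0.4, "reliability": 0.8,
              "understandability": 0.7, "scope": 0.5, "interest": 0.6, "habit": 0.3},
    "doc_b": {"topicality": 0.6, "novelty": 0.9, "reliability": 0.5,
              "understandability": 0.8, "scope": 0.4, "interest": 0.2, "habit": 0.9},
}
ranking = sorted(docs, key=lambda d: murm_score(docs[d]), reverse=True)
print(ranking)   # documents ordered by combined multidimensional relevance
```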
  3. Li, J.; Zhang, Z.; Li, X.; Chen, H.: Kernel-based learning for biomedical relation extraction (2008) 0.01
    0.006151265 = product of:
      0.043058854 = sum of:
        0.043058854 = weight(_text_:studies in 1611) [ClassicSimilarity], result of:
          0.043058854 = score(doc=1611,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.26452032 = fieldWeight in 1611, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=1611)
      0.14285715 = coord(1/7)
    
    Abstract
    Relation extraction is the process of scanning text for relationships between named entities. Recently, a significant body of work has focused on automatically extracting relations from biomedical corpora. Most existing biomedical relation extractors require the manual creation of biomedical lexicons or parsing templates based on domain knowledge. In this study, we propose to use kernel-based learning methods to automatically extract biomedical relations from literature text. We develop a framework of kernel-based learning for biomedical relation extraction; in particular, we modify the standard tree kernel function by incorporating a trace kernel to capture richer contextual information. In experiments on a biomedical corpus, we compare different kernel functions for biomedical relation detection and classification. The experimental results show that a tree kernel outperforms word and sequence kernels for relation detection, that our trace-tree kernel outperforms the standard tree kernel, and that a composite kernel outperforms individual kernels for relation extraction.
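
The composite-kernel idea from the abstract can be illustrated by mixing two Gram matrices and training an SVM on the result. In the sketch below the two feature views (standing in for tree-structure features and surface word features), the toy labels, and the mixing weight alpha are all assumptions; the paper's actual tree and trace kernels are considerably more involved.

```python
import numpy as np
from sklearn.svm import SVC

def linear_gram(X):
    return X @ X.T                            # linear kernel as a stand-in

# Toy feature views for 4 candidate entity pairs (rows).
X_tree = np.array([[1, 0, 2], [0, 1, 1], [2, 0, 1], [0, 2, 0]], dtype=float)
X_word = np.array([[3, 1], [0, 2], [2, 1], [1, 3]], dtype=float)
y = np.array([1, 0, 1, 0])                    # 1 = relation present

alpha = 0.6                                   # assumed mixing weight
K = alpha * linear_gram(X_tree) + (1 - alpha) * linear_gram(X_word)

clf = SVC(kernel="precomputed").fit(K, y)     # SVM over the composite kernel
print(clf.predict(K))                         # predictions on the same toy pairs
```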
  4. Zhang, C.; Zeng, D.; Li, J.; Wang, F.-Y.; Zuo, W.: Sentiment analysis of Chinese documents : from sentence to document level (2009) 0.01
    0.006151265 = product of:
      0.043058854 = sum of:
        0.043058854 = weight(_text_:studies in 3296) [ClassicSimilarity], result of:
          0.043058854 = score(doc=3296,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.26452032 = fieldWeight in 3296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=3296)
      0.14285715 = coord(1/7)
    
    Abstract
    User-generated content on the Web has become an extremely valuable source for mining and analyzing user opinions on any topic. Recent years have seen an increasing body of work investigating methods to recognize favorable and unfavorable sentiments toward specific subjects in online text. However, most of these efforts focus on English, and there have been very few studies of sentiment analysis on Chinese content. This paper addresses the unique challenges posed by Chinese sentiment analysis. We propose a rule-based approach with two phases: (1) determining each sentence's sentiment based on word dependency, and (2) aggregating sentences to predict the document sentiment. We report the results of an experimental study comparing our approach with three machine learning-based approaches on two sets of Chinese articles. The results illustrate the effectiveness of the proposed method and its advantages over learning-based approaches.
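
A minimal sketch of the two-phase pipeline described in the abstract. Phase 1 (sentence polarity from word dependencies) is stubbed out with a trivial lexicon lookup, and phase 2 aggregates sentence polarities into a document label by summing; the lexicon, the aggregation rule, and the tie-breaking are illustrative assumptions, not the paper's rules.

```python
POSITIVE = {"good", "excellent", "like"}      # stand-in sentiment lexicon
NEGATIVE = {"bad", "poor", "dislike"}

def sentence_polarity(tokens):
    """Phase 1 stand-in: +1 / -1 / 0 per sentence (the paper uses dependency rules)."""
    score = sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)
    return (score > 0) - (score < 0)

def document_sentiment(sentences):
    """Phase 2: aggregate sentence polarities into a document-level label."""
    total = sum(sentence_polarity(s) for s in sentences)
    return "positive" if total > 0 else "negative" if total < 0 else "neutral"

doc = [["the", "camera", "is", "excellent"],
       ["battery", "life", "is", "poor"],
       ["overall", "i", "like", "it"]]
print(document_sentiment(doc))                # -> positive
```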
  5. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.01
    0.0051260544 = product of:
      0.03588238 = sum of:
        0.03588238 = weight(_text_:studies in 5816) [ClassicSimilarity], result of:
          0.03588238 = score(doc=5816,freq=2.0), product of:
            0.1627809 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.04079441 = queryNorm
            0.22043361 = fieldWeight in 5816, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5816)
      0.14285715 = coord(1/7)
    
    Abstract
    Millions of messages are produced on microblog platforms every day, creating a pressing need to automatically identify key points in this massive volume of text. To distill salient content from the vast bulk of microblog posts, this article focuses on the task of microblog keyphrase extraction. Most previous work treats messages as independent documents and may therefore suffer from the data sparsity exhibited by short and informal microblog posts. We instead propose to enrich contexts by exploiting conversations initiated by target posts and formed by their replies, which generally center on topics relevant to the target posts and are therefore helpful for keyphrase identification. Concretely, we present a neural keyphrase extraction framework with 2 modules: a conversation context encoder and a keyphrase tagger. The conversation context encoder captures an indicative representation from the conversation context and feeds it to the keyphrase tagger, which extracts salient words from the target post. The 2 modules are trained jointly to optimize the conversation context encoding and keyphrase extraction processes. In the conversation context encoder, we leverage hierarchical structures to capture word-level and message-level indicative representations. In both modules, we apply character-level representations, which enable the model to exploit morphological features and deal with the out-of-vocabulary problem caused by the informal language style of microblog messages. Extensive comparison results on real-life data sets indicate that our model outperforms state-of-the-art models from previous studies.
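
As a rough illustration (not the authors' implementation) of the two-module layout described in the abstract, the sketch below wires a conversation context encoder into a per-token keyphrase tagger. The use of GRUs, the dimensions, and the toy inputs are assumptions for demonstration; the paper's model additionally uses hierarchical and character-level representations and trains the two modules jointly.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID, TAGS = 100, 32, 64, 2        # TAGS: keyphrase vs. non-keyphrase

class ContextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.gru = nn.GRU(EMB, HID, batch_first=True)
    def forward(self, reply_ids):             # (batch, reply_len)
        _, h = self.gru(self.emb(reply_ids))
        return h.squeeze(0)                   # (batch, HID) conversation summary

class KeyphraseTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.gru = nn.GRU(EMB + HID, HID, batch_first=True)
        self.out = nn.Linear(HID, TAGS)
    def forward(self, post_ids, context):     # context comes from ContextEncoder
        e = self.emb(post_ids)                # (batch, post_len, EMB)
        ctx = context.unsqueeze(1).expand(-1, e.size(1), -1)
        h, _ = self.gru(torch.cat([e, ctx], dim=-1))
        return self.out(h)                    # (batch, post_len, TAGS) tag logits

encoder, tagger = ContextEncoder(), KeyphraseTagger()
post = torch.randint(0, VOCAB, (1, 12))       # toy target post
reply = torch.randint(0, VOCAB, (1, 30))      # toy concatenated replies
print(tagger(post, encoder(reply)).shape)     # torch.Size([1, 12, 2])
```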
  6. Lin, X.; Li, J.; Zhou, X.: Theme creation for digital collections (2008) 0.00
    0.00276354 = product of:
      0.019344779 = sum of:
        0.019344779 = product of:
          0.038689557 = sum of:
            0.038689557 = weight(_text_:22 in 2635) [ClassicSimilarity], result of:
              0.038689557 = score(doc=2635,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.2708308 = fieldWeight in 2635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2635)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  7. Li, J.; Shi, D.: Sleeping beauties in genius work : when were they awakened? (2016) 0.00
    0.0023687482 = product of:
      0.016581237 = sum of:
        0.016581237 = product of:
          0.033162475 = sum of:
            0.033162475 = weight(_text_:22 in 2647) [ClassicSimilarity], result of:
              0.033162475 = score(doc=2647,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.23214069 = fieldWeight in 2647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2647)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 1.2016 14:13:32
  8. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: ¬A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.00
    0.0019739573 = product of:
      0.0138177 = sum of:
        0.0138177 = product of:
          0.0276354 = sum of:
            0.0276354 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
              0.0276354 = score(doc=5276,freq=2.0), product of:
                0.14285508 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04079441 = queryNorm
                0.19345059 = fieldWeight in 5276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5276)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2006 16:14:37