Search (10 results, page 1 of 1)

  • author_ss:"Li, J."
  1. Li, J.; Shi, D.: Sleeping beauties in genius work : when were they awakened? (2016) 0.03
    0.031905405 = product of:
      0.06381081 = sum of:
        0.06381081 = sum of:
          0.021194918 = weight(_text_:2 in 2647) [ClassicSimilarity], result of:
            0.021194918 = score(doc=2647,freq=2.0), product of:
              0.1294644 = queryWeight, product of:
                2.4695914 = idf(docFreq=10170, maxDocs=44218)
                0.05242341 = queryNorm
              0.16371232 = fieldWeight in 2647, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4695914 = idf(docFreq=10170, maxDocs=44218)
                0.046875 = fieldNorm(doc=2647)
          0.04261589 = weight(_text_:22 in 2647) [ClassicSimilarity], result of:
            0.04261589 = score(doc=2647,freq=2.0), product of:
              0.18357785 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05242341 = queryNorm
              0.23214069 = fieldWeight in 2647, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2647)
      0.5 = coord(1/2)
    
    Date
    22. 1.2016 14:13:32
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.2, S.432-440
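    Note
    The relevance breakdown above is standard Lucene "explain" output for the ClassicSimilarity TF-IDF scorer. A minimal Python sketch of the arithmetic it reports, using only the factors shown (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and the final coord(1/2) factor); it is illustrative, not Lucene's actual code:

      import math

      def term_weight(freq, idf, query_norm, field_norm):
          query_weight = idf * query_norm        # queryWeight = idf * queryNorm
          tf = math.sqrt(freq)                   # tf(freq) = sqrt(freq)
          field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight     # weight = queryWeight * fieldWeight

      w_2  = term_weight(2.0, 2.4695914, 0.05242341, 0.046875)  # "_text_:2"  -> ~0.0211949
      w_22 = term_weight(2.0, 3.5018296, 0.05242341, 0.046875)  # "_text_:22" -> ~0.0426159
      print((w_2 + w_22) * 0.5)                                 # coord(1/2)  -> ~0.031905405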
  2. Zhu, Q.; Kong, X.; Hong, S.; Li, J.; He, Z.: Global ontology research progress : a bibliometric analysis (2015) 0.01
    0.012555826 = product of:
      0.025111653 = sum of:
        0.025111653 = product of:
          0.050223306 = sum of:
            0.050223306 = weight(_text_:22 in 2590) [ClassicSimilarity], result of:
              0.050223306 = score(doc=2590,freq=4.0), product of:
                0.18357785 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05242341 = queryNorm
                0.27358043 = fieldWeight in 2590, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2590)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
    17. 9.2018 18:22:23
  3. Lin, X.; Li, J.; Zhou, X.: Theme creation for digital collections (2008) 0.01
    0.012429634 = product of:
      0.024859268 = sum of:
        0.024859268 = product of:
          0.049718536 = sum of:
            0.049718536 = weight(_text_:22 in 2635) [ClassicSimilarity], result of:
              0.049718536 = score(doc=2635,freq=2.0), product of:
                0.18357785 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05242341 = queryNorm
                0.2708308 = fieldWeight in 2635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2635)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  4. Zheng, R.; Li, J.; Chen, H.; Huang, Z.: A framework for authorship identification of online messages : writing-style features and classification techniques (2006) 0.01
    0.00887831 = product of:
      0.01775662 = sum of:
        0.01775662 = product of:
          0.03551324 = sum of:
            0.03551324 = weight(_text_:22 in 5276) [ClassicSimilarity], result of:
              0.03551324 = score(doc=5276,freq=2.0), product of:
                0.18357785 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05242341 = queryNorm
                0.19345059 = fieldWeight in 5276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5276)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:14:37
  5. Li, J.; Wu, G.: Characteristics of reference transactions : challenges to librarian's roles (1998) 0.01
    0.008742458 = product of:
      0.017484916 = sum of:
        0.017484916 = product of:
          0.034969833 = sum of:
            0.034969833 = weight(_text_:2 in 3374) [ClassicSimilarity], result of:
              0.034969833 = score(doc=3374,freq=4.0), product of:
                0.1294644 = queryWeight, product of:
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.05242341 = queryNorm
                0.27011156 = fieldWeight in 3374, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3374)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Reports results of a study to analyze the nature of reference services and reference desk transactions. 2 reference librarians, one from the University of South Alabama Biomedical Library and the other from the Shiffman Medical Library, Wayne State University, Michigan, recorded reference transactions while they staffed the reference desks at their respective institutions from May to October 1996. 2 types of data were collected: the types of tools or sources used to provide answers to reference queries, and the instruction provided from the reference desk on different types of applications.
  6. Zhang, C.; Zeng, D.; Li, J.; Wang, F.-Y.; Zuo, W.: Sentiment analysis of Chinese documents : from sentence to document level (2009) 0.01
    0.0074935355 = product of:
      0.014987071 = sum of:
        0.014987071 = product of:
          0.029974142 = sum of:
            0.029974142 = weight(_text_:2 in 3296) [ClassicSimilarity], result of:
              0.029974142 = score(doc=3296,freq=4.0), product of:
                0.1294644 = queryWeight, product of:
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.05242341 = queryNorm
                0.2315242 = fieldWeight in 3296, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3296)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    User-generated content on the Web has become an extremely valuable source for mining and analyzing user opinions on any topic. Recent years have seen an increasing body of work investigating methods to recognize favorable and unfavorable sentiments toward specific subjects from online text. However, most of these efforts focus on English and there have been very few studies on sentiment analysis of Chinese content. This paper aims to address the unique challenges posed by Chinese sentiment analysis. We propose a rule-based approach including two phases: (1) determining each sentence's sentiment based on word dependency, and (2) aggregating sentences to predict the document sentiment. We report the results of an experimental study comparing our approach with three machine learning-based approaches using two sets of Chinese articles. These results illustrate the effectiveness of our proposed method and its advantages against learning-based approaches.
    Date
    2. 2.2010 19:29:56
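    Note
    The abstract describes a two-phase rule-based pipeline: classify each sentence, then aggregate sentence labels into a document label. A hypothetical Python sketch of the aggregation phase only; the majority-vote rule and the labels are illustrative assumptions, not the authors' word-dependency rules:

      from collections import Counter

      def document_sentiment(sentence_labels):
          """sentence_labels: one of 'pos', 'neg', 'neutral' per sentence."""
          counts = Counter(sentence_labels)
          if counts["pos"] > counts["neg"]:
              return "pos"
          if counts["neg"] > counts["pos"]:
              return "neg"
          return "neutral"

      print(document_sentiment(["pos", "neutral", "pos", "neg"]))  # -> "pos"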
  7. Zhao, S.X.; Zhang, P.L.; Li, J.; Tan, A.M.; Ye, F.Y.: Abstracting the core subnet of weighted networks based on link strengths (2014) 0.01
    0.0074935355 = product of:
      0.014987071 = sum of:
        0.014987071 = product of:
          0.029974142 = sum of:
            0.029974142 = weight(_text_:2 in 1256) [ClassicSimilarity], result of:
              0.029974142 = score(doc=1256,freq=4.0), product of:
                0.1294644 = queryWeight, product of:
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.05242341 = queryNorm
                0.2315242 = fieldWeight in 1256, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1256)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Most measures of networks are based on the nodes, although links are also elementary units in networks and represent interesting social or physical connections. In this work we suggest an option for exploring networks, called the h-strength, with explicit focus on links and their strengths. The h-strength and its extensions can naturally simplify a complex network to a small and concise subnetwork (h-subnet) but retains the most important links with its core structure. Its applications in 2 typical information networks, the paper cocitation network of a topic (the h-index) and 5 scientific collaboration networks in the field of "water resources," suggest that h-strength and its extensions could be a useful choice for abstracting, simplifying, and visualizing a complex network. Moreover, we observe that the 2 informetric models, the Glänzel-Schubert model and the Hirsch model, roughly hold in the context of the h-strength for the collaboration networks.
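    Note
    The h-strength transfers the h-index pattern from nodes to links. A minimal Python sketch, assuming the h-index-style reading of the abstract (the largest h such that h links each have strength >= h); the example strengths are invented:

      def h_strength(link_strengths):
          h = 0
          for i, s in enumerate(sorted(link_strengths, reverse=True), start=1):
              if s >= i:
                  h = i            # i links so far each have strength >= i
              else:
                  break
          return h

      print(h_strength([9, 7, 6, 2, 1]))  # -> 3 (three links with strength >= 3)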
  8. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.01
    0.006244613 = product of:
      0.012489226 = sum of:
        0.012489226 = product of:
          0.024978451 = sum of:
            0.024978451 = weight(_text_:2 in 5816) [ClassicSimilarity], result of:
              0.024978451 = score(doc=5816,freq=4.0), product of:
                0.1294644 = queryWeight, product of:
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.05242341 = queryNorm
                0.19293682 = fieldWeight in 5816, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5816)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Millions of messages are produced on microblog platforms every day, leading to the pressing need for automatic identification of key points from the massive texts. To absorb salient content from the vast bulk of microblog posts, this article focuses on the task of microblog keyphrase extraction. In previous work, most efforts treat messages as independent documents and might suffer from the data sparsity problem exhibited in short and informal microblog posts. On the contrary, we propose to enrich contexts via exploiting conversations initialized by target posts and formed by their replies, which are generally centered around relevant topics to the target posts and therefore helpful for keyphrase identification. Concretely, we present a neural keyphrase extraction framework, which has 2 modules: a conversation context encoder and a keyphrase tagger. The conversation context encoder captures indicative representation from their conversation contexts and feeds the representation into the keyphrase tagger, and the keyphrase tagger extracts salient words from target posts. The 2 modules were trained jointly to optimize the conversation context encoding and keyphrase extraction processes. In the conversation context encoder, we leverage hierarchical structures to capture the word-level indicative representation and message-level indicative representation hierarchically. In both of the modules, we apply character-level representations, which enables the model to explore morphological features and deal with the out-of-vocabulary problem caused by the informal language style of microblog messages. Extensive comparison results on real-life data sets indicate that our model outperforms state-of-the-art models from previous studies.
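    Note
    The framework in the abstract couples two jointly trained modules: a conversation context encoder whose summary feeds a keyphrase tagger over the target post. A hypothetical PyTorch skeleton of that structure; layer choices and sizes are assumptions, and the paper's hierarchical and character-level components are omitted:

      import torch
      import torch.nn as nn

      class ContextEncoder(nn.Module):
          def __init__(self, vocab=1000, emb=32, hid=64):
              super().__init__()
              self.embed = nn.Embedding(vocab, emb)
              self.gru = nn.GRU(emb, hid, batch_first=True)

          def forward(self, reply_tokens):            # (batch, reply_len)
              _, h = self.gru(self.embed(reply_tokens))
              return h.squeeze(0)                     # (batch, hid) context summary

      class KeyphraseTagger(nn.Module):
          def __init__(self, vocab=1000, emb=32, hid=64, tags=2):
              super().__init__()
              self.embed = nn.Embedding(vocab, emb)
              self.gru = nn.GRU(emb + hid, hid, batch_first=True)
              self.out = nn.Linear(hid, tags)

          def forward(self, post_tokens, context):    # context: (batch, hid)
              x = self.embed(post_tokens)             # (batch, post_len, emb)
              ctx = context.unsqueeze(1).expand(-1, x.size(1), -1)
              h, _ = self.gru(torch.cat([x, ctx], dim=-1))
              return self.out(h)                      # per-token keyphrase logits

      encoder, tagger = ContextEncoder(), KeyphraseTagger()
      # Joint training: a single optimizer over both modules' parameters.
      optimizer = torch.optim.Adam(list(encoder.parameters()) + list(tagger.parameters()))
      logits = tagger(torch.randint(0, 1000, (4, 12)), encoder(torch.randint(0, 1000, (4, 30))))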
  9. Wu, S.; Li, J.; Zeng, X.; Bi, Y.: Adaptive data fusion methods in information retrieval (2014) 0.01
    0.0061818515 = product of:
      0.012363703 = sum of:
        0.012363703 = product of:
          0.024727406 = sum of:
            0.024727406 = weight(_text_:2 in 1500) [ClassicSimilarity], result of:
              0.024727406 = score(doc=1500,freq=2.0), product of:
                0.1294644 = queryWeight, product of:
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.05242341 = queryNorm
                0.19099772 = fieldWeight in 1500, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1500)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Data fusion is currently used extensively in information retrieval for various tasks. It has proved to be a useful technology because it is able to improve retrieval performance frequently. However, in almost all prior research in data fusion, static search environments have been used, and dynamic search environments have generally not been considered. In this article, we investigate adaptive data fusion methods that can change their behavior when the search environment changes. Three adaptive data fusion methods are proposed and investigated. To test these proposed methods properly, we generate a benchmark from a historic Text REtrieval Conference data set. Experiments with the benchmark show that 2 of the proposed methods are good and may potentially be used in practice.
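    Note
    Data fusion here means merging several retrieval systems' result lists into one ranking. A minimal Python sketch of a weighted linear-combination baseline; the weights and scores are invented, and the article's adaptive methods (which would adjust the weights as the search environment changes) are not reproduced:

      from collections import defaultdict

      def linear_fusion(system_scores, weights):
          """system_scores: {system: {doc_id: score}}; weights: {system: float}."""
          fused = defaultdict(float)
          for system, scores in system_scores.items():
              for doc_id, score in scores.items():
                  fused[doc_id] += weights[system] * score
          return sorted(fused.items(), key=lambda item: item[1], reverse=True)

      runs = {"bm25": {"d1": 0.9, "d2": 0.4}, "lm": {"d2": 0.8, "d3": 0.5}}
      print(linear_fusion(runs, {"bm25": 0.6, "lm": 0.4}))  # -> d2, d1, d3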
  10. Xie, Z.; Ouyang, Z.; Li, J.; Dong, E.: Modelling transition phenomena of scientific coauthorship networks (2018) 0.01
    0.0052987295 = product of:
      0.010597459 = sum of:
        0.010597459 = product of:
          0.021194918 = sum of:
            0.021194918 = weight(_text_:2 in 4043) [ClassicSimilarity], result of:
              0.021194918 = score(doc=4043,freq=2.0), product of:
                0.1294644 = queryWeight, product of:
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.05242341 = queryNorm
                0.16371232 = fieldWeight in 4043, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4695914 = idf(docFreq=10170, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4043)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.2, S.305-317