Search (6 results, page 1 of 1)

  • author_ss:"Liu, J."
  1. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.01
    0.010532449 = product of:
      0.021064898 = sum of:
        0.009444992 = product of:
          0.03777997 = sum of:
            0.03777997 = weight(_text_:learning in 1012) [ClassicSimilarity], result of:
              0.03777997 = score(doc=1012,freq=2.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.24665193 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.25 = coord(1/4)
        0.011619906 = product of:
          0.023239812 = sum of:
            0.023239812 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.023239812 = score(doc=1012,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
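    The nested breakdown above is Lucene's ClassicSimilarity explain output for record 1: each weight(...) node is a TF-IDF term score, and the coord() factors scale for how many query clauses matched. As a minimal sketch (assuming the standard ClassicSimilarity definitions tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, and fieldWeight = tf * idf * fieldNorm), the Python fragment below recomputes the 0.010532449 total shown above.

    import math

    def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # One weight(...) node: queryWeight * fieldWeight under ClassicSimilarity.
        tf = math.sqrt(freq)                              # 1.4142135 for freq = 2.0
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # 4.464877 for docFreq = 1382
        query_weight = idf * query_norm                   # e.g. 0.15317118
        field_weight = tf * idf * field_norm              # e.g. 0.24665193
        return query_weight * field_weight                # e.g. 0.03777997

    query_norm = 0.0343058
    w_learning = classic_term_score(2.0, 1382, 44218, query_norm, 0.0390625)
    w_22 = classic_term_score(2.0, 3622, 44218, query_norm, 0.0390625)

    # coord() factors taken from the explain tree: 1/4 and 1/2 inside, 2/4 outside.
    total = (w_learning * 0.25 + w_22 * 0.5) * 0.5
    print(total)   # ~0.010532..., matching the total shown above up to rounding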
    
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has emerged as an active research topic. However, these statistically important phrases contribute increasingly less to the related tasks, because end-to-end learning enables models to learn the important semantic information of a text directly. Similarly, keyphrases are of little help to readers trying to quickly grasp a paper's main idea, because the relationship between a keyphrase and the paper is not explicit to them. Therefore, we propose to generate keyphrases with specific functions for readers, to bridge the semantic gap between them and the information producers, and we verify the effectiveness of the keyphrase function in assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented on top of Transformer, BART, and T5, respectively. For the Computer Science domain, the Macro-avg scores on the Paper with Code dataset reach up to 0.680, 0.535, and 0.558. Our experimental results indicate the effectiveness of the CKPG models.
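    The abstract describes conditioning a sequence-to-sequence model on a keyphrase-function control code. The record does not give the paper's actual control codes, prompt format, or checkpoints, so the sketch below only illustrates the general pattern with Hugging Face BART; the token names (e.g. "<method>") are hypothetical placeholders, and the model would still need fine-tuning on (document, control code) -> keyphrase pairs before the outputs are meaningful.

    # Illustrative sketch only: CKPG's real control codes, prompt format, and
    # fine-tuned checkpoints are not specified in this record; the tokens below
    # are hypothetical placeholders.
    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    # Register the hypothetical control codes as single tokens.
    tokenizer.add_tokens(["<task>", "<method>", "<dataset>", "<metric>"])
    model.resize_token_embeddings(len(tokenizer))

    def generate_keyphrases(document, control_code):
        # Prepend the control code so the decoder is steered toward one keyphrase category.
        inputs = tokenizer(control_code + " " + document, return_tensors="pt",
                           truncation=True, max_length=512)
        output_ids = model.generate(**inputs, num_beams=4, max_length=32)
        return tokenizer.decode(output_ids[0], skip_special_tokens=True)

    # After fine-tuning on (document, control code) -> keyphrase pairs:
    # generate_keyphrases(abstract_text, "<method>")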
    Date
    22. 6.2023 14:55:20
  2. Liu, J.: CIP in China : the development and status quo (1996) 0.00
    0.0034859716 = product of:
      0.013943886 = sum of:
        0.013943886 = product of:
          0.027887773 = sum of:
            0.027887773 = weight(_text_:22 in 5528) [ClassicSimilarity], result of:
              0.027887773 = score(doc=5528,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.23214069 = fieldWeight in 5528, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5528)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.1, pp.69-76
  3. Zhou, D.; Lawless, S.; Wu, X.; Zhao, W.; Liu, J.: A study of user profile representation for personalized cross-language information retrieval (2016) 0.00
    0.0029049765 = product of:
      0.011619906 = sum of:
        0.011619906 = product of:
          0.023239812 = sum of:
            0.023239812 = weight(_text_:22 in 3167) [ClassicSimilarity], result of:
              0.023239812 = score(doc=3167,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.19345059 = fieldWeight in 3167, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3167)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20. 1.2015 18:30:22
  4. Zhang, Y.; Liu, J.; Song, S.: The design and evaluation of a nudge-based interface to facilitate consumers' evaluation of online health information credibility (2023) 0.00
    0.0029049765 = product of:
      0.011619906 = sum of:
        0.011619906 = product of:
          0.023239812 = sum of:
            0.023239812 = weight(_text_:22 in 993) [ClassicSimilarity], result of:
              0.023239812 = score(doc=993,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.19345059 = fieldWeight in 993, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=993)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 6.2023 18:18:34
  5. Jiang, X.; Liu, J.: Extracting the evolutionary backbone of scientific domains : the semantic main path network analysis approach based on citation context analysis (2023) 0.00
    0.002361248 = product of:
      0.009444992 = sum of:
        0.009444992 = product of:
          0.03777997 = sum of:
            0.03777997 = weight(_text_:learning in 948) [ClassicSimilarity], result of:
              0.03777997 = score(doc=948,freq=2.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.24665193 = fieldWeight in 948, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=948)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Main path analysis is a popular method for extracting the scientific backbone from the citation network of a research domain. Existing approaches ignore the semantic relationships between citing and cited publications, which harms both the coherence of the extracted main paths and the coverage of significant studies. This paper advocates a semantic main path network analysis approach, based on citation function analysis, to alleviate these issues. A wide variety of SciBERT-based deep learning models were designed for identifying citation functions. Semantic citation networks were built either by including important citations, for example extension, motivation, usage, and similarity, or by excluding incidental citations such as background and future work. The semantic main path network was then built by merging the top-K main paths extracted from various time slices of the semantic citation network. In addition, a three-way framework was proposed for the quantitative evaluation of main path analysis results. Both qualitative and quantitative analyses on three research areas of computational linguistics demonstrate that, compared to semantics-agnostic counterparts, different types of semantic main path networks provide complementary views of scientific knowledge flows. Combining them, we obtain a more precise and comprehensive picture of domain evolution and uncover more coherent development pathways between scientific ideas.
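    Main path analysis typically weights each arc of the citation DAG by its search path count (SPC, the number of source-to-sink paths passing through that arc) and then follows the heaviest arcs. The paper's semantic variant additionally filters arcs by citation function before extraction; the sketch below, using networkx on a toy graph with made-up paper IDs, only illustrates the generic SPC weighting and a greedy forward search, not the SciBERT citation-function classifier or the time-sliced merging.

    # Generic SPC-weighted main path extraction on a toy citation DAG.
    # Edges point from cited to citing publication (direction of knowledge flow);
    # the semantic variant would first drop arcs labelled as incidental citations.
    import networkx as nx

    def spc_weights(g):
        # SPC of arc (u, v) = (#paths from sources to u) * (#paths from v to sinks).
        order = list(nx.topological_sort(g))
        n_minus = {v: 1 if g.in_degree(v) == 0 else 0 for v in g}
        for v in order:
            for u in g.predecessors(v):
                n_minus[v] += n_minus[u]
        n_plus = {v: 1 if g.out_degree(v) == 0 else 0 for v in g}
        for v in reversed(order):
            for w in g.successors(v):
                n_plus[v] += n_plus[w]
        return {(u, v): n_minus[u] * n_plus[v] for u, v in g.edges}

    def greedy_main_path(g):
        # Local forward search: start at the heaviest source arc, then always
        # follow the heaviest outgoing arc until a sink is reached.
        spc = spc_weights(g)
        sources = [v for v in g if g.in_degree(v) == 0]
        u, v = max(((s, t) for s in sources for t in g.successors(s)), key=lambda e: spc[e])
        path = [u, v]
        while g.out_degree(path[-1]) > 0:
            path.append(max(g.successors(path[-1]), key=lambda w: spc[(path[-1], w)]))
        return path

    g = nx.DiGraph([("p1", "p2"), ("p1", "p3"), ("p2", "p4"), ("p3", "p4"), ("p4", "p5")])
    print(greedy_main_path(g))   # e.g. ['p1', 'p2', 'p4', 'p5']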
  6. Liu, J.; Zhou, Z.; Gao, M.; Tang, J.; Fan, W.: Aspect sentiment mining of short bullet screen comments from online TV series (2023) 0.00
    0.002361248 = product of:
      0.009444992 = sum of:
        0.009444992 = product of:
          0.03777997 = sum of:
            0.03777997 = weight(_text_:learning in 1018) [ClassicSimilarity], result of:
              0.03777997 = score(doc=1018,freq=2.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.24665193 = fieldWeight in 1018, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1018)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Bullet screen comments (BSCs) are user-generated short comments that appear as real-time overlays on many video platforms, expressing the audience's opinions and emotions about different aspects of the ongoing video. Unlike traditional long comments posted after a show, BSCs are often incomplete, ambiguous in context, and correlated over time. Current studies in sentiment analysis of BSCs rarely address these challenges, which motivated us to develop an aspect-level sentiment analysis framework. Our framework, BSCNET, is a pre-trained language encoder-based deep neural classifier designed to enhance semantic understanding. A novel neighbor context construction method is proposed to uncover the latent contextual correlation among BSCs over time, and we also incorporate semi-supervised learning to reduce labeling costs. The framework increases F1 (Macro) and accuracy by up to 10% and 10.2%, respectively. Additionally, we develop two novel downstream tasks. The first is noisy BSC identification, which reaches an F1 (Macro) of 90.1% and an accuracy of 98.3% by fine-tuning BSCNET. The second is the prediction of future episode popularity, where the MAPE is reduced by 11%-19.0% when sentiment features are incorporated. Overall, this study provides a methodological reference for aspect-level sentiment analysis of BSCs and highlights its potential for optimizing the viewing experience and forthcoming content.
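    The two ingredients named in the abstract are (a) pairing each short comment with its temporal neighbors as context and (b) classifying the pair with a pre-trained language encoder. The sketch below illustrates that pattern with a generic Hugging Face sequence classifier; the model name, time window, and label count are assumptions for illustration, not BSCNET's actual configuration, and the classifier would need fine-tuning (and, per the abstract, semi-supervised training) before its predictions are meaningful.

    # Neighbor-context sketch: pair each bullet screen comment with comments
    # posted close to it in video time, then classify the pair with a pre-trained
    # encoder. Model name, window size, and label count are illustrative assumptions.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL = "bert-base-chinese"   # placeholder encoder, not BSCNET's checkpoint
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

    def neighbor_context(comments, idx, window_seconds=5.0):
        # Concatenate comments whose timestamp lies within +/- window of the target.
        t0 = comments[idx]["time"]
        return " ".join(c["text"] for i, c in enumerate(comments)
                        if i != idx and abs(c["time"] - t0) <= window_seconds)

    def classify(comments, idx):
        # Encode (target comment, neighbor context) as a sentence pair and predict a label id.
        inputs = tokenizer(comments[idx]["text"], neighbor_context(comments, idx),
                           return_tensors="pt", truncation=True, max_length=128)
        with torch.no_grad():
            logits = model(**inputs).logits
        return int(logits.argmax(dim=-1))

    # comments = [{"time": 12.3, "text": "..."}, ...]; classify(comments, 0)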