Search (14 results, page 1 of 1)

  • Filter: author_ss:"Liu, J."
  1. Zhou, D.; Lawless, S.; Wu, X.; Zhao, W.; Liu, J.: A study of user profile representation for personalized cross-language information retrieval (2016) 0.03
    0.033047434 = product of:
      0.04957115 = sum of:
        0.032667357 = weight(_text_:on in 3167) [ClassicSimilarity], result of:
          0.032667357 = score(doc=3167,freq=12.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.29761705 = fieldWeight in 3167, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3167)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 3167) [ClassicSimilarity], result of:
              0.03380758 = score(doc=3167,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 3167, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3167)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
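
    The breakdown above is standard Lucene ClassicSimilarity explain output, and the displayed score can be reproduced by hand from the statistics it lists. A minimal sketch in plain Python (no Lucene required; queryNorm depends on the full query and is taken as given from the tree):

      import math

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          """One weight(_text_:term ...) node of a ClassicSimilarity explain tree."""
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # Lucene's ClassicSimilarity idf
          tf = math.sqrt(freq)                               # tf(freq) = sqrt(freq)
          return (idf * query_norm) * (tf * idf * field_norm)  # queryWeight * fieldWeight

      # Statistics copied verbatim from the explain tree for doc 3167.
      on = term_score(freq=12.0, doc_freq=13325, max_docs=44218,
                      query_norm=0.04990557, field_norm=0.0390625)
      t22 = 0.5 * term_score(freq=2.0, doc_freq=3622, max_docs=44218,  # coord(1/2)
                             query_norm=0.04990557, field_norm=0.0390625)
      print((2.0 / 3.0) * (on + t22))  # coord(2/3) -> ~0.033047434, the score shown for result 1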
    
    Abstract
    Purpose - With the increasing amount of multilingual content on the World Wide Web, users often strive to access information provided in languages of which they are not native speakers. The purpose of this paper is to present a comprehensive study of user profile representation techniques and to investigate their use in personalized cross-language information retrieval (CLIR) systems by means of personalized query expansion. Design/methodology/approach - The user profiles consist of weighted terms computed using frequency-based methods such as tf-idf and BM25, as well as various latent semantic models trained on monolingual documents and cross-lingual comparable documents. The paper also proposes an automatic evaluation method for comparing user profile generation techniques and query expansion methods. Findings - Experimental results suggest that latent semantic-weighted user profile representations are superior to frequency-based methods and are particularly suitable for users with a sufficient amount of historical data. The study also confirms that user profiles represented by latent semantic models trained at the cross-lingual level outperform models trained at the monolingual level. Originality/value - Previous studies of personalized information retrieval systems have primarily investigated user profiles and personalization strategies at the monolingual level, and the effect of using such monolingual profiles for personalized CLIR had remained unclear. The current study fills this gap with a comprehensive study of user profile representation for personalized CLIR and a novel personalized CLIR evaluation methodology that ensures repeatable and controlled experiments.
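
    The frequency-based profile variant described in this abstract is easy to prototype. A minimal sketch, assuming tf-idf weighting over a user's previously viewed documents and top-term query expansion (scikit-learn and all names here are illustrative choices, not the authors' implementation):

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer

      def build_profile(viewed_docs, top_k=10):
          # Weighted-term profile: mean tf-idf weight of each term across the
          # user's history (BM25 or latent semantic weights could be substituted).
          vec = TfidfVectorizer(stop_words="english")
          weights = np.asarray(vec.fit_transform(viewed_docs).mean(axis=0)).ravel()
          terms = vec.get_feature_names_out()
          return {terms[i]: weights[i] for i in weights.argsort()[::-1][:top_k]}

      def expand_query(query, profile, n_terms=3):
          # Personalized query expansion: append top profile terms absent from the query.
          extras = [t for t in profile if t not in query.split()][:n_terms]
          return query + " " + " ".join(extras)

      profile = build_profile(["first document the user viewed ...",
                               "second document the user viewed ..."])
      print(expand_query("cross language retrieval", profile))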
    Date
    20. 1.2015 18:30:22
  2. Jiang, Y.; Meng, R.; Huang, Y.; Lu, W.; Liu, J.: Generating keyphrases for readers : a controllable keyphrase generation framework (2023) 0.02
    0.023842867 = product of:
      0.0357643 = sum of:
        0.01886051 = weight(_text_:on in 1012) [ClassicSimilarity], result of:
          0.01886051 = score(doc=1012,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 1012, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1012)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 1012) [ClassicSimilarity], result of:
              0.03380758 = score(doc=1012,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 1012, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1012)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    With the wide application of keyphrases in many Information Retrieval (IR) and Natural Language Processing (NLP) tasks, automatic keyphrase prediction has become an active research topic. However, these statistically important phrases contribute increasingly little to the related tasks, because end-to-end learning enables models to learn the important semantic information of a text directly. Similarly, keyphrases are of little help to readers trying to quickly grasp a paper's main idea, because the relationship between a keyphrase and the paper is not explicit to them. We therefore propose to generate keyphrases with specific functions for readers, bridging the semantic gap between readers and information producers, and we verify the effectiveness of the keyphrase function in assisting users' comprehension with a user experiment. A controllable keyphrase generation framework (CKPG) that uses the keyphrase function as a control code to generate categorized keyphrases is proposed and implemented on top of Transformer, BART, and T5. For the Computer Science domain, the Macro-avgs of the three reported metrics on the Paper with Code dataset reach up to 0.680, 0.535, and 0.558, respectively. Our experimental results indicate the effectiveness of the CKPG models.
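
    Conditioning generation on a control code, as CKPG does, can be expressed compactly with an encoder-decoder model. A hedged sketch with Hugging Face Transformers (the prompt format, the category names, and the use of t5-small are assumptions for illustration; a model fine-tuned on labeled keyphrase data is presupposed):

      from transformers import T5ForConditionalGeneration, T5Tokenizer

      tok = T5Tokenizer.from_pretrained("t5-small")
      model = T5ForConditionalGeneration.from_pretrained("t5-small")

      def generate_keyphrases(abstract, function_code):
          # The keyphrase function (e.g. "method" vs. "application"; hypothetical
          # category names) is prepended as a control code to steer generation.
          prompt = f"keyphrase function: {function_code} | abstract: {abstract}"
          ids = tok(prompt, return_tensors="pt", truncation=True).input_ids
          out = model.generate(ids, max_new_tokens=32, num_beams=4)
          return tok.decode(out[0], skip_special_tokens=True)

      print(generate_keyphrases("We propose a controllable keyphrase ...", "method"))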
    Date
    22. 6.2023 14:55:20
  3. Zhang, Y.; Liu, J.; Song, S.: The design and evaluation of a nudge-based interface to facilitate consumers' evaluation of online health information credibility (2023) 0.02
    0.020160122 = product of:
      0.030240182 = sum of:
        0.013336393 = weight(_text_:on in 993) [ClassicSimilarity], result of:
          0.013336393 = score(doc=993,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 993, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=993)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 993) [ClassicSimilarity], result of:
              0.03380758 = score(doc=993,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 993, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=993)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Evaluating the quality of online health information (OHI) is a major challenge for consumers. We designed PageGraph, an interface that displays quality indicators and their values for a webpage, based on credibility evaluation models, nudge theory, and existing empirical research on professionals' and consumers' evaluation of OHI quality. A qualitative evaluation of the interface with 16 participants revealed that PageGraph rendered the information and presentation nudges as intended. It gave participants easier access to quality indicators, encouraged fresh angles for assessing information credibility, provided an evaluation framework, and encouraged validation of initial judgments. We then conducted a quantitative evaluation of the interface with 60 participants using a between-subjects experimental design. The control group used a regular web browser and evaluated the credibility of 12 preselected webpages, whereas the experimental group evaluated the same webpages with the assistance of PageGraph. PageGraph did not significantly influence participants' evaluation results. This may be attributable to the insufficient salience and structure of the implemented nudges and to the webpage stimuli's lack of sensitivity to the intervention. Future directions for applying nudges to support OHI evaluation are discussed.
    Date
    22. 6.2023 18:18:34
  4. Zhang, X.; Li, Y.; Liu, J.; Zhang, Y.: Effects of interaction design in digital libraries on user interactions (2008) 0.01
    0.008890929 = product of:
      0.026672786 = sum of:
        0.026672786 = weight(_text_:on in 1898) [ClassicSimilarity], result of:
          0.026672786 = score(doc=1898,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 1898, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1898)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - This study aims to investigate the effects of different search and browse features in digital libraries (DLs) on task interactions, and to identify which features lead to poor user experience. Design/methodology/approach - Three operational DLs are used in this study: ACM, IEEE CS, and IEEE Xplore, which differ in the design of their search and browsing features. Two information-seeking tasks were constructed: one search task and one browsing task. An experiment was conducted in a usability laboratory, and data from 35 participants were collected on a set of measures of user interaction. Findings - The results demonstrate significant differences between the three DLs in many aspects of user interaction. For both search and browse designs, the features that lead to poor user interactions are identified. Research limitations/implications - User interactions are affected by specific design features in DLs, and some design features that lead to poor user performance should be improved. The study was limited mainly in the variety and number of tasks used. Originality/value - The study provides empirical evidence of the effects of interaction design features in DLs on user interactions and performance. The results contribute to our knowledge of DL design in general and of the three operational DLs in particular.
  5. Zhang, X.; Liu, J.; Cole, M.; Belkin, N.: Predicting users' domain knowledge in information retrieval using multiple regression analysis of search behaviors (2015) 0.01
    0.008890929 = product of:
      0.026672786 = sum of:
        0.026672786 = weight(_text_:on in 1822) [ClassicSimilarity], result of:
          0.026672786 = score(doc=1822,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 1822, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1822)
      0.33333334 = coord(1/3)
    
    Abstract
    User domain knowledge affects search behaviors and search success. Predicting a user's knowledge level from implicit evidence such as search behaviors could allow an adaptive information retrieval system to better personalize its interaction with users. This study examines whether user domain knowledge can be predicted from search behaviors by applying regression modeling. We identify the behavioral features that contribute most to a successful prediction model. A user experiment was conducted with 40 participants searching on task topics in the domain of genomics. Participants' domain knowledge was assessed based on their familiarity with and expertise in the search topics and their knowledge of the MeSH (Medical Subject Headings) terms in the categories corresponding to those topics. Search behaviors were captured by logging software and include querying behaviors, document selection behaviors, and general task interaction behaviors. Multiple regression analysis was run on the behavioral data using different variable selection methods. Four successful predictive models were identified, each involving a slightly different set of behavioral variables. The models were compared on model fit, model significance, and the contributions of individual predictors, and each model was validated using the split-sampling method. The final model highlights three behavioral variables as predictors of domain knowledge level: the number of documents saved, the average query length, and the average rank position of the documents opened. The results are discussed, study limitations are addressed, and future research directions are suggested.
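
    The final model reported here is a small linear regression, which makes the split-sample validation easy to sketch. A minimal sketch assuming a per-participant feature table with the three named predictors (the synthetic data and variable names are illustrative, not the study's data):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # Synthetic stand-in for the 40 participants' behavioral features:
      # documents saved, average query length, average rank of opened documents.
      X = rng.normal(size=(40, 3))
      y = X @ np.array([0.6, 0.3, -0.4]) + rng.normal(scale=0.5, size=40)  # DK score

      # Split-sample validation: fit on one half, check fit on the held-out half.
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
      model = LinearRegression().fit(X_tr, y_tr)
      print(dict(zip(["docs_saved", "avg_query_len", "avg_rank_opened"],
                     model.coef_.round(3))))
      print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))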
  6. Liu, J.; Liu, C.; Belkin, N.J.: Predicting information searchers' topic knowledge at different search stages (2016) 0.01
    0.008890929 = product of:
      0.026672786 = sum of:
        0.026672786 = weight(_text_:on in 3147) [ClassicSimilarity], result of:
          0.026672786 = score(doc=3147,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 3147, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3147)
      0.33333334 = coord(1/3)
    
    Abstract
    As a significant contextual factor in information search, topic knowledge has been gaining increased research attention. We report on a study of the relationship between information searchers' topic knowledge and their search behaviors, and on an attempt to predict searchers' topic knowledge from their behaviors during the search. Data were collected in a controlled laboratory experiment with 32 undergraduate journalism students, each searching on 4 tasks of different types. In general, behavioral variables did not differ significantly between users with high and low levels of topic knowledge, with the exception of mean first dwell time on search result pages. Several models were built to predict topic knowledge from behavioral variables calculated at 3 stages of the search episode: the first query round, the middle point of the search, and the end point. A model using search behaviors observed in the first query round already yielded satisfactory predictions. The results suggest that early-session search behaviors can be used to predict users' topic knowledge levels, allowing search personalization for users with different levels of topic knowledge, and in particular assistance for users with low topic knowledge.
  7. Liu, J.; Zhao, J.: More than plain text : censorship deletion in the Chinese social media (2021) 0.01
    0.008801571 = product of:
      0.026404712 = sum of:
        0.026404712 = weight(_text_:on in 437) [ClassicSimilarity], result of:
          0.026404712 = score(doc=437,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24056101 = fieldWeight in 437, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=437)
      0.33333334 = coord(1/3)
    
    Abstract
    Although the Internet allows people to circulate messages in different media, most censorship studies discuss the removal of text content only. This article presents a systematic study of the censorship of both plain-text and multimedia content on the Chinese Internet. By analyzing both censored and surviving posts on the Chinese social media platform Weibo during the 2014 Hong Kong Umbrella Movement, we find that multimedia posts suffered more intensive censorship deletion than plain-text posts, with censorship programs oriented more toward multimedia content such as images than toward the text content of multimedia posts. Our analysis has significant implications for censorship studies, information control, and politics in the "post-text" era.
  8. Liu, J.; Belkin, N.J.: Personalizing information retrieval for multi-session tasks : examining the roles of task stage, task type, and topic knowledge on the interpretation of dwell time as an indicator of document usefulness (2015) 0.01
    0.0076997704 = product of:
      0.02309931 = sum of:
        0.02309931 = weight(_text_:on in 1608) [ClassicSimilarity], result of:
          0.02309931 = score(doc=1608,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.21044704 = fieldWeight in 1608, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1608)
      0.33333334 = coord(1/3)
    
    Abstract
    Personalization of information retrieval tailors search toward individual users, meeting their particular information needs by taking into account information about users and their contexts, often through implicit sources of evidence such as user behaviors. This study examines users' dwelling behavior on documents together with several contextual factors - the stage of users' work tasks, the task type, and users' knowledge of the task topics - to explore whether taking contextual factors into account could help infer document usefulness from dwell time. A controlled laboratory experiment was conducted with 24 participants, each coming 3 times to work on 3 subtasks of a general work task. The results show that task stage could help interpret certain types of dwell time as reliable indicators of document usefulness in certain task types, as could topic knowledge, with the latter playing a more significant role when both were available. This study contributes to a better understanding of how dwell time can be used as implicit evidence of document usefulness, and of how contextual factors can help interpret dwell time as a usefulness indicator. These findings have both theoretical and practical implications for the use of behaviors and contextual factors in the development of personalization systems.
  9. Jiang, X.; Liu, J.: Extracting the evolutionary backbone of scientific domains : the semantic main path network analysis approach based on citation context analysis (2023) 0.01
    0.0076997704 = product of:
      0.02309931 = sum of:
        0.02309931 = weight(_text_:on in 948) [ClassicSimilarity], result of:
          0.02309931 = score(doc=948,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.21044704 = fieldWeight in 948, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=948)
      0.33333334 = coord(1/3)
    
    Abstract
    Main path analysis is a popular method for extracting the scientific backbone from the citation network of a research domain. Existing approaches ignore the semantic relationships between citing and cited publications, which harms both the coherence of the extracted main paths and their coverage of significant studies. This paper advocates a semantic main path network analysis approach, based on citation function analysis, to alleviate these issues. A wide variety of SciBERT-based deep learning models were designed to identify citation functions. Semantic citation networks were built either by including important citations (e.g., extension, motivation, usage, and similarity) or by excluding incidental citations such as background and future work. The semantic main path network was then built by merging the top-K main paths extracted from various time slices of the semantic citation network. In addition, a three-way framework is proposed for the quantitative evaluation of main path analysis results. Both qualitative and quantitative analyses on three research areas of computational linguistics demonstrate that, compared to semantics-agnostic counterparts, different types of semantic main path networks provide complementary views of scientific knowledge flows. Combining them yields a more precise and comprehensive picture of domain evolution and uncovers more coherent development pathways between scientific ideas.
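
    Main path analysis itself is a compact graph algorithm; the semantic variant described above amounts to running it on a citation network restricted to particular citation functions. A sketch of the classic search path count (SPC) weighting and a local forward traversal with networkx (illustrative, not the authors' code; edges point from cited to citing work):

      import networkx as nx

      def spc_weights(G):
          # Search path count: for edge (u, v), the number of source-to-sink
          # paths through it = (#paths reaching u) * (#paths leaving v).
          order = list(nx.topological_sort(G))
          fwd, bwd = {}, {}
          for v in order:
              preds = list(G.predecessors(v))
              fwd[v] = 1 if not preds else sum(fwd[u] for u in preds)
          for v in reversed(order):
              succs = list(G.successors(v))
              bwd[v] = 1 if not succs else sum(bwd[w] for w in succs)
          return {(u, v): fwd[u] * bwd[v] for u, v in G.edges}

      def main_path(G):
          # Local forward search: start from the source with the heaviest
          # outgoing edge, then always follow the highest-SPC edge to a sink.
          w = spc_weights(G)
          v = max((u for u in G if G.in_degree(u) == 0),
                  key=lambda u: max(w[(u, x)] for x in G.successors(u)))
          path = [v]
          while G.out_degree(v) > 0:
              v = max(G.successors(v), key=lambda x: w[(path[-1], x)])
              path.append(v)
          return path

      # A semantic citation network would first drop incidental edges, e.g.
      # G = G.edge_subgraph(e for e in G.edges if function[e] in {"extension", "usage"}),
      # where `function` is a hypothetical edge-to-citation-function mapping.
      G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])
      print(main_path(G))  # ['A', 'B', 'C', 'D'] on this toy DAG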
  10. Liu, J.; Li, Y.; Hastings, S.K.: Simplified scheme of search task difficulty reasons (2019) 0.01
    0.0075442037 = product of:
      0.02263261 = sum of:
        0.02263261 = weight(_text_:on in 5224) [ClassicSimilarity], result of:
          0.02263261 = score(doc=5224,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 5224, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5224)
      0.33333334 = coord(1/3)
    
    Abstract
    This article reports on a study aimed at simplifying a scheme of search task difficulty reasons. Liu, Kim, and Creel (2015) (denoted LKC15) developed a 21-item scheme of search task difficulty reasons using a controlled laboratory experiment. The current study simplified the scheme through another experiment that followed the same design as LKC15 and involved 32 university students, adding one questionnaire item that presented the 21 difficulty reasons in multiple-choice format. By comparing the current study with LKC15, a concept of primary top difficulty reasons was proposed, which reasonably simplifies the 21-item scheme to a list of 8 top reasons. This limited number of reasons is more manageable and makes it feasible for search systems to predict task difficulty reasons from observable user behaviors, laying the groundwork for systems that improve user satisfaction based on predicted search difficulty reasons.
  11. Liu, J.: CIP in China : the development and status quo (1996) 0.01
    0.0067615155 = product of:
      0.020284547 = sum of:
        0.020284547 = product of:
          0.040569093 = sum of:
            0.040569093 = weight(_text_:22 in 5528) [ClassicSimilarity], result of:
              0.040569093 = score(doc=5528,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.23214069 = fieldWeight in 5528, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5528)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.1, p.69-76
  12. Liu, J.; Liu, C.: Personalization in text information retrieval : a survey (2020) 0.01
    0.0053345575 = product of:
      0.016003672 = sum of:
        0.016003672 = weight(_text_:on in 5761) [ClassicSimilarity], result of:
          0.016003672 = score(doc=5761,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 5761, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5761)
      0.33333334 = coord(1/3)
    
    Abstract
    Personalization of information retrieval (PIR) aims to tailor search toward individual users and user groups by taking into account additional information about users beyond their queries. Over the past two decades, PIR has received extensive attention in both academia and industry. This article surveys the literature on personalization in text retrieval, following a framework of aspects or factors that can be used for personalization. The framework covers additional information about users that can be obtained explicitly, by asking users for their preferences, or implicitly, by inference from users' search behaviors. Users' characteristics and contextual factors such as task, time, and location can also be helpful for personalization. The article further addresses issues including when to personalize, the evaluation of PIR, privacy, and usability. Based on this extensive review, challenges are discussed and directions for future work are suggested.
  13. Liu, J.; Zhang, X.: The role of domain knowledge in document selection from search results (2019) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 5410) [ClassicSimilarity], result of:
          0.013336393 = score(doc=5410,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 5410, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5410)
      0.33333334 = coord(1/3)
    
    Abstract
    It is a common scenario that people who are unfamiliar with their search topics use a simple keyword search, which leads to a large number of search results spread over multiple pages. This makes it difficult for users to pick out the relevant documents, especially given their limited knowledge of the topics. To explore how systems can better help users find relevant documents among search results, the current research analyzed the document selection behaviors of users with different levels of domain knowledge (DK). Data were collected in a laboratory study with 35 participants, each searching on four tasks in the genomics domain. The results show that users with high and low DK selected different sets of documents to view; those high in DK read more documents and gave higher relevance ratings to the viewed documents than those low in DK. Users low in DK tended to select documents ranked toward the top of the search result lists, whereas those high in DK also selected documents ranked further down the lists. The findings can inform the design of search systems that personalize search results for users with different levels of DK.
  14. Liu, J.; Zhou, Z.; Gao, M.; Tang, J.; Fan, W.: Aspect sentiment mining of short bullet screen comments from online TV series (2023) 0.00
    0.0044454644 = product of:
      0.013336393 = sum of:
        0.013336393 = weight(_text_:on in 1018) [ClassicSimilarity], result of:
          0.013336393 = score(doc=1018,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 1018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1018)
      0.33333334 = coord(1/3)
    
    Abstract
    Bullet screen comments (BSCs) are user-generated short comments that appear as real-time overlays on many video platforms, expressing the audience's opinions and emotions about different aspects of the ongoing video. Unlike traditional long comments posted after a show, BSCs are often incomplete, ambiguous without context, and correlated over time. Current studies on sentiment analysis of BSCs rarely address these challenges, motivating us to develop an aspect-level sentiment analysis framework. Our framework, BSCNET, is a deep neural classifier built on a pre-trained language encoder and designed to enhance semantic understanding. A novel neighbor context construction method is proposed to uncover the latent contextual correlation among BSCs over time, and semi-supervised learning is incorporated to reduce labeling costs. The framework increases F1 (Macro) and accuracy by up to 10% and 10.2%, respectively. Additionally, we developed two novel downstream tasks. The first is the identification of noisy BSCs, which reaches an F1 (Macro) of 90.1% and an accuracy of 98.3% through fine-tuning of BSCNET. The second is the prediction of future episode popularity, where incorporating sentiment features reduces the MAPE by 11%-19.0%. Overall, this study provides a methodological reference for aspect-level sentiment analysis of BSCs and highlights its potential for optimizing the viewing experience and forthcoming content.
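
    The neighbor context construction can be illustrated independently of the full BSCNET model. A minimal sketch, assuming BSCs carry video timestamps and that neighbors within a fixed time window supply the missing context (the window size and pairing format are illustrative assumptions):

      def neighbor_context(bscs, idx, window=2.0):
          # Neighbors: other comments whose video timestamps fall within
          # +/- `window` seconds of the target comment.
          t = bscs[idx]["time"]
          return [c["text"] for j, c in enumerate(bscs)
                  if j != idx and abs(c["time"] - t) <= window]

      def encoder_input(bscs, idx, window=2.0):
          # BERT-style sentence pair: target comment [SEP] concatenated neighbors,
          # ready for a pre-trained language encoder.
          return bscs[idx]["text"] + " [SEP] " + " ".join(neighbor_context(bscs, idx, window))

      bscs = [{"time": 12.0, "text": "the lead's acting is great"},
              {"time": 12.8, "text": "agreed, so natural"},
              {"time": 30.5, "text": "this plot twist again ..."}]
      print(encoder_input(bscs, 0))  # pairs the target with its 12.8s neighbor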