Search (6 results, page 1 of 1)

  • author_ss:"Zhou, L."
  1. Narock, T.; Zhou, L.; Yoon, V.: Semantic similarity of ontology instances using polarity mining (2013) 0.01
    0.011943702 = product of:
      0.055737276 = sum of:
        0.021239832 = weight(_text_:web in 620) [ClassicSimilarity], result of:
          0.021239832 = score(doc=620,freq=2.0), product of:
            0.098177016 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.030083254 = queryNorm
            0.21634221 = fieldWeight in 620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=620)
        0.008691342 = weight(_text_:information in 620) [ClassicSimilarity], result of:
          0.008691342 = score(doc=620,freq=4.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.16457605 = fieldWeight in 620, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=620)
        0.025806103 = weight(_text_:retrieval in 620) [ClassicSimilarity], result of:
          0.025806103 = score(doc=620,freq=4.0), product of:
            0.09099928 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030083254 = queryNorm
            0.2835858 = fieldWeight in 620, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=620)
      0.21428572 = coord(3/14)
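    The indented breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain tree: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the sum is scaled by coord (matched clauses / total clauses). A minimal Python sketch, assuming the standard ClassicSimilarity definitions idf = 1 + ln(maxDocs / (docFreq + 1)) and tf = sqrt(freq), reproduces the reported numbers for document 620:

    ```python
    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        # score = queryWeight * fieldWeight
        query_weight = idf(doc_freq, max_docs) * query_norm
        field_weight = math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm
        return query_weight * field_weight

    QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.030083254, 0.046875, 44218

    # The three matching clauses for doc 620: "web", "information", "retrieval"
    s_web  = term_score(2.0, 4597,  MAX_DOCS, QUERY_NORM, FIELD_NORM)
    s_info = term_score(4.0, 20772, MAX_DOCS, QUERY_NORM, FIELD_NORM)
    s_retr = term_score(4.0, 5836,  MAX_DOCS, QUERY_NORM, FIELD_NORM)

    # coord(3/14): 3 of the 14 query clauses matched this document
    total = (3 / 14) * (s_web + s_info + s_retr)  # ~0.0119437
    ```

    The same arithmetic applies to every score tree on this page; only freq, docFreq, and fieldNorm vary per document and field.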
    
    Abstract
    Semantic similarity is vital to many areas, such as information retrieval. Various methods have been proposed with a focus on comparing unstructured text documents. Several of these have been enhanced with ontology; however, they have not been applied to ontology instances. With the growth in ontology instance data published online through, for example, Linked Open Data, there is an increasing need to apply semantic similarity to ontology instances. Drawing on ontology-supported polarity mining (OSPM), we propose an algorithm that enhances the computation of semantic similarity with polarity mining techniques. The algorithm is evaluated with online customer review data. The experimental results show that the proposed algorithm outperforms the baseline algorithm in multiple settings.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.2, S.416-427
    Theme
    Semantic Web
    Semantic environment in indexing and retrieval
  2. Tao, J.; Zhou, L.; Hickey, K.: Making sense of the black-boxes : toward interpretable text classification using deep learning models (2023) 0.01
    0.0053924047 = product of:
      0.03774683 = sum of:
        0.03262541 = weight(_text_:wide in 990) [ClassicSimilarity], result of:
          0.03262541 = score(doc=990,freq=2.0), product of:
            0.13329163 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.030083254 = queryNorm
            0.24476713 = fieldWeight in 990, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=990)
        0.005121422 = weight(_text_:information in 990) [ClassicSimilarity], result of:
          0.005121422 = score(doc=990,freq=2.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.09697737 = fieldWeight in 990, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=990)
      0.14285715 = coord(2/14)
    
    Abstract
    Text classification is a common task in data science. Despite the superior performance of deep learning-based models in various text classification tasks, their black-box nature poses significant challenges for wide adoption. The knowledge-to-action framework emphasizes several principles concerning the application and use of knowledge, such as ease-of-use, customization, and feedback. With the guidance of the above principles and the properties of interpretable machine learning, we identify the design requirements for and propose an interpretable deep learning (IDeL) based framework for text classification models. IDeL comprises three main components: feature penetration, instance aggregation, and feature perturbation. We evaluate our implementation of the framework with two distinct case studies: fake news detection and social question categorization. The experimental results provide evidence for the efficacy of IDeL components in enhancing the interpretability of text classification models. Moreover, the findings are generalizable across binary and multi-label, multi-class classification problems. The proposed IDeL framework introduces a unique iField perspective for building trusted models in data science by improving the transparency of and access to advanced black-box models.
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.6, S.685-700
  3. Zhou, L.; Zhang, D.: NLPIR: a theoretical framework for applying Natural Language Processing to information retrieval (2003) 0.01
    0.0052072546 = product of:
      0.03645078 = sum of:
        0.010644676 = weight(_text_:information in 5148) [ClassicSimilarity], result of:
          0.010644676 = score(doc=5148,freq=6.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.20156369 = fieldWeight in 5148, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5148)
        0.025806103 = weight(_text_:retrieval in 5148) [ClassicSimilarity], result of:
          0.025806103 = score(doc=5148,freq=4.0), product of:
            0.09099928 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.030083254 = queryNorm
            0.2835858 = fieldWeight in 5148, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5148)
      0.14285715 = coord(2/14)
    
    Abstract
    Zhou and Zhang believe that if the potential of natural language processing (NLP) in information retrieval is to be realized, a framework for guiding the effort should be in place. They provide a graphic model that identifies different levels of NLP effort during the query-document matching process. A direct matching approach uses little NLP; an expansion approach with thesauri, little more; an extraction approach will often use a variety of NLP techniques as well as statistical methods. A transformation approach, which creates intermediate representations of documents and queries, is a step higher in NLP usage, and a uniform approach, which relies on a body of knowledge beyond that of the documents and queries to provide inference and sense-making prior to matching, would require a maximum NLP effort.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.2, S.115-123
  4. Verma, N.; Fleischmann, K.R.; Zhou, L.; Xie, B.; Lee, M.K.; Rich, K.; Shiroma, K.; Jia, C.; Zimmerman, T.: Trust in COVID-19 public health information (2022) 0.00
    0.0010346835 = product of:
      0.014485569 = sum of:
        0.014485569 = weight(_text_:information in 771) [ClassicSimilarity], result of:
          0.014485569 = score(doc=771,freq=16.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.27429342 = fieldWeight in 771, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=771)
      0.071428575 = coord(1/14)
    
    Abstract
    Understanding the factors that influence trust in public health information is critical for designing successful public health campaigns during pandemics such as COVID-19. We present findings from a cross-sectional survey of 454 US adults (243 older, 65+, and 211 younger, 18-64) who responded to questionnaires on human values, trust in COVID-19 information sources, attention to information quality, self-efficacy, and factual knowledge about COVID-19. Path analysis showed that trust in direct personal contacts (B = 0.071, p = .04) and attention to information quality (B = 0.251, p < .001) were positively related to self-efficacy for coping with COVID-19. The human value of self-transcendence, which emphasizes valuing others as equals and being concerned with their welfare, had significant positive indirect effects on self-efficacy in coping with COVID-19 (mediated by attention to information quality; effect = 0.049, 95% CI 0.001-0.104) and factual knowledge about COVID-19 (also mediated by attention to information quality; effect = 0.037, 95% CI 0.003-0.089). Our path model offers guidance for fine-tuning strategies for effective public health messaging and serves as a basis for further research to better understand the societal impact of COVID-19 and other public health crises.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.12, S.1776-1792
  5. Zhou, L.; Chaovalit, P.: Ontology-supported polarity mining (2008) 0.00
    8.870564E-4 = product of:
      0.012418789 = sum of:
        0.012418789 = weight(_text_:information in 1343) [ClassicSimilarity], result of:
          0.012418789 = score(doc=1343,freq=6.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.23515764 = fieldWeight in 1343, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1343)
      0.071428575 = coord(1/14)
    
    Abstract
    Polarity mining provides an in-depth analysis of semantic orientations of text information. Motivated by its success in the area of topic mining, we propose an ontology-supported polarity mining (OSPM) approach. The approach aims to enhance polarity mining with ontology by providing detailed topic-specific information. OSPM was evaluated in the movie review domain using both supervised and unsupervised techniques. Results revealed that OSPM outperformed the baseline method without ontology support. The findings of this study not only advance the state of polarity mining research but also shed light on future research directions.
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.1, S.98-110
  6. Liu, J.; Wu, Y.; Zhou, L.: ¬A hybrid method for abstracting newspaper articles (1999) 0.00
    8.277468E-4 = product of:
      0.011588455 = sum of:
        0.011588455 = weight(_text_:information in 4059) [ClassicSimilarity], result of:
          0.011588455 = score(doc=4059,freq=4.0), product of:
            0.052810486 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.030083254 = queryNorm
            0.21943474 = fieldWeight in 4059, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=4059)
      0.071428575 = coord(1/14)
    
    Abstract
    This paper introduces a hybrid method for abstracting Chinese text. It integrates the statistical approach with language understanding. Some linguistic heuristics and segmentation are also incorporated into the abstracting process. The prototype system is of a multipurpose type catering to various users with different requirements. Initial responses show that the proposed method contributes much to the flexibility and accuracy of the automatic Chinese abstracting system. In practice, the present work provides a path to developing an intelligent Chinese system for automating the information
    Source
    Journal of the American Society for Information Science. 50(1999) no.13, S.1234-1245