Search (5 results, page 1 of 1)

  • Filter: author_ss:"Li, G."
  1. Xu, Y.; Li, G.; Mou, L.; Lu, Y.: Learning non-taxonomic relations on demand for ontology extension (2014) 0.04
    0.038012084 = product of:
      0.07602417 = sum of:
        0.07602417 = product of:
          0.15204833 = sum of:
            0.15204833 = weight(_text_:learning in 2961) [ClassicSimilarity], result of:
              0.15204833 = score(doc=2961,freq=10.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.6618366 = fieldWeight in 2961, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2961)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
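The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output, and its numbers can be reproduced from the formulas it names. A minimal check in Python, with all constants copied from the tree (idf uses Lucene's 1 + ln(maxDocs / (docFreq + 1))):

```python
import math

# Constants copied from the explain tree for result 1 (doc 2961, term "learning").
doc_freq, max_docs = 1382, 44218
idf = 1 + math.log(max_docs / (doc_freq + 1))  # Lucene ClassicSimilarity idf, ~4.464877

query_norm = 0.05145426                        # query-level normalizer from the tree
query_weight = idf * query_norm                # ~0.22973695

tf = math.sqrt(10.0)                           # tf = sqrt(freq); freq of "learning" is 10
field_norm = 0.046875                          # stored per-field length norm
field_weight = tf * idf * field_norm           # ~0.6618366

raw_score = field_weight * query_weight        # ~0.15204833
final_score = raw_score * 0.5 * 0.5            # the two coord(1/2) factors in the tree
print(final_score)                             # ~0.038012084
```

Each intermediate value matches the corresponding line of the explain output to the displayed precision.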
    
    Abstract
    Learning non-taxonomic relations has become an important research topic in ontology extension. Most existing learning approaches rely on expert-crafted corpora; they are normally domain-specific, and acquiring the corpora is laborious and costly. Moreover, static corpora cannot meet personalized needs for discovering semantic relations over arbitrary taxonomies. In this paper, we propose a novel approach for learning non-taxonomic relations on demand. For any supplied taxonomy, the approach focuses on a segment of that taxonomy and dynamically collects information about its concepts, using Wikipedia as a learning source. From the newly generated corpus, non-taxonomic relations are acquired in three steps: (a) semantic relatedness detection; (b) relation extraction between concepts; and (c) relation generalization within a hierarchy. The approach is evaluated on three different predefined taxonomies, and the experimental results show that it is effective in capturing non-taxonomic relations as needed and has good potential for on-demand ontology extension.
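The three steps the abstract names can be sketched on toy data. Everything below (the taxonomy, sentences, and regex pattern) is an illustrative assumption, not the authors' implementation; step (a), relatedness detection, is reduced here to restricting extraction to concepts already present in the taxonomy:

```python
import re

# Toy inputs: a taxonomy fragment (child -> parent) and a tiny "corpus".
taxonomy = {"espresso": "coffee", "latte": "coffee"}
sentences = [
    "espresso contains caffeine",
    "latte contains caffeine",
]

def extract_relations(sents):
    """Step (b): take the verb between two terms as a candidate relation label."""
    triples = []
    for s in sents:
        m = re.match(r"(\w+)\s+(\w+)\s+(\w+)", s)
        if m:
            triples.append((m.group(1), m.group(2), m.group(3)))
    return triples

def generalize(triples, taxonomy):
    """Step (c): lift a relation shared by all children to their parent concept."""
    by_parent = {}
    for subj, rel, obj in triples:
        parent = taxonomy.get(subj)
        if parent:
            by_parent.setdefault((parent, rel, obj), set()).add(subj)
    children_of = {}
    for child, parent in taxonomy.items():
        children_of.setdefault(parent, set()).add(child)
    return [key for key, subs in by_parent.items()
            if subs == children_of[key[0]]]

triples = extract_relations(sentences)
print(generalize(triples, taxonomy))  # [('coffee', 'contains', 'caffeine')]
```

Because both children of "coffee" share the extracted relation, it is generalized one level up the hierarchy, which is the intuition behind step (c).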
  2. Lu, K.; Mao, J.; Li, G.: Toward effective automated weighted subject indexing : a comparison of different approaches in different environments (2018) 0.01
    0.014166266 = product of:
      0.028332531 = sum of:
        0.028332531 = product of:
          0.056665063 = sum of:
            0.056665063 = weight(_text_:learning in 4292) [ClassicSimilarity], result of:
              0.056665063 = score(doc=4292,freq=2.0), product of:
                0.22973695 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.05145426 = queryNorm
                0.24665193 = fieldWeight in 4292, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4292)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Subject indexing plays an important role in supporting subject access to information resources, but current subject indexing systems do not adequately distinguish the importance of assigned subject descriptors. Assigning numeric weights to descriptors to reflect their importance to a document can strengthen the role of subject metadata, and automated weighting methods are more cost-effective than manual ones. This study compares different automated weighting methods in different environments, using two evaluation methods to assess performance. Experiments on three datasets in the biomedical domain suggest that the performance of a weighting method depends on whether it operates on abstracts or on full text: mutual information with a bag-of-words representation shows the best average performance in the full-text environment, while cosine with a bag-of-words representation is best in the abstract environment. The cosine measure performs relatively consistently and robustly, and a direct weighting method, IDF (inverse document frequency), can produce quick and reasonable estimates of the weights. The bag-of-words representation generally outperforms the concept-based representation, and performance can be improved further by using learning-to-rank to integrate the different weighting methods. This study follows up Lu and Mao (Journal of the Association for Information Science and Technology, 66, 1776-1784, 2015), in which an automated weighted subject indexing method was proposed and validated. The findings contribute to more effective weighted subject indexing.
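Two of the weighting signals the abstract compares, direct IDF and cosine similarity over bag-of-words vectors, can be sketched as follows. The corpus, texts, and function names are toy assumptions for illustration, not the study's code or data:

```python
import math
from collections import Counter

# Toy three-document corpus standing in for the indexed collection.
corpus = [
    "subject indexing supports subject access",
    "automated indexing with machine learning",
    "weighted descriptors strengthen subject metadata",
]

def idf(term: str) -> float:
    """Direct IDF weight of a term over the toy corpus (smoothed)."""
    df = sum(term in doc.split() for doc in corpus)
    return math.log(len(corpus) / (1 + df)) + 1

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Weight a descriptor against a document: cosine of their word vectors.
doc = Counter("automated subject indexing for subject metadata".split())
descriptor = Counter("subject indexing".split())
print(idf("subject"), cosine(descriptor, doc))  # 1.0 0.75
```

The learning-to-rank step mentioned in the abstract would then combine such signals (IDF, cosine, mutual information) into a single learned weight per descriptor.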
  3. Li, G.; Siddharth, L.; Luo, J.: Embedding knowledge graph of patent metadata to measure knowledge proximity (2023) 0.01
    0.010457013 = product of:
      0.020914026 = sum of:
        0.020914026 = product of:
          0.04182805 = sum of:
            0.04182805 = weight(_text_:22 in 920) [ClassicSimilarity], result of:
              0.04182805 = score(doc=920,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.23214069 = fieldWeight in 920, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=920)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2023 12:06:55
  4. Wang, S.; Ma, Y.; Mao, J.; Bai, Y.; Liang, Z.; Li, G.: Quantifying scientific breakthroughs by a novel disruption indicator based on knowledge entities (2023) 0.01
    0.008714178 = product of:
      0.017428355 = sum of:
        0.017428355 = product of:
          0.03485671 = sum of:
            0.03485671 = weight(_text_:22 in 882) [ClassicSimilarity], result of:
              0.03485671 = score(doc=882,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.19345059 = fieldWeight in 882, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=882)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2023 18:37:33
  5. Yu, C.; Xue, H.; An, L.; Li, G.: A lightweight semantic-enhanced interactive network for efficient short-text matching (2023) 0.01
    0.008714178 = product of:
      0.017428355 = sum of:
        0.017428355 = product of:
          0.03485671 = sum of:
            0.03485671 = weight(_text_:22 in 890) [ClassicSimilarity], result of:
              0.03485671 = score(doc=890,freq=2.0), product of:
                0.18018405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05145426 = queryNorm
                0.19345059 = fieldWeight in 890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=890)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2023 19:05:27