Search (4 results, page 1 of 1)

  • author_ss:"Li, D."
  1. Li, H.; Wu, H.; Li, D.; Lin, S.; Su, Z.; Luo, X.: PSI: A probabilistic semantic interpretable framework for fine-grained image ranking (2018) 0.03
    0.026838424 = product of:
      0.053676847 = sum of:
        0.053676847 = product of:
          0.107353695 = sum of:
            0.107353695 = weight(_text_:class in 4577) [ClassicSimilarity], result of:
              0.107353695 = score(doc=4577,freq=2.0), product of:
                0.28640816 = queryWeight, product of:
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.05065357 = queryNorm
                0.37482765 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
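The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: tf = sqrt(freq), queryWeight = idf x queryNorm, fieldWeight = tf x idf x fieldNorm, and coord factors for the fraction of matching query clauses. A minimal sketch that reproduces the listed score for result 1 from those components (values copied from the tree above; variable names are ours, not part of Lucene's API):

```python
import math

# Components copied from the explanation tree for doc 4577 (query term "class").
freq       = 2.0          # termFreq=2.0
idf        = 5.6542544    # idf(docFreq=420, maxDocs=44218)
query_norm = 0.05065357   # queryNorm
field_norm = 0.046875     # fieldNorm(doc=4577)
coord      = 0.5 * 0.5    # two nested coord(1/2) factors

tf           = math.sqrt(freq)         # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm        # 0.28640816
field_weight = tf * idf * field_norm   # 0.37482765
score        = query_weight * field_weight * coord

print(score)   # ~0.0268384, matching the 0.026838424 shown above
```

The same formula underlies the score breakdowns of the other results in this list; only the term, frequency, and norm values differ.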
    
    Abstract
    Image ranking is one of the key problems in information science research. However, most current methods focus on improving performance, leaving the semantic gap problem untouched: the learned ranking models are hard to understand. In this article, we therefore aim to learn an interpretable ranking model that tackles the semantic gap in fine-grained image ranking. We propose to combine attribute-based representation with online passive-aggressive (PA) learning-based ranking models to achieve this goal. In addition, because instances in fine-grained image ranking are highly localized, we introduce a supervised constrained clustering method to gather class-balanced training instances for local PA-based models, and we incorporate the learned local models into a unified probabilistic framework. Extensive experiments on the benchmark demonstrate that the proposed framework outperforms state-of-the-art methods in terms of both accuracy and speed.
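The PA component mentioned in the abstract is the standard online passive-aggressive algorithm applied to ranked pairs. As a rough sketch of that idea only (a generic PA-I pairwise update over attribute features, not the authors' PSI implementation; the function name and toy data are invented for illustration):

```python
import numpy as np

def pa_rank_update(w, x_high, x_low, C=1.0, margin=1.0):
    """One passive-aggressive (PA-I) update for a ranked image pair.

    w      : weight vector over attribute features (hypothetical representation)
    x_high : attributes of the image that should rank higher
    x_low  : attributes of the image that should rank lower
    """
    diff = x_high - x_low
    loss = max(0.0, margin - w @ diff)            # hinge loss on the pair
    if loss == 0.0:
        return w                                  # passive: ordering already satisfied
    tau = min(C, loss / max(diff @ diff, 1e-12))  # aggressive: smallest correcting step
    return w + tau * diff

# Toy usage: learn a ranking function from pairwise preferences.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5, 0.0])          # hidden preference weights
w = np.zeros(4)
for _ in range(200):
    a, b = rng.normal(size=4), rng.normal(size=4)
    x_high, x_low = (a, b) if a @ w_true > b @ w_true else (b, a)
    w = pa_rank_update(w, x_high, x_low)
print(w)   # points roughly in the direction of w_true after enough pairs
```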
  2. Li, D.; Kwong, C.-P.; Lee, D.L.: Unified linear subspace approach to semantic analysis (2009) 0.02
    0.022365354 = product of:
      0.044730708 = sum of:
        0.044730708 = product of:
          0.089461416 = sum of:
            0.089461416 = weight(_text_:class in 3321) [ClassicSimilarity], result of:
              0.089461416 = score(doc=3321,freq=2.0), product of:
                0.28640816 = queryWeight, product of:
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.05065357 = queryNorm
                0.31235638 = fieldWeight in 3321, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.6542544 = idf(docFreq=420, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3321)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Basic Vector Space Model (BVSM) is well known in information retrieval. Unfortunately, its retrieval effectiveness is limited because it is based on literal term matching. The Generalized Vector Space Model (GVSM) and Latent Semantic Indexing (LSI) are two prominent semantic retrieval methods, both of which assume there is some underlying latent semantic structure in a dataset that can be used to improve retrieval performance. However, while this structure may be derived from both the term space and the document space, GVSM exploits only the former and LSI only the latter. In this article, the latent semantic structure of a dataset is examined from a dual perspective; namely, we consider the term space and the document space simultaneously. This new viewpoint has a natural connection to the notion of kernels. Specifically, a unified kernel function can be derived for a class of vector space models. The dual perspective provides a deeper understanding of the semantic space and makes transparent the geometrical meaning of the unified kernel function. New semantic analysis methods based on the unified kernel function are developed, which combine the advantages of LSI and GVSM. We also prove that the new methods are stable in the sense that even when the selected rank of the truncated Singular Value Decomposition (SVD) is far from the optimum, retrieval performance is not degraded significantly. Experiments performed on standard test collections show that our methods are promising.
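For context, the LSI half of this comparison factors the term-document matrix with a truncated SVD of rank k and compares queries to documents in the resulting latent space; the abstract's stability claim is that retrieval quality should not collapse even when k is far from optimal. A minimal, generic LSI sketch (not the paper's unified kernel method; the toy matrix and helper names are invented for illustration):

```python
import numpy as np

# Tiny term-document matrix (rows = terms, columns = documents); toy data.
A = np.array([
    [1, 0, 1, 0],   # "vector"
    [1, 1, 0, 0],   # "space"
    [0, 1, 0, 1],   # "kernel"
    [0, 0, 1, 1],   # "semantic"
], dtype=float)

k = 2                                     # selected rank of the truncated SVD
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

doc_latent = (np.diag(sk) @ Vtk).T        # documents in the k-dimensional latent space

# Fold a query into the same space: q_k = q^T U_k diag(1/s_k).
q = np.array([1, 1, 0, 0], dtype=float)   # query mentioning "vector" and "space"
q_latent = q @ Uk @ np.diag(1.0 / sk)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

ranking = np.argsort([-cosine(q_latent, d) for d in doc_latent])
print(ranking)   # document indices ordered best match first
```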
  3. Shen, X.; Li, D.; Shen, C.: Evaluating China's university library Web sites using correspondence analysis (2006) 0.01
    0.013725719 = product of:
      0.027451439 = sum of:
        0.027451439 = product of:
          0.054902878 = sum of:
            0.054902878 = weight(_text_:22 in 5277) [ClassicSimilarity], result of:
              0.054902878 = score(doc=5277,freq=2.0), product of:
                0.17738017 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05065357 = queryNorm
                0.30952093 = fieldWeight in 5277, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5277)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:40:18
  4. Li, D.: Knowledge representation and discovery based on linguistic atoms (1998) 0.01
    0.010294289 = product of:
      0.020588579 = sum of:
        0.020588579 = product of:
          0.041177157 = sum of:
            0.041177157 = weight(_text_:22 in 3836) [ClassicSimilarity], result of:
              0.041177157 = score(doc=3836,freq=2.0), product of:
                0.17738017 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05065357 = queryNorm
                0.23214069 = fieldWeight in 3836, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3836)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Contribution to a special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997