Search (8 results, page 1 of 1)

  • author_ss:"Li, M."
  1. Wenyin, L.; Chen, Z.; Li, M.; Zhang, H.: A media agent for automatically building a personalized semantic index of Web media objects (2001) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 6522) [ClassicSimilarity], result of:
              0.00994303 = score(doc=6522,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 6522, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6522)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
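
     The explanation tree above is standard Lucene ClassicSimilarity (TF-IDF) output; the same structure recurs for every entry below. Its final value can be reproduced directly from the listed factors. A minimal Python sketch using the numbers shown for doc 6522 (all values copied from the listing rather than recomputed from the index):

     import math

     freq = 12.0                           # termFreq of "a" in the field
     idf = 1.153047                        # idf(docFreq=37942, maxDocs=44218)
     query_norm = 0.046056706              # queryNorm
     field_norm = 0.046875                 # fieldNorm(doc=6522)
     coord = 0.5 * 0.5                     # the two coord(1/2) factors

     tf = math.sqrt(freq)                  # 3.4641016
     query_weight = idf * query_norm       # 0.053105544 = queryWeight
     field_weight = tf * idf * field_norm  # 0.18723148 = fieldWeight
     print(coord * query_weight * field_weight)   # ~0.0024857575, the entry's score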
    
    Abstract
    A novel idea of a media agent is briefly presented, which can automatically build a personalized semantic index of Web media objects for each particular user. Because the Web is a rich source of multimedia data and the text content on the Web pages is usually semantically related to those media objects on the same pages, the media agent can automatically collect the URLs and related text, and then build the index of the multimedia data, on behalf of the user whenever and wherever she accesses these multimedia data or their container Web pages. Moreover, the media agent can also use an off-line crawler to build the index for those multimedia objects that are relevant to the user's favorites but have not yet been accessed by the user. When the user wants to find these multimedia data once again, the semantic index facilitates text-based search for her.
    Type
    a
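
     The indexing idea described in the abstract above can be illustrated with a minimal sketch (hypothetical data structures and names, not the authors' implementation): the agent records each media object's URL together with the text of the page it appears on, so the media can later be retrieved by text search.

     from collections import defaultdict

     semantic_index = defaultdict(set)          # term -> URLs of media objects

     def index_page(media_urls, page_text):
         """Associate each media object on a visited page with the page's text."""
         terms = {t.lower() for t in page_text.split()}
         for url in media_urls:
             for term in terms:
                 semantic_index[term].add(url)

     def search_media(query):
         """Text-based search over the previously indexed media objects."""
         hits = [semantic_index.get(t.lower(), set()) for t in query.split()]
         return set.intersection(*hits) if hits else set()

     index_page(["http://example.org/cat.jpg"], "a cute cat sitting on a sofa")
     print(search_media("cute cat"))            # {'http://example.org/cat.jpg'}
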
  2. Bu, Y.; Li, M.; Gu, W.; Huang, W.-b.: Topic diversity : a discipline scheme-free diversity measurement for journals (2021) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 209) [ClassicSimilarity], result of:
              0.009471525 = score(doc=209,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 209, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=209)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Scientometrics has many citation-based measurements for characterizing diversity, but most of these measurements depend on human-designed categories, and the granularity of discipline classifications sometimes does not allow in-depth analysis. As such, the current paper proposes a new measurement for quantifying journals' diversity by utilizing the abstracts of scientific publications in journals, namely topic diversity (TD). Specifically, we apply a topic detection method to extract fine-grained topics, rather than disciplines, in journals and adapt certain diversity indicators to calculate TD. Since TD only needs the abstracts of publications as input, rather than citing relationships between publications, this measurement has the potential to be widely used in scientometrics.
    Type
    a
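
     A rough illustration of the idea in the abstract above (not the authors' exact indicator): detect a topic for each of a journal's publications, then apply a standard diversity measure, here Shannon entropy, to the resulting topic distribution.

     import math
     from collections import Counter

     def topic_diversity(topic_labels):
         """Shannon entropy over the topics detected in a journal's publications."""
         counts = Counter(topic_labels)
         total = sum(counts.values())
         return -sum((c / total) * math.log(c / total) for c in counts.values())

     journal_a = ["optics", "optics", "optics", "lasers"]            # narrow journal
     journal_b = ["optics", "lasers", "imaging", "materials"]        # broader journal
     print(topic_diversity(journal_a) < topic_diversity(journal_b))  # True
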
  3. Li, M.; Li, H.; Zhou, Z.-H.: Semi-supervised document retrieval (2009) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 4218) [ClassicSimilarity], result of:
              0.00894975 = score(doc=4218,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 4218, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4218)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper proposes a new machine learning method for constructing ranking models in document retrieval. The method, referred to as SSRank, aims to combine the advantages of traditional Information Retrieval (IR) methods and recently proposed supervised learning methods for IR. These advantages include the use of a limited amount of labeled data and a rich model representation. To do so, the method adopts a semi-supervised learning framework for ranking model construction. Specifically, given a small number of labeled documents with respect to some queries, the method effectively labels the unlabeled documents for the queries. It then uses all the labeled data to train a machine learning model (in our case, a Neural Network). In the data labeling, the method also makes use of a traditional IR model (in our case, BM25). A stopping criterion based on machine learning theory is given for the data labeling process. Experimental results on three benchmark datasets and one web search dataset indicate that SSRank consistently and almost always significantly outperforms the baseline methods (unsupervised and supervised learning methods) given the same amount of labeled data. This is because SSRank can effectively leverage unlabeled data in learning.
    Type
    a
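
     The training scheme summarized in the abstract above can be sketched as follows (bm25_score and ranker are hypothetical stand-ins, and the theory-based stopping criterion is omitted; this is not the published SSRank code):

     def pseudo_label(query, unlabeled_docs, bm25_score, threshold=10.0):
         """Derive relevance labels for unlabeled documents from a traditional IR model."""
         return [(query, doc, 1 if bm25_score(query, doc) >= threshold else 0)
                 for doc in unlabeled_docs]

     def train_ssrank(labeled_triples, unlabeled_docs, queries, bm25_score, ranker):
         """Fit a learned ranker on human labels plus BM25-derived pseudo labels."""
         data = list(labeled_triples)               # (query, doc, label) triples
         for q in queries:
             data.extend(pseudo_label(q, unlabeled_docs, bm25_score))
         ranker.fit(data)                           # e.g. a small neural network ranker
         return ranker
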
  4. Bennett, C.H.; Li, M.; Ma, B.: Die Evolution der Kettenbriefe (2004) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 2418) [ClassicSimilarity], result of:
              0.008118451 = score(doc=2418,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 2418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2418)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  5. Liu, X.; Bu, Y.; Li, M.; Li, J.: Monodisciplinary collaboration disrupts science more than multidisciplinary collaboration (2024) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 1202) [ClassicSimilarity], result of:
              0.008118451 = score(doc=1202,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 1202, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1202)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Collaboration across disciplines is a critical form of scientific collaboration for solving complex problems and making innovative contributions. This study focuses on the association between multidisciplinary collaboration, measured by coauthorship in publications, and the disruption of publications, measured by the Disruption (D) index. We used authors' affiliations as a proxy for the disciplines to which they belong and categorized each article as either multidisciplinary or monodisciplinary collaboration. The D index quantifies the extent to which a study disrupts its predecessors. We selected 13 journals that publish articles in six disciplines from the Microsoft Academic Graph (MAG) database, constructed regression models with fixed effects, and estimated the relationship between the variables. The findings show that articles with monodisciplinary collaboration are more disruptive than those with multidisciplinary collaboration. Furthermore, we uncovered the mechanism by which monodisciplinary collaboration disrupts science more than multidisciplinary collaboration by exploring the references of the sampled publications.
    Type
    a
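
     For context, the Disruption (D) index mentioned in the abstract above is commonly computed from the citation relations around a focal paper. A minimal sketch, assuming the standard formulation D = (n_f - n_b) / (n_f + n_b + n_r), where later papers either cite only the focal paper (n_f), cite both it and its references (n_b), or cite only its references (n_r):

     def disruption_index(citing_focal, citing_references):
         """citing_focal / citing_references: sets of IDs of later papers."""
         n_f = len(citing_focal - citing_references)   # cite the focal paper only
         n_b = len(citing_focal & citing_references)   # cite the focal paper and its refs
         n_r = len(citing_references - citing_focal)   # cite the refs only
         total = n_f + n_b + n_r
         return (n_f - n_b) / total if total else 0.0

     # A paper whose citers ignore its references looks disruptive (D close to 1).
     print(disruption_index({"p1", "p2", "p3"}, {"p3", "p4"}))   # 0.25
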
  6. Kirchherr, W.; Li, M.; Vitányi, P.: The miraculous universal distribution (1997) 0.00
    0.001913537 = product of:
      0.003827074 = sum of:
        0.003827074 = product of:
          0.007654148 = sum of:
            0.007654148 = weight(_text_:a in 914) [ClassicSimilarity], result of:
              0.007654148 = score(doc=914,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14413087 = fieldWeight in 914, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=914)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    What is it, exactly, that scientists do? How, exactly, do they do it? How is a scientific hypothesis formulated? How does one choose one hypothesis over another? Scientists engage in what is usually called 'inductive reasoning'. Inductive reasoning entails making predictions about future behavior based on past observations. However, defining the proper method of formulating such predictions has occupied philosophers through the ages.
    Type
    a
  7. Gu, D.; Liu, H.; Zhao, H.; Yang, X.; Li, M.; Lian, C.: A deep learning and clustering-based topic consistency modeling framework for matching health information supply and demand (2024) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 1209) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=1209,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 1209, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1209)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Improving health literacy through health information dissemination is one of the most economical and effective mechanisms for improving population health. This process needs to fully accommodate the thematic suitability of health information supply and demand and reduce the impact of information overload and supply-demand mismatch on the enthusiasm for health information acquisition. We propose a health information topic modeling analysis framework that integrates deep learning methods and clustering techniques to model the supply-side and demand-side topics of health information and to quantify the thematic alignment of supply and demand. To validate the effectiveness of the framework, we conducted an empirical analysis on a dataset of 90,418 pieces of textual data from two prominent social networking platforms. The results show that the supply of health information in general has not yet met the demand; a considerable part of the demand, especially for disease-related topics, remains unmet; and there is a clear inconsistency between the supply and demand sides for the same health topics. Public health policy-making departments and content producers can adjust their information selection and dissemination strategies according to the distribution of identified health topics, thereby improving the effectiveness of public health information dissemination.
    Type
    a
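
     A minimal sketch of the supply-demand comparison described in the abstract above (the deep learning and clustering components are assumed to have already assigned a topic label to each text; the overlap measure is an illustrative choice, not necessarily the paper's):

     from collections import Counter

     def topic_distribution(topic_labels, topics):
         counts = Counter(topic_labels)
         total = sum(counts.values()) or 1
         return [counts[t] / total for t in topics]

     def supply_demand_consistency(supply_labels, demand_labels):
         """Overlap between supply- and demand-side topic distributions (1 = identical)."""
         topics = sorted(set(supply_labels) | set(demand_labels))
         s = topic_distribution(supply_labels, topics)
         d = topic_distribution(demand_labels, topics)
         return sum(min(a, b) for a, b in zip(s, d))

     supply = ["nutrition", "nutrition", "fitness"]
     demand = ["diabetes", "diabetes", "nutrition"]
     print(supply_demand_consistency(supply, demand))   # 0.33..., low alignment
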
  8. Chen, Z.; Wenyin, L.; Zhang, F.; Li, M.; Zhang, H.: Web mining for Web image retrieval (2001) 0.00
    0.0014647468 = product of:
      0.0029294936 = sum of:
        0.0029294936 = product of:
          0.005858987 = sum of:
            0.005858987 = weight(_text_:a in 6521) [ClassicSimilarity], result of:
              0.005858987 = score(doc=6521,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.11032722 = fieldWeight in 6521, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6521)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The popularity of digital images is rapidly increasing due to improving digital imaging technologies and convenient availability facilitated by the Internet. However, how to find user-intended images from the Internet is nontrivial. The main reason is that Web images are usually not annotated using semantic descriptors. In this article, we present an effective approach to, and a prototype system for, image retrieval from the Internet using Web mining. The system can also serve as a Web image search engine. One of the key ideas in the approach is to extract the text information on the Web pages to semantically describe the images. The text description is then combined with other low-level image features in the image similarity assessment. Another main contribution of this work is that we apply data mining to the log of users' feedback to improve image retrieval performance in three aspects. First, the accuracy of the document space model of image representation obtained from the Web pages is improved by removing clutter and irrelevant text information. Second, the user space model of users' representation of images is constructed and then combined with the document space model to eliminate the mismatch between the page author's expression and the user's understanding and expectation. Third, the relationship between low-level and high-level features is discovered, which is extremely useful for assigning the low-level features' weights in the similarity assessment.
    Type
    a
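
     The similarity assessment described in the abstract above, combining surrounding-text descriptions with low-level visual features, can be sketched as a weighted mix (feature extraction is stubbed, and the weight is exactly what the mined feedback logs would help tune; hypothetical names throughout):

     def text_similarity(query_terms, surrounding_text):
         """Fraction of query terms found in the text extracted around the image."""
         terms = set(surrounding_text.lower().split())
         q = {t.lower() for t in query_terms}
         return len(q & terms) / len(q) if q else 0.0

     def visual_similarity(query_features, image_features):
         """Placeholder low-level comparison of visual feature vectors."""
         diff = sum(abs(a - b) for a, b in zip(query_features, image_features))
         return 1.0 / (1.0 + diff)

     def combined_similarity(query_terms, query_feat, page_text, image_feat, w_text=0.7):
         return (w_text * text_similarity(query_terms, page_text)
                 + (1 - w_text) * visual_similarity(query_feat, image_feat))

     print(combined_similarity(["sunset", "beach"], [0.2, 0.8],
                               "a red sunset over the beach", [0.25, 0.7]))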