Search (3 results, page 1 of 1)

  • × author_ss:"Liu, S."
  • × year_i:[2010 TO 2020}
  1. Wei, F.; Li, W.; Liu, S.: iRANK: a rank-learn-combine framework for unsupervised ensemble ranking (2010) 0.00
    0.0018075579 = product of:
      0.014460463 = sum of:
        0.014460463 = product of:
          0.04338139 = sum of:
            0.04338139 = weight(_text_:problem in 3472) [ClassicSimilarity], result of:
              0.04338139 = score(doc=3472,freq=4.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.33160037 = fieldWeight in 3472, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3472)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
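The breakdown above is a Lucene ClassicSimilarity (TF-IDF) explain tree. As a minimal sketch, the numbers can be reproduced from the listed constants; the coord factors (1/3, 1/8) down-weight the score because only one of three and one of eight query clauses matched:

```python
import math

# Constants copied from the explain tree for result 1 (doc 3472).
doc_freq, max_docs = 1723, 44218   # docs containing "problem" / total docs
freq = 4.0                         # term frequency of "problem" in the field
query_norm = 0.030822188
field_norm = 0.0390625             # length normalization, quantized by Lucene

idf = 1 + math.log(max_docs / (doc_freq + 1))   # -> 4.244485
tf = math.sqrt(freq)                            # -> 2.0
query_weight = idf * query_norm                 # -> 0.13082431
field_weight = tf * idf * field_norm            # -> 0.33160037
term_score = query_weight * field_weight        # -> 0.04338139
final = term_score * (1 / 3) * (1 / 8)          # coord factors -> 0.0018075579
```

The same formulas account for results 2 and 3; only freq, field_norm, and (for result 3) the idf constants differ.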
    
    Abstract
    The authors address the problem of unsupervised ensemble ranking. Traditional approaches either combine multiple ranking criteria into a unified representation to obtain an overall ranking score or utilize rank fusion or aggregation techniques to combine the ranking results. Beyond these combine-then-rank and rank-then-combine approaches, the authors propose a novel rank-learn-combine ranking framework, called Interactive Ranking (iRANK), which allows two base rankers to teach each other before combination by providing their own ranking results as feedback to the other, boosting ranking performance. This mutual ranking refinement process continues until the two base rankers can no longer learn from each other. Overall performance improves because the mutual learning mechanism enhances each base ranker. The authors further design two ranking refinement strategies to use the feedback efficiently and effectively, based on reasonable assumptions and rational analysis. Although iRANK is applicable to many applications, as a case study the authors apply the framework to sentence ranking in query-focused summarization and evaluate its effectiveness on the DUC 2005 and 2006 data sets. The results are encouraging, with consistent and promising improvements.
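The abstract describes the mutual refinement loop but not its update rule, and the paper's two refinement strategies are not reproduced here. As a purely hypothetical sketch, assuming each base ranker accepts the other's scores as linear feedback (the function name, alpha mixing, and averaging combiner are all illustrative assumptions, not the paper's method):

```python
def irank(scores_a, scores_b, alpha=0.5, eps=1e-6, max_iter=100):
    """Hypothetical mutual-refinement loop: each base ranker mixes the
    other's scores in as feedback until neither changes appreciably,
    then the refined rankers are combined into one overall ranking."""
    for _ in range(max_iter):
        new_a = [(1 - alpha) * a + alpha * b for a, b in zip(scores_a, scores_b)]
        new_b = [(1 - alpha) * b + alpha * a for a, b in zip(scores_a, scores_b)]
        delta = max(abs(x - y)
                    for x, y in zip(new_a + new_b, scores_a + scores_b))
        scores_a, scores_b = new_a, new_b
        if delta < eps:   # the rankers can no longer learn from each other
            break
    combined = [(a + b) / 2 for a, b in zip(scores_a, scores_b)]
    return sorted(range(len(combined)), key=lambda i: -combined[i])  # best first
```

For example, irank([0.9, 0.1, 0.5], [0.2, 0.8, 0.5]) ranks item 0 first even though the second ranker alone would have ranked item 1 first.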
  2. Wu, S.; Liu, S.; Wang, Y.; Timmons, T.; Uppili, H.; Bedrick, S.; Hersh, W.; Liu, H.: Intrainstitutional EHR collections for patient-level information retrieval (2017) 0.00
    0.001789391 = product of:
      0.014315128 = sum of:
        0.014315128 = product of:
          0.042945385 = sum of:
            0.042945385 = weight(_text_:problem in 3925) [ClassicSimilarity], result of:
              0.042945385 = score(doc=3925,freq=2.0), product of:
                0.13082431 = queryWeight, product of:
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.030822188 = queryNorm
                0.3282676 = fieldWeight in 3925, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.244485 = idf(docFreq=1723, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3925)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Abstract
    Research in clinical information retrieval has long been stymied by the lack of open resources. However, both clinical information retrieval research innovation and legitimate privacy concerns can be served by the creation of intrainstitutional, fully protected resources. In this article, we provide some principles and tools for information retrieval resource-building in the unique problem setting of patient-level information retrieval, following the tradition of the Cranfield paradigm. We further include an analysis of parallel information retrieval resources at Oregon Health & Science University and Mayo Clinic that were built on these principles.
  3. Liu, S.; Chen, C.: The differences between latent topics in abstracts and citation contexts of citing papers (2013) 0.00
    8.699961E-4 = product of:
      0.0069599687 = sum of:
        0.0069599687 = product of:
          0.020879906 = sum of:
            0.020879906 = weight(_text_:22 in 671) [ClassicSimilarity], result of:
              0.020879906 = score(doc=671,freq=2.0), product of:
                0.10793405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.030822188 = queryNorm
                0.19345059 = fieldWeight in 671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=671)
          0.33333334 = coord(1/3)
      0.125 = coord(1/8)
    
    Date
    22. 3.2013 19:50:00