Search (7 results, page 1 of 1)

  • author_ss:"Kwok, K.L."
  1. Kwok, K.L.: Improving English and Chinese ad-hoc retrieval : a TIPSTER text phase 3 project report (2000)
    Type
    a
  2. Kwok, K.L.: A network approach to probabilistic information retrieval (1995)
    Abstract
    Shows how probabilistic information retrieval based on document components may be implemented as a feedforward (feedbackward) artificial neural network. The network supports adaptation of connection weights as well as the growing of new edges between queries and terms, based on user relevance feedback data for training, and it reflects query modification and expansion in information retrieval. A learning rule is applied that can also be viewed as supporting sequential learning with a harmonic-sequence learning rate. Experimental results with 4 standard small collections and a large Wall Street Journal collection show that small query expansion levels of about 30 terms achieve most of the gains in the low-recall, high-precision region, while larger expansion levels continue to provide gains in the high-recall, low-precision region of a precision-recall curve.
    Type
    a
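
The abstract above mentions a learning rule with a harmonic-sequence learning rate and query expansion of roughly 30 terms. A minimal Python sketch of that general idea follows; it is not the paper's PIRCS network, and the function names, the 1/t schedule applied per relevant document, and the token-count update are all illustrative assumptions.

from collections import Counter, defaultdict

def harmonic_rate(step, base=1.0):
    """Learning rate 1/t: the harmonic-sequence schedule mentioned in the abstract."""
    return base / step

def adapt_and_expand(query_terms, relevant_docs, expansion_level=30):
    """Adapt query-term weights from relevance feedback, then expand the query."""
    weights = defaultdict(float, {t: 1.0 for t in query_terms})
    for step, doc in enumerate(relevant_docs, start=1):
        rate = harmonic_rate(step)
        for term, freq in Counter(doc).items():
            # Strengthen existing query-term edges and grow new ones from feedback data.
            weights[term] += rate * freq
    # Keep the original terms plus the highest-weighted new terms (about 30 in the reported runs).
    new_terms = sorted((t for t in weights if t not in query_terms),
                       key=lambda t: weights[t], reverse=True)[:expansion_level]
    return {t: weights[t] for t in list(query_terms) + new_terms}

if __name__ == "__main__":
    feedback_docs = [["probabilistic", "retrieval", "network", "network"],
                     ["neural", "network", "feedback", "retrieval"]]
    print(adapt_and_expand(["probabilistic", "retrieval"], feedback_docs, expansion_level=3))
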
  3. Wu, H.C.; Luk, R.W.P.; Wong, K.F.; Kwok, K.L.: A retrospective study of a hybrid document-context based retrieval model (2007)
    Abstract
    This paper describes our novel retrieval model that is based on contexts of query terms in documents (i.e., document contexts). Our model is novel because it explicitly takes into account the document contexts instead of implicitly using them to find query expansion terms. The model is based on simulating a user making relevance decisions, and it is a hybrid of various existing effective models and techniques. It estimates the relevance decision preference of a document context as the log-odds and uses smoothing techniques, as found in language models, to solve the problem of zero probabilities. It combines these estimated preferences of document contexts using different types of aggregation operators that comply with different relevance decision principles (e.g., the aggregate relevance principle). Our model is evaluated using retrospective experiments (i.e., with full relevance information), because such experiments can (a) reveal the potential of the model, (b) isolate the problems of the model from those of parameter estimation, (c) provide information about the major factors affecting the retrieval effectiveness of the model, and (d) show whether the model obeys the probability ranking principle. The model is promising, as its mean average precision is 60-80% in our experiments using different TREC ad hoc English collections and the NTCIR-5 ad hoc Chinese collection. Our experiments showed that (a) the operators consistent with the aggregate relevance principle were effective in combining the estimated preferences, and (b) estimating probabilities using the contexts in the relevant documents produced better retrieval effectiveness than using the entire relevant documents.
    Type
    a
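
A minimal sketch of the document-context idea described above, under assumptions of my own: each query-term occurrence defines a context window, its relevance-decision preference is a smoothed log-odds score, and the per-context preferences are combined with a simple aggregation operator. The Jelinek-Mercer smoothing, the window width, and the choice of max as aggregator are illustrative, not the paper's exact formulation.

import math

def smoothed_prob(term, model, background, lam=0.8):
    """Jelinek-Mercer smoothing so unseen terms never get a zero probability."""
    return lam * model.get(term, 0.0) + (1 - lam) * background.get(term, 1e-9)

def context_log_odds(window, rel_model, nonrel_model, background):
    """Log-odds relevance-decision preference of one document context (a token window)."""
    return sum(math.log(smoothed_prob(t, rel_model, background) /
                        smoothed_prob(t, nonrel_model, background))
               for t in window)

def score_document(tokens, query_terms, rel_model, nonrel_model, background, width=5):
    """Score a document by aggregating the preferences of its query-term contexts."""
    prefs = [context_log_odds(tokens[max(0, i - width):i + width + 1],
                              rel_model, nonrel_model, background)
             for i, t in enumerate(tokens) if t in query_terms]
    if not prefs:
        return float("-inf")
    # Aggregation operator: max reads as "the best context decides"; a sum is another
    # option consistent with combining several relevant contexts (aggregate relevance principle).
    return max(prefs)
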
  4. Kwok, K.L.: The use of titles and cited titles as document representations for automatic classification (1975)
    Type
    a
  5. Kwok, K.L.; Grunfeld, L.: TREC-4 ad-hoc, routing retrieval and filtering experiments using PIRCS (1996)
    Type
    a
  6. Kwok, K.L.; Grunfeld, L.: TREC-5 English and Chinese retrieval experiments using PIRCS (1997)
    Type
    a
  7. Kwok, K.L.: Employing multiple representations for Chinese information retrieval (1999)
    Abstract
    For information retrieval in the Chinese language, 3 representation methods for texts are popular, namely: 1-gram or character, bigram, and short-word. Each has its advantages as well as drawbacks. Employing more than one method may combine their advantages and enhance retrieval effectiveness. We investigated 2 ways of using them simultaneously: mixing representations in documents and queries, and combining retrieval lists obtained via different representations. The experiments were done with the 170 MB evaluated Chinese corpora and 54 long and short queries available from the TREC program, using our Probabilistic Indexing and Retrieval Components System (PIRCS retrieval system). Experiments show that good retrieval need not depend on accurate word segmentation; approximate segmentation into short-words will do. Results also show and confirm that bigram representation alone works well; mixing characters with bigram representation boosts effectiveness further, but it is preferable to mix characters with short-word indexing, which is more efficient, needs fewer resources, and gives better retrieval more often. Combining retrieval lists from short-word with character representation and from bigram indexing provides the best retrieval results, but at a substantial cost.
    Type
    a
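
A minimal sketch of the three representations discussed above and of combining retrieval lists, with illustrative assumptions throughout: the greedy longest-match segmentation, the four-character maximum word length, and the CombSUM-style score fusion are stand-ins, not the PIRCS implementation.

def characters(text):
    """1-gram (single-character) representation."""
    return [c for c in text if not c.isspace()]

def bigrams(text):
    """Overlapping character-bigram representation."""
    chars = characters(text)
    return [a + b for a, b in zip(chars, chars[1:])]

def short_words(text, lexicon, max_len=4):
    """Approximate short-word segmentation: greedy longest match, falling back to single characters."""
    chars, out, i = characters(text), [], 0
    while i < len(chars):
        for length in range(min(max_len, len(chars) - i), 0, -1):
            piece = "".join(chars[i:i + length])
            if length == 1 or piece in lexicon:
                out.append(piece)
                i += length
                break
    return out

def combine_lists(run_a, run_b):
    """CombSUM-style fusion of two {doc_id: score} retrieval lists from different representations."""
    def norm(run):
        top = max(run.values(), default=0.0) or 1.0
        return {doc: score / top for doc, score in run.items()}
    a, b = norm(run_a), norm(run_b)
    return sorted(((doc, a.get(doc, 0.0) + b.get(doc, 0.0)) for doc in set(a) | set(b)),
                  key=lambda pair: pair[1], reverse=True)
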