Search (8 results, page 1 of 1)

  • author_ss:"Allan, J."
  1. Agosti, M.; Allan, J.: Introduction to the special issue on methods and tools for the automatic construction of hypertext (1997) 0.01
    0.008216376 = product of:
      0.049298257 = sum of:
        0.049298257 = product of:
          0.09859651 = sum of:
            0.09859651 = weight(_text_:methods in 149) [ClassicSimilarity], result of:
              0.09859651 = score(doc=149,freq=4.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.62818956 = fieldWeight in 149, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.078125 = fieldNorm(doc=149)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Footnote
    Contribution to a special issue on methods and tools for the automatic construction of hypertext
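The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula. As a minimal sketch, the constants from the first result (term "methods" in doc 149) can be plugged into the classic defaults to reproduce the reported score; the helper names below are illustrative, not part of the source:

```python
import math

# Lucene ClassicSimilarity (TF-IDF), reproducing the first explain tree
# ("methods" in doc 149). Formulas are Lucene's classic defaults; the
# constants are copied directly from the explain output above.

def idf(doc_freq, max_docs):
    # idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # tf(t in d) = sqrt(freq)
    return math.sqrt(freq)

query_norm = 0.03903913            # queryNorm from the explain tree
field_norm = 0.078125              # fieldNorm(doc=149)

idf_methods = idf(2156, 44218)     # ~ 4.0204134
query_weight = idf_methods * query_norm              # ~ 0.15695344
field_weight = tf(4.0) * idf_methods * field_norm    # ~ 0.62818956

term_score = query_weight * field_weight             # ~ 0.09859651
# coord(1/2) and coord(1/6) scale down for query clauses that did not match
final_score = term_score * 0.5 * (1.0 / 6.0)         # ~ 0.008216376

print(final_score)
```

The same arithmetic, with different freq, idf, and fieldNorm values, accounts for every explain tree in this result list.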
  2. Allan, J.: Building hypertext using information retrieval (1997) 0.01
    0.006573101 = product of:
      0.039438605 = sum of:
        0.039438605 = product of:
          0.07887721 = sum of:
            0.07887721 = weight(_text_:methods in 148) [ClassicSimilarity], result of:
              0.07887721 = score(doc=148,freq=4.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.5025517 = fieldWeight in 148, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=148)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
Presents entirely automatic methods for gathering documents for a hypertext, linking the set, and annotating those connections with a description of the type of the link. Document linking is based upon information retrieval similarity measures with adjustable levels of strictness. Applies an approach inspired by relationship visualization techniques and by graph simplification, to show how to identify automatically tangential, revision, summary, expansion, comparison, contrast, equivalence, and aggregate links
    Footnote
    Contribution to a special issue on methods and tools for the automatic construction of hypertext
  3. Allan, J.; Croft, W.B.; Callan, J.: ¬The University of Massachusetts and a dozen TRECs (2005) 0.01
    0.0053372756 = product of:
      0.032023653 = sum of:
        0.032023653 = product of:
          0.06404731 = sum of:
            0.06404731 = weight(_text_:29 in 5086) [ClassicSimilarity], result of:
              0.06404731 = score(doc=5086,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.46638384 = fieldWeight in 5086, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5086)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    29. 3.1996 18:16:49
  4. Salton, G.; Buckley, C.; Allan, J.: Automatic structuring of text files (1992) 0.00
    0.0046478845 = product of:
      0.027887305 = sum of:
        0.027887305 = product of:
          0.05577461 = sum of:
            0.05577461 = weight(_text_:methods in 6507) [ClassicSimilarity], result of:
              0.05577461 = score(doc=6507,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.35535768 = fieldWeight in 6507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6507)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
In many practical information retrieval situations, it is necessary to process heterogeneous text databases that vary greatly in scope and coverage and deal with many different subjects. In such an environment it is important to provide flexible access to individual text pieces and to structure the collection so that related text elements are identified and properly linked. Describes methods for the automatic structuring of heterogeneous text collections and the construction of browsing tools and access procedures that facilitate collection use. Illustrates these methods with searches using a large automated encyclopedia
  5. Salton, G.; Allan, J.; Buckley, C.; Singhal, A.: Automatic analysis, theme generation, and summarization of machine readable texts (1994) 0.00
    0.0044477303 = product of:
      0.02668638 = sum of:
        0.02668638 = product of:
          0.05337276 = sum of:
            0.05337276 = weight(_text_:29 in 1949) [ClassicSimilarity], result of:
              0.05337276 = score(doc=1949,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.38865322 = fieldWeight in 1949, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1949)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    16. 8.1998 12:30:29
  6. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.00
    0.0044077197 = product of:
      0.026446318 = sum of:
        0.026446318 = product of:
          0.052892637 = sum of:
            0.052892637 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.052892637 = score(doc=3103,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    27. 2.1999 20:55:22
  7. Papka, R.; Allan, J.: Topic detection and tracking : event clustering as a basis for first story detection (2000) 0.00
    0.0034859132 = product of:
      0.020915478 = sum of:
        0.020915478 = product of:
          0.041830957 = sum of:
            0.041830957 = weight(_text_:methods in 34) [ClassicSimilarity], result of:
              0.041830957 = score(doc=34,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.26651827 = fieldWeight in 34, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=34)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
Topic Detection and Tracking (TDT) is a new research area that investigates the organization of information by event rather than by subject. In this paper, we provide an overview of the TDT research program from its inception to the third phase that is now underway. We also discuss our approach to two of the TDT problems in detail. For event clustering (Detection), we show that classic Information Retrieval clustering techniques can be modified slightly to provide effective solutions. For first story detection, we show that similar methods provide satisfactory results, although substantial work remains. In both cases, we explore solutions that model the temporal relationship between news stories. We also investigate the use of phrase extraction to capture the who, what, when, and where contained in news
  8. Dang, E.K.F.; Luk, R.W.P.; Allan, J.: Beyond bag-of-words : bigram-enhanced context-dependent term weights (2014) 0.00
    0.0029049278 = product of:
      0.017429566 = sum of:
        0.017429566 = product of:
          0.034859132 = sum of:
            0.034859132 = weight(_text_:methods in 1283) [ClassicSimilarity], result of:
              0.034859132 = score(doc=1283,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.22209854 = fieldWeight in 1283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1283)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    While term independence is a widely held assumption in most of the established information retrieval approaches, it is clearly not true and various works in the past have investigated a relaxation of the assumption. One approach is to use n-grams in document representation instead of unigrams. However, the majority of early works on n-grams obtained only modest performance improvement. On the other hand, the use of information based on supporting terms or "contexts" of queries has been found to be promising. In particular, recent studies showed that using new context-dependent term weights improved the performance of relevance feedback (RF) retrieval compared with using traditional bag-of-words BM25 term weights. Calculation of the new term weights requires an estimation of the local probability of relevance of each query term occurrence. In previous studies, the estimation of this probability was based on unigrams that occur in the neighborhood of a query term. We explore an integration of the n-gram and context approaches by computing context-dependent term weights based on a mixture of unigrams and bigrams. Extensive experiments are performed using the title queries of the Text Retrieval Conference (TREC)-6, TREC-7, TREC-8, and TREC-2005 collections, for RF with relevance judgment of either the top 10 or top 20 documents of an initial retrieval. We identify some crucial elements needed in the use of bigrams in our methods, such as proper inverse document frequency (IDF) weighting of the bigrams and noise reduction by pruning bigrams with large document frequency values. We show that enhancing context-dependent term weights with bigrams is effective in further improving retrieval performance.