Search (6 results, page 1 of 1)

  • author_ss:"Tait, J."
  1. Robertson, S.; Tait, J.: In Memoriam Karen Sparck Jones (2007) 0.03
    0.031201486 = product of:
      0.09360445 = sum of:
        0.012701439 = weight(_text_:of in 2927) [ClassicSimilarity], result of:
          0.012701439 = score(doc=2927,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 2927, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=2927)
        0.0490556 = weight(_text_:systems in 2927) [ClassicSimilarity], result of:
          0.0490556 = score(doc=2927,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.4074492 = fieldWeight in 2927, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.09375 = fieldNorm(doc=2927)
        0.031847417 = product of:
          0.063694835 = sum of:
            0.063694835 = weight(_text_:22 in 2927) [ClassicSimilarity], result of:
              0.063694835 = score(doc=2927,freq=2.0), product of:
                0.13719016 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03917671 = queryNorm
                0.46428138 = fieldWeight in 2927, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2927)
          0.5 = coord(1/2)
      0.33333334 = coord(3/9)
    
    Abstract
    This note also appears in the Journal of the American Society for Information Science and Technology.
    Date
    26.12.2007 14:22:47
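  The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) formula: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the per-document sum is scaled by the coordination factor. A minimal sketch reproducing the score of result 1 from the quantities shown (queryNorm is taken as given, since it depends on the full nine-clause query, which is not shown):

  ```python
  import math

  # Constants read off the explain tree for doc 2927.
  QUERY_NORM = 0.03917671
  MAX_DOCS = 44218

  def idf(doc_freq: int) -> float:
      # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
      return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

  def term_score(freq: float, doc_freq: int, field_norm: float) -> float:
      query_weight = idf(doc_freq) * QUERY_NORM                     # idf * queryNorm
      field_weight = math.sqrt(freq) * idf(doc_freq) * field_norm   # tf * idf * fieldNorm
      return query_weight * field_weight

  s_of      = term_score(2.0, 25162, 0.09375)        # weight(_text_:of in 2927)
  s_systems = term_score(2.0, 5561,  0.09375)        # weight(_text_:systems in 2927)
  s_22      = term_score(2.0, 3622,  0.09375) * 0.5  # inner coord(1/2)
  score     = (s_of + s_systems + s_22) * (3.0 / 9.0)  # outer coord(3/9) -> 0.0312...
  ```

  Three of the nine query clauses match this document, hence the final coord(3/9) scaling.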
  2. Tsai, C.-F.; McGarry, K.; Tait, J.: Qualitative evaluation of automatic assignment of keywords to images (2006) 0.01
    0.011395473 = product of:
      0.051279627 = sum of:
        0.015876798 = weight(_text_:of in 963) [ClassicSimilarity], result of:
          0.015876798 = score(doc=963,freq=18.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25915858 = fieldWeight in 963, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=963)
        0.03540283 = weight(_text_:systems in 963) [ClassicSimilarity], result of:
          0.03540283 = score(doc=963,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.29405114 = fieldWeight in 963, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=963)
      0.22222222 = coord(2/9)
    
    Abstract
    In image retrieval, most systems lack user-centred evaluation, since they are assessed against some chosen ground truth dataset. Precision and recall results computed against this ground truth are taken as an acceptable surrogate for the judgment of real users. Much current research focuses on automatically assigning keywords to images to enhance retrieval effectiveness. However, evaluation methods are usually based on system-level assessment, e.g. classification accuracy against some chosen ground truth dataset. In this paper, we present a qualitative evaluation methodology for automatic image indexing systems. The automatic indexing task is formulated as one of image annotation, or automatic metadata generation for images. The evaluation comprises two individual methods. First, the automatic annotation results are assessed by human subjects. Second, the subjects are asked to annotate some chosen images as a test set, and their annotations are used as ground truth; the system is then run on the test set and its annotation results are judged against that ground truth. Most systems on which user-centred evaluation is conducted report only one of these methods; we believe both need to be considered for a full evaluation. We also provide an example evaluation of our system based on this methodology. According to this study, the proposed evaluation methodology is able to provide a deeper understanding of the system's performance.
  3. Tait, J.: CALS and its implications for the library and information retrieval communities (1994) 0.01
    0.010392102 = product of:
      0.04676446 = sum of:
        0.018148692 = weight(_text_:of in 1661) [ClassicSimilarity], result of:
          0.018148692 = score(doc=1661,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.29624295 = fieldWeight in 1661, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1661)
        0.028615767 = weight(_text_:systems in 1661) [ClassicSimilarity], result of:
          0.028615767 = score(doc=1661,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 1661, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1661)
      0.22222222 = coord(2/9)
    
    Abstract
    This paper provides a brief introduction to the US Dept. of Defense CALS (Computer-aided Acquisition and Logistics Support) programme and explores the implications it is likely to have for the library and information retrieval communities. CALS includes a well-developed set of standards for the electronic representation and delivery of documents containing all sorts of graphics and multi-font texts, and these seem set to dominate the electronic publishing and document delivery market in the very near future.
    Source
    Information retrieval: new systems and current research. Proceedings of the 15th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Glasgow, 1993. Ed.: R. Leon
  4. Salampasis, M.; Tait, J.; Bloor, C.: Evaluation of information-seeking performance in hypermedia digital libraries (1998) 0.00
    0.0023284785 = product of:
      0.020956306 = sum of:
        0.020956306 = weight(_text_:of in 3759) [ClassicSimilarity], result of:
          0.020956306 = score(doc=3759,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.34207192 = fieldWeight in 3759, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3759)
      0.11111111 = coord(1/9)
    
    Abstract
    Discusses current methods for evaluating information retrieval, based on recall (R) and precision (P), and examines their suitability for evaluating the performance of hypermedia digital libraries. Proposes a new quantitative evaluation methodology, based on the structural analysis of hypermedia networks and the navigational and search state patterns of information seekers. Although the proposed methodology retains some of the characteristics of R and P evaluation, it may be more suitable than they are for measuring the performance of information-seeking environments in which information seekers can utilize arbitrary mixtures of browsing and query-based searching strategies.
  5. Liang, S.-F.; Devlin, S.; Tait, J.: Investigating sentence weighting components for automatic summarisation (2007) 0.00
    0.001577849 = product of:
      0.014200641 = sum of:
        0.014200641 = weight(_text_:of in 899) [ClassicSimilarity], result of:
          0.014200641 = score(doc=899,freq=10.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.23179851 = fieldWeight in 899, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=899)
      0.11111111 = coord(1/9)
    
    Abstract
    The work described here initially formed part of a triangulation exercise to establish the effectiveness of the Query Term Order algorithm. It subsequently proved to be a reliable indicator for summarising English web documents. We utilised the human summaries from the Document Understanding Conference data, and generated queries automatically for testing the QTO algorithm. Six sentence weighting schemes that made use of Query Term Frequency and QTO were constructed to produce system summaries, and this paper explains the process of combining and balancing the weighting components. The summaries produced were evaluated by the ROUGE-1 metric, and the results showed that using QTO in a weighting combination resulted in the best performance. We also found that using a combination of more weighting components always produced improved performance compared to any single weighting component.
  6. Robertson, S.; Tait, J.: Karen Sparck Jones (2008) 0.00
    0.0014112709 = product of:
      0.012701439 = sum of:
        0.012701439 = weight(_text_:of in 1596) [ClassicSimilarity], result of:
          0.012701439 = score(doc=1596,freq=2.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.20732689 = fieldWeight in 1596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=1596)
      0.11111111 = coord(1/9)
    
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.5, S.852-854