Search (12 results, page 1 of 1)

  • author_ss:"Sanderson, M."
  1. Petrelli, D.; Levin, S.; Beaulieu, M.; Sanderson, M.: Which user interaction for cross-language information retrieval? : design issues and reflections (2006) 0.03
    0.027245669 = product of:
      0.054491337 = sum of:
        0.0070626684 = product of:
          0.028250674 = sum of:
            0.028250674 = weight(_text_:based in 5053) [ClassicSimilarity], result of:
              0.028250674 = score(doc=5053,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.19973516 = fieldWeight in 5053, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5053)
          0.25 = coord(1/4)
        0.047428668 = product of:
          0.094857335 = sum of:
            0.094857335 = weight(_text_:assessment in 5053) [ClassicSimilarity], result of:
              0.094857335 = score(doc=5053,freq=2.0), product of:
                0.25917634 = queryWeight, product of:
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.04694356 = queryNorm
                0.36599535 = fieldWeight in 5053, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.52102 = idf(docFreq=480, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5053)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
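The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = sqrt(freq), and coord() factors then scale for the fraction of query clauses matched. A minimal sketch reproducing the figures shown for result 1 (the constants are taken directly from the tree above):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                # 1.4142135 for freq=2.0
    query_weight = idf * query_norm     # e.g. 3.0129938 * 0.04694356
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

query_norm = 0.04694356
based  = term_score(2.0, 3.0129938, query_norm, 0.046875)  # ~0.0282507
assess = term_score(2.0, 5.52102,  query_norm, 0.046875)   # ~0.0948573
# coord(matched/total) down-weights documents matching only some clauses
doc_score = (based * 0.25 + assess * 0.5) * 0.5            # ~0.0272457
print(doc_score)
```

Working the arithmetic through by hand gives the same intermediate values printed in the tree (0.028250674, 0.094857335, 0.027245669).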
    
    Abstract
    A novel and complex form of information access is cross-language information retrieval: searching for texts written in foreign languages based on native-language queries. Although the underlying technology for achieving such a search is relatively well understood, the appropriate interface design is not. The authors present three user evaluations undertaken during the iterative design of Clarity, a cross-language retrieval system for low-density languages, and show how the user-interaction design evolved in response to the results of usability tests. The first test was instrumental in identifying weaknesses in both functionality and interface; the second was run to determine whether the query translation should be shown; the final test was a global assessment focused on user-satisfaction criteria. Lessons were learned at every stage of the process, leading to a much more informed view of what a cross-language retrieval system should offer its users.
  2. Sanderson, M.: ¬The Reuters test collection (1996) 0.01
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  3. Aloteibi, S.; Sanderson, M.: Analyzing geographic query reformulation : an exploratory study (2014) 0.00
    
    Date
    26. 1.2014 18:48:22
  4. Purves, R.S.; Sanderson, M.: ¬A methodology to allow avalanche forecasting on an information retrieval system (1998) 0.00
    
    Abstract
    This paper presents adaptations and tests undertaken to allow an information retrieval (IR) system to forecast the likelihood of avalanches on a particular day. The forecasting process uses historical data on weather and avalanche conditions for a large number of days. A method for adapting these data into a form usable by a text-based IR system is first described, followed by tests showing the resulting system's accuracy to be equal to that of existing custom-built forecasting systems. From this, it is concluded that the adaptation methodology is effective at allowing such data to be used in a text-based IR system. A number of advantages of using an IR system for avalanche forecasting are also presented.
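The abstract does not spell out the adaptation step, but one plausible sketch of how numeric observations could be made usable by a text-based IR system is to discretize each measurement into a labelled bucket term. The feature names and bucket boundaries below are illustrative assumptions, not the authors' actual scheme:

```python
def to_pseudo_terms(obs):
    """Discretize numeric weather observations into text-like terms.
    Bucket boundaries are illustrative, not taken from the paper."""
    terms = []
    t = obs["temp_c"]           # air temperature, degrees Celsius
    if t < -10:   terms.append("temp_verycold")
    elif t < 0:   terms.append("temp_cold")
    else:         terms.append("temp_mild")
    s = obs["snow_cm"]          # 24h snowfall, cm
    if s == 0:    terms.append("snow_none")
    elif s < 20:  terms.append("snow_light")
    else:         terms.append("snow_heavy")
    w = obs["wind_kmh"]         # wind speed, km/h
    terms.append("wind_strong" if w >= 40 else "wind_calm")
    return terms

# Each historical day becomes a "document" of such terms; today's
# observations become the "query", and retrieval ranks similar past days.
print(to_pseudo_terms({"temp_c": -4, "snow_cm": 25, "wind_kmh": 50}))
# -> ['temp_cold', 'snow_heavy', 'wind_strong']
```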
  5. Tann, C.; Sanderson, M.: Are Web-based informational queries changing? (2009) 0.00
    
    Abstract
    This brief communication describes the results of a questionnaire examining certain aspects of the Web-based information seeking practices of university students. The results are contrasted with past work showing that queries to Web search engines can be assigned to one of a series of categories: navigational, informational, and transactional. The survey results suggest that a large group of queries, which in the past would have been classified as informational, have become at least partially navigational. We contend that this change has occurred because of the rise of large Web sites holding particular types of information, such as Wikipedia and the Internet Movie Database.
  6. Sanderson, M.: Revisiting h measured on UK LIS and IR academics (2008) 0.00
    
    Abstract
    A brief communication appearing in this journal ranked UK-based LIS and (some) IR academics by their h-index using data derived from the Thomson ISI Web of Science(TM) (WoS). In this brief communication, the same academics were re-ranked, using other popular citation databases. It was found that for academics who publish more in computer science forums, their h was significantly different due to highly cited papers missed by WoS; consequently, their rank changed substantially. The study was widened to a broader set of UK-based LIS and IR academics in which results showed similar statistically significant differences. A variant of h, hmx, was introduced that allowed a ranking of the academics using all citation databases together.
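The h measure compared across citation databases above has a standard definition: an author's h is the largest h such that h of their papers have at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """h = largest h such that h papers have >= h citations each."""
    h = 0
    # rank papers by citation count, descending; h is the last rank
    # at which the count still meets or exceeds the rank
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3]))  # -> 3 (three papers with >= 3 citations)
```

Because h depends only on citation counts, missing highly cited papers in one database (as the study found for WoS) directly lowers the computed value.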
  7. Wan-Chik, R.; Clough, P.; Sanderson, M.: Investigating religious information searching through analysis of a search engine log (2013) 0.00
    
    Abstract
    In this paper we present results from an investigation of religious information searching based on analyzing log files from a large general-purpose search engine. From approximately 15 million queries, we identified 124,422 that were part of 60,759 user sessions. We present a method for categorizing queries based on related terms and show differences in search patterns between religious searches and web searching more generally. We also investigate the search patterns found in queries related to 5 religions: Christianity, Hinduism, Islam, Buddhism, and Judaism. Different search patterns are found to emerge. Results from this study complement existing studies of religious information searching and provide a level of detailed analysis not reported to date. We show, for example, that sessions involving religion-related queries tend to last longer, that the lengths of religion-related queries are greater, and that the number of unique URLs clicked is higher when compared to all queries. The results of the study can serve to provide information on what this large population of users is actually searching for.
  8. Yulianti, E.; Huspi, S.; Sanderson, M.: Tweet-biased summarization (2016) 0.00
    
    Abstract
    We examined whether the microblog comments given by people after reading a web document could be exploited to improve the accuracy of a web document summarization system. We examined the effect of social information (i.e., tweets) on the accuracy of the generated summaries by comparing the user preference for TBS (tweet-biased summary) with GS (generic summary). The result of crowdsourcing-based evaluation shows that the user preference for TBS was significantly higher than for GS. We also took random samples of the documents to assess the performance of summaries in a traditional evaluation using ROUGE, in which TBS was, in general, again shown to be better than GS. We further analyzed the influence of the number of tweets pointing to a web document on summarization accuracy, finding a positive moderate correlation between the number of tweets pointing to a web document and the performance of the generated TBS as measured by user preference. The results show that incorporating social information into the summary generation process can improve the accuracy of the summary. The reasons people give for choosing one summary over another in a crowdsourcing-based evaluation are also presented in this article.
  9. Petrelli, D.; Beaulieu, M.; Sanderson, M.; Demetriou, G.; Herring, P.; Hansen, P.: Observing users, designing clarity : a case study on the user-centered design of a cross-language information retrieval system (2004) 0.00
    
    Abstract
    This report presents a case study of the development of an interface for a novel and complex form of document retrieval: searching for texts written in foreign languages based on native-language queries. Although the underlying technology for achieving such a search is relatively well understood, the appropriate interface design is not. A study involving users from the beginning of the design process is described; it covers an initial examination of user needs and tasks, preliminary design and testing of interface components, building, testing, and refining the interface, and, finally, conducting usability tests of the system. Lessons were learned at every stage of the process, leading to a much more informed view of how such an interface should be built.
  10. Clough, P.; Sanderson, M.: User experiments with the Eurovision Cross-Language Image Retrieval System (2006) 0.00
    
    Abstract
    In this article the authors present Eurovision, a text-based system for cross-language (CL) image retrieval. The system is evaluated by multilingual users for two search tasks with the system configured in English and five other languages. To the authors' knowledge, this is the first published set of user experiments for CL image retrieval. They show that (a) it is possible to create a usable multilingual search engine using little knowledge of any language other than English, (b) categorizing images assists the user's search, and (c) there are differences in the way users search between the proposed search tasks. Based on the two search tasks and user feedback, they describe important aspects of any CL image retrieval system.
  11. Aldosari, M.; Sanderson, M.; Tam, A.; Uitdenbogerd, A.L.: Understanding collaborative search for places of interest (2016) 0.00
    
    Abstract
    Finding a place of interest (e.g., a restaurant, hotel, or attraction) is often related to a group information need; however, the actual multiparty collaboration in such searches has not been explored, and little is known about its significance and related practices. We surveyed 100 computer science students and found that 94% (of respondents) searched for places online; 87% had done so as part of a group. Search for a place by multiple active participants was experienced by 78%, with group sizes typically being 2 or 3. Search occurred in a range of settings with both desktop PCs and mobile devices. Difficulties were reported with coordinating tasks, sharing results, and making decisions. The results show that finding a place of interest is a group-based search quite different from other multiparty information-seeking activities. They suggest that local search systems, their interfaces, and the devices that access them can be made more usable for collaborative search if they include support for coordination, sharing of results, and decision making.
  12. Spina, D.; Trippas, J.R.; Cavedon, L.; Sanderson, M.: Extracting audio summaries to support effective spoken document search (2017) 0.00
    
    Abstract
    We address the challenge of extracting query biased audio summaries from podcasts to support users in making relevance decisions in spoken document search via an audio-only communication channel. We performed a crowdsourced experiment that demonstrates that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries or "snippets" for supporting users in making relevance judgments against a query. In particular, the results show that summaries generated from ASR transcripts are comparable, in utility and user-judged preference, to spoken summaries generated from error-free manual transcripts of the same collection. We also observed that content-based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. We describe a methodology for constructing a new test collection, which we have made publicly available.