Search (9 results, page 1 of 1)

  • author_ss:"Sanderson, M."
  1. Aloteibi, S.; Sanderson, M.: Analyzing geographic query reformulation : an exploratory study (2014) 0.03
    0.030893266 = product of:
      0.077233166 = sum of:
        0.041830003 = weight(_text_:it in 1177) [ClassicSimilarity], result of:
          0.041830003 = score(doc=1177,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27674085 = fieldWeight in 1177, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1177)
        0.035403162 = weight(_text_:22 in 1177) [ClassicSimilarity], result of:
          0.035403162 = score(doc=1177,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.19345059 = fieldWeight in 1177, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1177)
      0.4 = coord(2/5)
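
    The breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight * fieldWeight, the contributions are summed, and the sum is scaled by the coordination factor coord(matching terms / query terms). As a sanity check of the arithmetic, the short Python sketch below recomputes the score of entry 1 from the factors listed in the tree; the numbers are copied from the tree itself, and the helper function name is ours, not part of the Lucene API.

import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """Recompute one weight(...) node of a ClassicSimilarity explain tree.

    score = queryWeight * fieldWeight
          = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    """
    query_weight = idf * query_norm                     # queryWeight
    field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.052260913

# weight(_text_:it in 1177): freq=6.0, idf=2.892262, fieldNorm=0.0390625
w_it = classic_term_score(6.0, 2.892262, QUERY_NORM, 0.0390625)   # ~0.0418

# weight(_text_:22 in 1177): freq=2.0, idf=3.5018296, fieldNorm=0.0390625
w_22 = classic_term_score(2.0, 3.5018296, QUERY_NORM, 0.0390625)  # ~0.0354

# coord(2/5): only 2 of the 5 query terms matched this document
total = (w_it + w_22) * (2.0 / 5.0)
print(total)  # ~0.030893266, the document score listed for entry 1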
    
    Abstract
    Search engine users typically engage in multiquery sessions in their quest to fulfill their information needs. Despite a plethora of research findings suggesting that a significant group of users look for information within a specific geographical scope, existing reformulation studies lack a focused analysis of how users reformulate geographic queries. This study comprehensively investigates the ways in which users reformulate such needs in an attempt to fill this gap in the literature. Reformulated sessions were sampled from a query log of a major search engine to extract 2,400 entries that were manually inspected to filter geo sessions. This filter identified 471 search sessions that included geographical intent, and these sessions were analyzed quantitatively and qualitatively. The results revealed that one in five of the users who reformulated their queries were looking for geographically related information. They reformulated their queries by changing the content of the query rather than the structure. Users were not following a unified sequence of modifications and instead performed a single reformulation action. However, in some cases it was possible to anticipate their next move. A number of tasks in geo modifications were identified, including standard, multi-needs, multi-places, and hybrid approaches. The research concludes that it is important to specialize query reformulation studies to focus on particular query types rather than generically analyzing them, as it is apparent that geographic queries have their special reformulation characteristics.
    Date
    26. 1.2014 18:48:22
  2. Vrettas, G.; Sanderson, M.: Conferences versus journals in computer science (2015) 0.01
    0.011592272 = product of:
      0.057961356 = sum of:
        0.057961356 = weight(_text_:it in 2347) [ClassicSimilarity], result of:
          0.057961356 = score(doc=2347,freq=8.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.38346338 = fieldWeight in 2347, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=2347)
      0.2 = coord(1/5)
    
    Abstract
    The question of which type of computer science (CS) publication (conference or journal) is likely to result in more citations for a published paper is addressed. A series of data sets are examined and joined in order to analyze the citations of over 195,000 conference papers and 108,000 journal papers. Two means of evaluating the citations of journals and conferences are explored: h5 and average citations per paper; it was found that h5 has certain biases that make it a difficult measure to use (despite it being the main measure used by Google Scholar). Results from the analysis show that CS, as a discipline, values conferences as a publication venue more highly than any other academic field of study. The analysis also shows that a small number of elite CS conferences have the highest average paper citation rate of any publication type, although overall, citation rates in conferences are no higher than in journals. It is also shown that the length of a paper is correlated with citation rate.
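
    For readers unfamiliar with the two venue measures compared above, the sketch below computes both from a list of per-paper citation counts: the h-index (the largest h such that h papers have at least h citations each; h5 restricts the papers to the last five complete years, as Google Scholar does) and the average citations per paper. The example citation counts are invented for illustration and are not data from the paper.

def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def mean_citations(citations):
    """Average citations per paper."""
    return sum(citations) / len(citations) if citations else 0.0

# Hypothetical venue: citation counts of papers published in the last
# five complete years (the window used for h5).
venue = [45, 30, 22, 9, 9, 4, 1, 0]
print(h_index(venue))         # 5
print(mean_citations(venue))  # 15.0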
  3. Sanderson, M.: ¬The Reuters test collection (1996) 0.01
    0.011329013 = product of:
      0.05664506 = sum of:
        0.05664506 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
          0.05664506 = score(doc=6971,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.30952093 = fieldWeight in 6971, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=6971)
      0.2 = coord(1/5)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  4. Al-Maskari, A.; Sanderson, M.: ¬A review of factors influencing user satisfaction in information retrieval (2010) 0.01
    0.009563136 = product of:
      0.04781568 = sum of:
        0.04781568 = weight(_text_:it in 3447) [ClassicSimilarity], result of:
          0.04781568 = score(doc=3447,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.31634116 = fieldWeight in 3447, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3447)
      0.2 = coord(1/5)
    
    Abstract
    The authors investigate factors influencing user satisfaction in information retrieval. It is evident from this study that user satisfaction is a subjective variable, which can be influenced by several factors such as system effectiveness, user effectiveness, user effort, and user characteristics and expectations. Therefore, information retrieval evaluators should consider all these factors in obtaining user satisfaction and in using it as a criterion of system effectiveness. Previous studies have conflicting conclusions on the relationship between user satisfaction and system effectiveness; this study has substantiated these findings and supports using user satisfaction as a criterion of system effectiveness.
  5. Purves, R.S.; Sanderson, M.: ¬A methodology to allow avalanche forecasting on an information retrieval system (1998) 0.01
    0.006762158 = product of:
      0.03381079 = sum of:
        0.03381079 = weight(_text_:it in 1073) [ClassicSimilarity], result of:
          0.03381079 = score(doc=1073,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.22368698 = fieldWeight in 1073, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1073)
      0.2 = coord(1/5)
    
    Abstract
    This paper presents adaptations and tests undertaken to allow an information retrieval (IR) system to forecast the likelihood of avalanches on a particular day. The forecasting process uses historical data of the weather and avalanche conditions for a large number of days. A method for adapting these data into a form usable by a text-based IR system is first described, followed by tests showing the resulting system's accuracy to be equal to that of existing 'custom built' forecasting systems. From this, it is concluded that the adaptation methodology is effective at allowing such data to be used in a text-based IR system. A number of advantages of using an IR system for avalanche forecasting are also presented.
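
    The abstract does not say how the numeric weather data were encoded, but the general trick for making such data usable by a text-based IR engine is to discretize each measurement into index terms. The sketch below only illustrates that idea under invented bin boundaries and field names; it is not the encoding used by Purves and Sanderson.

def observation_to_terms(obs):
    """Turn one day's numeric weather observation into pseudo-text terms
    that a text-based IR engine can index and match.

    Bin boundaries and field names are illustrative assumptions only.
    """
    terms = []
    # Air temperature (degrees C) binned into coarse bands.
    if obs["temp_c"] < -5:
        terms.append("temp_verycold")
    elif obs["temp_c"] < 0:
        terms.append("temp_cold")
    else:
        terms.append("temp_mild")
    # Wind speed (km/h).
    terms.append("wind_high" if obs["wind_kmh"] >= 40 else "wind_low")
    # New snowfall over the last 24 hours (cm).
    terms.append("snow_heavy" if obs["new_snow_cm"] >= 20 else "snow_light")
    return terms

# Each past day with a known avalanche outcome becomes a "document" of such
# terms; today's observation becomes the "query", and retrieval ranks the
# most similar past days.
print(observation_to_terms({"temp_c": -3, "wind_kmh": 55, "new_snow_cm": 25}))
# ['temp_cold', 'wind_high', 'snow_heavy']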
  6. Petrelli, D.; Beaulieu, M.; Sanderson, M.; Demetriou, G.; Herring, P.; Hansen, P.: Observing users, designing clarity : a case study on the user-centered design of a cross-language information retrieval system (2004) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 2506) [ClassicSimilarity], result of:
          0.028980678 = score(doc=2506,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 2506, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=2506)
      0.2 = coord(1/5)
    
    Abstract
    This report presents a case study of the development of an interface for a novel and complex form of document retrieval: searching for texts written in foreign languages based on native language queries. Although the underlying technology for achieving such a search is relatively well understood, the appropriate interface design is not. A study involving users from the beginning of the design process is described, and it covers initial examination of user needs and tasks, preliminary design and testing of interface components, building, testing, and refining the interface, and, finally, conducting usability tests of the system. Lessons are learned at every stage of the process, leading to a much more informed view of how such an interface should be built.
  7. Clough, P.; Sanderson, M.: User experiments with the Eurovision Cross-Language Image Retrieval System (2006) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 5052) [ClassicSimilarity], result of:
          0.028980678 = score(doc=5052,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 5052, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=5052)
      0.2 = coord(1/5)
    
    Abstract
    In this article the authors present Eurovision, a text-based system for cross-language (CL) image retrieval. The system is evaluated by multilingual users for two search tasks with the system configured in English and five other languages. To the authors' knowledge, this is the first published set of user experiments for CL image retrieval. They show that (a) it is possible to create a usable multilingual search engine using little knowledge of any language other than English, (b) categorizing images assists the user's search, and (c) there are differences in the way users search between the proposed search tasks. Based on the two search tasks and user feedback, they describe important aspects of any CL image retrieval system.
  8. Sanderson, M.: Revisiting h measured on UK LIS and IR academics (2008) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 1867) [ClassicSimilarity], result of:
          0.028980678 = score(doc=1867,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 1867, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=1867)
      0.2 = coord(1/5)
    
    Abstract
    A brief communication appearing in this journal ranked UK-based LIS and (some) IR academics by their h-index using data derived from the Thomson ISI Web of Science(TM) (WoS). In this brief communication, the same academics were re-ranked, using other popular citation databases. It was found that for academics who publish more in computer science forums, their h was significantly different due to highly cited papers missed by WoS; consequently, their rank changed substantially. The study was widened to a broader set of UK-based LIS and IR academics in which results showed similar statistically significant differences. A variant of h, hmx, was introduced that allowed a ranking of the academics using all citation databases together.
  9. Lee, W.M.; Sanderson, M.: Analyzing URL queries (2010) 0.01
    0.005796136 = product of:
      0.028980678 = sum of:
        0.028980678 = weight(_text_:it in 4105) [ClassicSimilarity], result of:
          0.028980678 = score(doc=4105,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.19173169 = fieldWeight in 4105, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=4105)
      0.2 = coord(1/5)
    
    Abstract
    This study investigated a relatively unexamined query type, queries composed of URLs. The extent, variation, and user click-through behavior were examined to determine the intent behind URL queries. The study made use of a search log from which URL queries were identified and selected for both qualitative and quantitative analyses. It was found that URL queries accounted for approximately 17% of the sample. There were statistically significant differences between URL queries and non-URL queries in the following attributes: mean query length; mean number of tokens per query; and mean number of clicks per query. Users issuing such queries clicked on fewer result list items higher up the ranking compared to non-URL queries. Classification indicated that nearly 86% of queries were navigational in intent, with informational and transactional queries representing about 7% of URL queries each. This is in contrast to past research that suggested that URL queries were 100% navigational. The conclusions of this study are that URL queries are relatively common and that simply returning the page that matches a user's URL is not an optimal strategy.