Search (14 results, page 1 of 1)

  • author_ss:"Sanderson, M."
  1. Sanderson, M.: The Reuters test collection (1996) 0.01
    0.008956701 = product of:
      0.031348452 = sum of:
        0.011718053 = product of:
          0.058590267 = sum of:
            0.058590267 = weight(_text_:retrieval in 6971) [ClassicSimilarity], result of:
              0.058590267 = score(doc=6971,freq=8.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.5347345 = fieldWeight in 6971, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6971)
          0.2 = coord(1/5)
        0.0196304 = product of:
          0.0392608 = sum of:
            0.0392608 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
              0.0392608 = score(doc=6971,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.30952093 = fieldWeight in 6971, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6971)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
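
    The nested breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a sanity check, the leaf weight for the term "retrieval" can be reproduced from the printed factors. A minimal sketch, using Lucene's documented ClassicSimilarity formulas and taking queryNorm and fieldNorm as given from the output (fieldNorm is a byte-quantized 1/sqrt(field length), so it cannot be recomputed from the tree alone):

        import math

        # ClassicSimilarity pieces, as printed in the explain tree:
        #   idf(t)      = 1 + ln(maxDocs / (docFreq + 1))
        #   tf(t in d)  = sqrt(termFreq)
        #   queryWeight = idf * queryNorm
        #   fieldWeight = tf * idf * fieldNorm
        #   leaf score  = queryWeight * fieldWeight
        doc_freq, max_docs = 5836, 44218
        query_norm = 0.03622214   # copied from the output above
        field_norm = 0.0625       # copied from the output above
        term_freq = 8.0

        idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.024915
        tf = math.sqrt(term_freq)                      # 2.828427
        query_weight = idf * query_norm                # 0.109568894 (queryWeight)
        field_weight = tf * idf * field_norm           # 0.5347345 (fieldWeight)
        leaf = query_weight * field_weight             # 0.058590267
        print(leaf * 0.2)  # coord(1/5) -> 0.011718053, as two levels up in the tree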
    
    Abstract
    Describes the Reuters test collection, which at 22,173 references is significantly larger than most traditional test collections. In addition, Reuters has none of the recall calculation problems normally associated with some of the larger test collections available. Explains the method derived by D.D. Lewis to perform retrieval experiments on the Reuters collection, and illustrates the use of the collection through some simple retrieval experiments that compare the performance of stemming algorithms.
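
    The stemming comparison mentioned above can be pictured with a couple of off-the-shelf stemmers; NLTK is an assumed choice here, since the abstract does not say which implementations were tested:

        from nltk.stem import PorterStemmer, LancasterStemmer

        porter, lancaster = PorterStemmer(), LancasterStemmer()
        for word in ["retrieval", "retrieved", "collections", "experimental"]:
            # the two algorithms conflate word forms with different aggressiveness
            print(word, porter.stem(word), lancaster.stem(word))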
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  2. Al-Maskari, A.; Sanderson, M.: A review of factors influencing user satisfaction in information retrieval (2010) 0.01
    0.005712935 = product of:
      0.03999054 = sum of:
        0.03999054 = product of:
          0.099976346 = sum of:
            0.044398077 = weight(_text_:retrieval in 3447) [ClassicSimilarity], result of:
              0.044398077 = score(doc=3447,freq=6.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.40520695 = fieldWeight in 3447, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3447)
            0.055578265 = weight(_text_:system in 3447) [ClassicSimilarity], result of:
              0.055578265 = score(doc=3447,freq=8.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.4871716 = fieldWeight in 3447, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3447)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
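
    Entries matching more than one query term, like this one, sum their leaf weights and apply two coord() factors: coord(2/5) because 2 of the 5 clauses in the inner query matched, and coord(1/7) because 1 of the 7 top-level clauses matched. A quick check of the chain (last-digit differences come from rounding in the printed leaves):

        leaf_retrieval = 0.044398077  # weight(_text_:retrieval in 3447)
        leaf_system    = 0.055578265  # weight(_text_:system in 3447)

        inner = (leaf_retrieval + leaf_system) * 0.4  # coord(2/5)
        print(inner * 0.14285715)                     # coord(1/7) -> ~0.005712935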
    
    Abstract
    The authors investigate factors influencing user satisfaction in information retrieval. It is evident from this study that user satisfaction is a subjective variable, which can be influenced by several factors such as system effectiveness, user effectiveness, user effort, and user characteristics and expectations. Therefore, information retrieval evaluators should consider all these factors when obtaining user satisfaction and when using it as a criterion of system effectiveness. Previous studies have reached conflicting conclusions about the relationship between user satisfaction and system effectiveness; this study has substantiated those findings and supports using user satisfaction as a criterion of system effectiveness.
  3. Purves, R.S.; Sanderson, M.: A methodology to allow avalanche forecasting on an information retrieval system (1998) 0.01
    0.005622244 = product of:
      0.039355706 = sum of:
        0.039355706 = product of:
          0.09838927 = sum of:
            0.036250874 = weight(_text_:retrieval in 1073) [ClassicSimilarity], result of:
              0.036250874 = score(doc=1073,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.33085006 = fieldWeight in 1073, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1073)
            0.06213839 = weight(_text_:system in 1073) [ClassicSimilarity], result of:
              0.06213839 = score(doc=1073,freq=10.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.5446744 = fieldWeight in 1073, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1073)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper presents adaptations and tests undertaken to allow an information retrieval (IR) system to forecast the likelihood of avalanches on a particular day. The forecasting process uses historical data of the weather and avalanche conditions for a large number of days. A method for adapting these data into a form usable by a text-based IR system is first described, followed by tests showing the resulting system's accuracy to be equal to existing 'custom built' forecasting systems. From this, it is concluded that the adaptation methodology is effective at allowing such data to be used in a text-based IR system. A number of advantages in using an IR system for avalanche forecasting are also presented.
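
    The abstract does not spell out how numeric weather data become "text". One plausible reading, sketched below, is to discretize each variable into a token so that a day becomes a pseudo-document a text IR engine can index; the field names and binning scheme are assumptions for illustration, not the paper's actual method:

        def day_to_terms(obs, n_bins=10):
            """Map each (variable, value) pair, pre-scaled to [0, 1), to a token."""
            terms = []
            for name, value in obs.items():
                bin_idx = min(int(value * n_bins), n_bins - 1)  # clamp to last bin
                terms.append(f"{name}_bin{bin_idx}")
            return terms

        # A new day acts as the "query"; highly ranked historical days have similar
        # conditions, and their recorded avalanche outcomes inform the forecast.
        print(day_to_terms({"air_temp": 0.42, "wind_speed": 0.87, "snowfall": 0.05}))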
  4. Clough, P.; Sanderson, M.: User experiments with the Eurovision Cross-Language Image Retrieval System (2006) 0.01
    0.0055545247 = product of:
      0.03888167 = sum of:
        0.03888167 = product of:
          0.09720418 = sum of:
            0.0439427 = weight(_text_:retrieval in 5052) [ClassicSimilarity], result of:
              0.0439427 = score(doc=5052,freq=8.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.40105087 = fieldWeight in 5052, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5052)
            0.053261478 = weight(_text_:system in 5052) [ClassicSimilarity], result of:
              0.053261478 = score(doc=5052,freq=10.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.46686378 = fieldWeight in 5052, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5052)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article the authors present Eurovision, a text-based system for cross-language (CL) image retrieval. The system is evaluated by multilingual users for two search tasks with the system configured in English and five other languages. To the authors' knowledge, this is the first published set of user experiments for CL image retrieval. They show that (a) it is possible to create a usable multilingual search engine using little knowledge of any language other than English, (b) categorizing images assists the user's search, and (c) there are differences in the way users search between the proposed search tasks. Based on the two search tasks and user feedback, they describe important aspects of any CL image retrieval system.
  5. Petrelli, D.; Levin, S.; Beaulieu, M.; Sanderson, M.: Which user interaction for cross-language information retrieval? : design issues and reflections (2006) 0.00
    0.004435898 = product of:
      0.031051284 = sum of:
        0.031051284 = product of:
          0.07762821 = sum of:
            0.0439427 = weight(_text_:retrieval in 5053) [ClassicSimilarity], result of:
              0.0439427 = score(doc=5053,freq=8.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.40105087 = fieldWeight in 5053, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5053)
            0.033685513 = weight(_text_:system in 5053) [ClassicSimilarity], result of:
              0.033685513 = score(doc=5053,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.29527056 = fieldWeight in 5053, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5053)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    A novel and complex form of information access is cross-language information retrieval: searching for texts written in foreign languages based on native-language queries. Although the underlying technology for achieving such a search is relatively well understood, the appropriate interface design is not. The authors present three user evaluations undertaken during the iterative design of Clarity, a cross-language retrieval system for low-density languages, and show how the user-interaction design evolved depending on the results of usability tests. The first test was instrumental in identifying weaknesses in both functionality and interface; the second was run to determine whether query translation should be shown or not; the final was a global assessment focused on user satisfaction criteria. Lessons were learned at every stage of the process, leading to a much more informed view of what a cross-language retrieval system should offer to users.
  6. Petrelli, D.; Beaulieu, M.; Sanderson, M.; Demetriou, G.; Herring, P.; Hansen, P.: Observing users, designing clarity : a case study an the user-centered design of a cross-language information retrieval system (2004) 0.00
    0.0037004396 = product of:
      0.025903076 = sum of:
        0.025903076 = product of:
          0.06475769 = sum of:
            0.03107218 = weight(_text_:retrieval in 2506) [ClassicSimilarity], result of:
              0.03107218 = score(doc=2506,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.2835858 = fieldWeight in 2506, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2506)
            0.033685513 = weight(_text_:system in 2506) [ClassicSimilarity], result of:
              0.033685513 = score(doc=2506,freq=4.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.29527056 = fieldWeight in 2506, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2506)
          0.4 = coord(2/5)
      0.14285715 = coord(1/7)
    
    Abstract
    This report presents a case study of the development of an interface for a novel and complex form of document retrieval: searching for texts written in foreign languages based on native-language queries. Although the underlying technology for achieving such a search is relatively well understood, the appropriate interface design is not. A study involving users from the beginning of the design process is described, and it covers initial examination of user needs and tasks, preliminary design and testing of interface components, building, testing, and refining the interface, and, finally, conducting usability tests of the system. Lessons are learned at every stage of the process, leading to a much more informed view of how such an interface should be built.
  7. Aloteibi, S.; Sanderson, M.: Analyzing geographic query reformulation : an exploratory study (2014) 0.00
    0.0017527144 = product of:
      0.0122690005 = sum of:
        0.0122690005 = product of:
          0.024538001 = sum of:
            0.024538001 = weight(_text_:22 in 1177) [ClassicSimilarity], result of:
              0.024538001 = score(doc=1177,freq=2.0), product of:
                0.12684377 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03622214 = queryNorm
                0.19345059 = fieldWeight in 1177, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1177)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    26.1.2014 18:48:22
  8. Crestani, F.; Ruthven, I.; Sanderson, M.; Rijsbergen, C.J. van: The troubles with using a logical model of IR on a large collection of documents : experimenting retrieval by logical imaging on TREC (1996) 0.00
    0.0014796278 = product of:
      0.010357394 = sum of:
        0.010357394 = product of:
          0.051786967 = sum of:
            0.051786967 = weight(_text_:retrieval in 7522) [ClassicSimilarity], result of:
              0.051786967 = score(doc=7522,freq=4.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.47264296 = fieldWeight in 7522, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7522)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
  9. Sanderson, M.; Ruthven, I.: Report on the Glasgow IR group (glair4) submission (1997) 0.00
    0.0012555057 = product of:
      0.00878854 = sum of:
        0.00878854 = product of:
          0.0439427 = sum of:
            0.0439427 = weight(_text_:retrieval in 3088) [ClassicSimilarity], result of:
              0.0439427 = score(doc=3088,freq=2.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.40105087 = fieldWeight in 3088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3088)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Source
    The Fifth Text Retrieval Conference (TREC-5). Eds.: E.M. Voorhees and D.K. Harman
  10. Sanderson, M.; Lawrie, D.: Building, testing, and applying concept hierarchies (2000) 0.00
    0.0012555057 = product of:
      0.00878854 = sum of:
        0.00878854 = product of:
          0.0439427 = sum of:
            0.0439427 = weight(_text_:retrieval in 37) [ClassicSimilarity], result of:
              0.0439427 = score(doc=37,freq=8.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.40105087 = fieldWeight in 37, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=37)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Series
    The Kluwer international series on information retrieval; 7
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
    Theme
    Semantic context in indexing and retrieval
  11. Bergman, O.; Whittaker, S.; Sanderson, M.; Nachmias, R.; Ramamoorthy, A.: ¬The effect of folder structure on personal file navigation (2010) 0.00
    0.0010462549 = product of:
      0.007323784 = sum of:
        0.007323784 = product of:
          0.03661892 = sum of:
            0.03661892 = weight(_text_:retrieval in 4114) [ClassicSimilarity], result of:
              0.03661892 = score(doc=4114,freq=8.0), product of:
                0.109568894 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03622214 = queryNorm
                0.33420905 = fieldWeight in 4114, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4114)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    Folder navigation is the main way that personal computer users retrieve their own files. People dedicate considerable time to creating systematic structures to facilitate such retrieval. Despite the prevalence of both manual organization and navigation, there is very little systematic data about how people actually carry out navigation, or about the relation between organization structure and retrieval parameters. The aims of our research were therefore to study users' folder structure, personal file navigation, and the relations between them. We asked 296 participants to retrieve 1,131 of their active files and analyzed each of the 5,035 navigation steps in these retrievals. Folder structures were found to be shallow (files were retrieved from mean depth of 2.86 folders), with small folders (a mean of 11.82 files per folder) containing many subfolders (M=10.64). Navigation was largely successful and efficient with participants successfully accessing 94% of their files and taking 14.76 seconds to do this on average. Retrieval time and success depended on folder size and depth. We therefore found the users' decision to avoid both deep structure and large folders to be adaptive. Finally, we used a predictive model to formulate the effect of folder depth and folder size on retrieval time, and suggested an optimization point in this trade-off.
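
    The form of the article's predictive model is not given in the abstract; a plain linear fit over folder depth and folder size is one way to sketch the reported trade-off (the data points below are invented placeholders, not the study's measurements):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        X = np.array([[2, 8], [3, 12], [4, 20], [2, 30], [5, 6]])  # [depth, files per folder]
        y = np.array([9.0, 13.5, 18.2, 16.1, 19.8])                # retrieval time in seconds

        model = LinearRegression().fit(X, y)
        print(model.coef_, model.intercept_)  # marginal cost of extra depth vs. larger folders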
  12. Tavakoli, L.; Zamani, H.; Scholer, F.; Croft, W.B.; Sanderson, M.: Analyzing clarification in asynchronous information-seeking conversations (2022) 0.00
    6.8055023E-4 = product of:
      0.0047638514 = sum of:
        0.0047638514 = product of:
          0.023819257 = sum of:
            0.023819257 = weight(_text_:system in 496) [ClassicSimilarity], result of:
              0.023819257 = score(doc=496,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.20878783 = fieldWeight in 496, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.046875 = fieldNorm(doc=496)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    This research analyzes human-generated clarification questions to provide insights into how they are used to disambiguate and provide a better understanding of information needs. A set of clarification questions is extracted from posts on the Stack Exchange platform. A novel taxonomy is defined for the annotation of the questions and their responses. We investigate the clarification questions in terms of whether they add any information to the post (the initial question posted by the asker) and the accepted answer, which is the answer chosen by the asker. After identifying which clarification questions are more useful, we investigate their characteristics in terms of types and patterns. Non-useful clarification questions are identified, and their patterns are compared with useful clarifications. Our analysis indicates that the most useful clarification questions have similar patterns, regardless of topic. This research contributes to an understanding of clarification in conversations and can provide insight for clarification dialogues in conversational search scenarios and for the possible system generation of clarification requests in information-seeking conversations.
  13. Yulianti, E.; Huspi, S.; Sanderson, M.: Tweet-biased summarization (2016) 0.00
    5.6712516E-4 = product of:
      0.003969876 = sum of:
        0.003969876 = product of:
          0.01984938 = sum of:
            0.01984938 = weight(_text_:system in 2926) [ClassicSimilarity], result of:
              0.01984938 = score(doc=2926,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.17398985 = fieldWeight in 2926, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2926)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    We examined whether the microblog comments given by people after reading a web document could be exploited to improve the accuracy of a web document summarization system. We examined the effect of social information (i.e., tweets) on the accuracy of the generated summaries by comparing the user preference for TBS (tweet-biased summary) with GS (generic summary). The result of a crowdsourcing-based evaluation shows that the user preference for TBS was significantly higher than for GS. We also took random samples of the documents to assess the summaries in a traditional evaluation using ROUGE, in which, in general, TBS was also shown to be better than GS. We further analyzed the influence of the number of tweets pointing to a web document on summarization accuracy, finding a positive moderate correlation between the number of tweets pointing to a web document and the performance of the generated TBS as measured by user preference. The results show that incorporating social information into the summary generation process can improve the accuracy of the summary. The reasons why people chose one summary over another in the crowdsourcing-based evaluation are also presented in this article.
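
    The ROUGE comparison described above can be reproduced in outline with the open-source rouge-score package; the texts here are placeholders, not material from the study:

        from rouge_score import rouge_scorer

        scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
        reference = "gold-standard summary of the web document"
        candidates = {"TBS": "summary biased toward tweet content",
                      "GS": "generic summary without social signals"}

        for name, text in candidates.items():
            scores = scorer.score(reference, text)  # target first, then prediction
            print(name, {k: round(v.fmeasure, 3) for k, v in scores.items()})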
  14. Ren, Y.; Tomko, M.; Salim, F.D.; Ong, K.; Sanderson, M.: Analyzing Web behavior in indoor retail spaces (2017) 0.00
    5.6712516E-4 = product of:
      0.003969876 = sum of:
        0.003969876 = product of:
          0.01984938 = sum of:
            0.01984938 = weight(_text_:system in 3318) [ClassicSimilarity], result of:
              0.01984938 = score(doc=3318,freq=2.0), product of:
                0.11408355 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.03622214 = queryNorm
                0.17398985 = fieldWeight in 3318, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3318)
          0.2 = coord(1/5)
      0.14285715 = coord(1/7)
    
    Abstract
    We analyze 18 million rows of Wi-Fi access logs collected over a 1-year period from over 120,000 anonymized users at an inner-city shopping mall. The anonymized data set, gathered from an opt-in system, provides users' approximate physical location as well as web browsing and some search history. Such data provide a unique opportunity to analyze the interaction between people's behavior in physical retail spaces and their web behavior, which serves as a proxy for their information needs. We found that (a) there is a weekly periodicity in users' visits to the mall; (b) people tend to visit similar mall locations and web content during their repeated visits to the mall; (c) around 60% of registered Wi-Fi users actively browse the web, and around 10% of them use Wi-Fi for accessing web search engines; (d) people are likely to spend a relatively constant amount of time browsing the web while the duration of their visit may vary; (e) the physical spatial context has a small, but significant, influence on the web content that indoor users browse; and (f) accompanying users tend to access resources from the same web domains.