Search (12 results, page 1 of 1)

  • author_ss:"Ruthven, I."
  1. Ruthven, I.: Relevance behaviour in TREC (2014) 0.02
    0.022998964 = product of:
      0.08049637 = sum of:
        0.04413724 = weight(_text_:case in 1785) [ClassicSimilarity], result of:
          0.04413724 = score(doc=1785,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.24286987 = fieldWeight in 1785, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1785)
        0.03635913 = product of:
          0.07271826 = sum of:
            0.07271826 = weight(_text_:studies in 1785) [ClassicSimilarity], result of:
              0.07271826 = score(doc=1785,freq=8.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.44086722 = fieldWeight in 1785, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1785)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
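The tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output, and the reported score can be reproduced directly from its listed components: tf = sqrt(termFreq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and coord factors scaling for partially matched query clauses. A minimal sketch (the helper name `term_weight` is illustrative, not part of Lucene's API):

```python
import math

# Constants read directly from the explain tree for result 1 (doc 1785).
QUERY_NORM = 0.041336425
FIELD_NORM = 0.0390625

def term_weight(freq, idf, coord=1.0):
    """ClassicSimilarity per-term weight: queryWeight * fieldWeight * coord."""
    tf = math.sqrt(freq)                    # 1.4142135 for freq=2.0
    field_weight = tf * idf * FIELD_NORM    # tf(freq) * idf * fieldNorm
    query_weight = idf * QUERY_NORM         # idf * queryNorm
    return query_weight * field_weight * coord

w_case = term_weight(freq=2.0, idf=4.3964143)               # ~ 0.04413724
w_studies = term_weight(freq=8.0, idf=3.9902744, coord=0.5)  # ~ 0.03635913

# Top-level coord(2/7): 2 of 7 query clauses matched this document.
score = (2 / 7) * (w_case + w_studies)
print(score)  # ~ 0.022998964
```

Running this recovers the explained score of 0.022998964 to within floating-point rounding; the other results on this page follow the same arithmetic with their own freq, idf, fieldNorm, and coord values.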
    
    Abstract
    Purpose - The purpose of this paper is to examine how various types of TREC data can be used to better understand relevance and serve as a test-bed for exploring relevance. The author proposes that there are many interesting studies that can be performed on the TREC data collections that are not directly related to evaluating systems but to learning more about human judgements of information and relevance, and that these studies can provide useful research questions for other types of investigation.
    Design/methodology/approach - Through several case studies the author shows how existing data from TREC can be used to learn more about the factors that may affect relevance judgements and interactive search decisions, and to answer new research questions for exploring relevance.
    Findings - The paper uncovers factors, such as familiarity, interest and strictness of relevance criteria, that affect the nature of relevance assessments within TREC, contrasting these against findings from user studies of relevance.
    Research limitations/implications - The research only considers certain uses of TREC data and assessments given by professional relevance assessors, but motivates further exploration of the TREC data so that the research community can further exploit the effort involved in the construction of TREC test collections.
    Originality/value - The paper presents an original viewpoint on relevance investigations and on TREC itself by motivating TREC as a source of inspiration for understanding relevance rather than purely as a source of evaluation material.
  2. Borlund, P.; Ruthven, I.: Introduction to the special issue on evaluating interactive information retrieval systems (2008) 0.01
    0.010085231 = product of:
      0.035298306 = sum of:
        0.020754656 = weight(_text_:management in 2019) [ClassicSimilarity], result of:
          0.020754656 = score(doc=2019,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.14896142 = fieldWeight in 2019, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.03125 = fieldNorm(doc=2019)
        0.014543652 = product of:
          0.029087303 = sum of:
            0.029087303 = weight(_text_:studies in 2019) [ClassicSimilarity], result of:
              0.029087303 = score(doc=2019,freq=2.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.17634688 = fieldWeight in 2019, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2019)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study. In spite of the complicated nature of user-centred evaluations, this form of evaluation is necessary to understand the effectiveness of individual IR systems and user search interactions. The growing incorporation of users into the evaluation process reflects the changing nature of IR within society; for example, more and more people have access to IR systems through Internet search engines but have little training or guidance in how to use these systems effectively. Similarly, new types of search system and new interactive IR facilities are becoming available to wide groups of end-users. In this special topic issue we present papers that tackle the methodological issues of evaluating interactive search systems. Methodologies can be presented at different levels; the papers by Blandford et al. and Petrelli present whole methodological approaches for evaluating interactive systems, whereas those by Göker and Myrhaug and López Ostenero et al. consider what makes an appropriate evaluation methodological approach for specific retrieval situations.
    Any methodology must consider the nature of the methodological components, the instruments and processes by which we evaluate our systems. A number of papers have examined these issues in detail: Käki and Aula focus on specific methodological issues for the evaluation of Web search interfaces, Lopatovska and Mokros present alternate measures of retrieval success, Tenopir et al. examine the affective and cognitive verbalisations that occur within user studies, and Kelly et al. analyse questionnaires, one of the basic tools for evaluations. The range of topics in this special issue as a whole nicely illustrates the variety and complexity by which user-centred evaluation of IR systems is undertaken.
    Source
    Information processing and management. 44(2008) no.1, S.1-3
  3. Elsweiler, D.; Ruthven, I.; Jones, C.: Towards memory supporting personal information management tools (2007) 0.01
    0.007703169 = product of:
      0.05392218 = sum of:
        0.05392218 = weight(_text_:management in 5057) [ClassicSimilarity], result of:
          0.05392218 = score(doc=5057,freq=6.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.38701317 = fieldWeight in 5057, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=5057)
      0.14285715 = coord(1/7)
    
    Abstract
    In this article, the authors discuss re-retrieving personal information objects and relate the task to recovering from lapses in memory. They propose that memory lapses impede users from successfully re-finding the information they need. Their hypothesis is that by learning more about memory lapses in noncomputing contexts and about how people cope with and recover from these lapses, we can better inform the design of personal information management (PIM) tools and improve the user's ability to re-access and reuse objects. They describe a diary study that investigates the everyday memory problems of 25 people from a wide range of backgrounds. Based on the findings, they present a series of principles that they hypothesize will improve the design of PIM tools. This hypothesis is validated by an evaluation of a tool for managing personal photographs, which was designed with respect to the authors' findings. The evaluation suggests that users' performance when re-finding objects can be improved by building personal information management tools to support characteristics of human memory.
  4. Balatsoukas, P.; Ruthven, I.: ¬An eye-tracking approach to the analysis of relevance judgments on the Web : the case of Google search engine (2012) 0.01
    0.00630532 = product of:
      0.04413724 = sum of:
        0.04413724 = weight(_text_:case in 379) [ClassicSimilarity], result of:
          0.04413724 = score(doc=379,freq=2.0), product of:
            0.18173204 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.041336425 = queryNorm
            0.24286987 = fieldWeight in 379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=379)
      0.14285715 = coord(1/7)
    
  5. Ruthven, I.; Baillie, M.; Azzopardi, L.; Bierig, R.; Nicol, E.; Sweeney, S.; Yaciki, M.: Contextual factors affecting the utility of surrogates within exploratory search (2008) 0.01
    0.005188664 = product of:
      0.036320645 = sum of:
        0.036320645 = weight(_text_:management in 2042) [ClassicSimilarity], result of:
          0.036320645 = score(doc=2042,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.2606825 = fieldWeight in 2042, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2042)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 44(2008) no.2, S.437-462
  6. White, R.W.; Jose, J.M.; Ruthven, I.: ¬An implicit feedback approach for interactive information retrieval (2006) 0.00
    0.004447426 = product of:
      0.031131983 = sum of:
        0.031131983 = weight(_text_:management in 964) [ClassicSimilarity], result of:
          0.031131983 = score(doc=964,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.22344214 = fieldWeight in 964, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=964)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 42(2006) no.1, S.166-190
  7. Baillie, M.; Azzopardi, L.; Ruthven, I.: Evaluating epistemic uncertainty under incomplete assessments (2008) 0.00
    0.004447426 = product of:
      0.031131983 = sum of:
        0.031131983 = weight(_text_:management in 2065) [ClassicSimilarity], result of:
          0.031131983 = score(doc=2065,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.22344214 = fieldWeight in 2065, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.046875 = fieldNorm(doc=2065)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 44(2008) no.2, S.811-837
  8. Ruthven, I.; Buchanan, S.; Jardine, C.: Relationships, environment, health and development : the information needs expressed online by young first-time mothers (2018) 0.00
    0.0044073923 = product of:
      0.030851744 = sum of:
        0.030851744 = product of:
          0.06170349 = sum of:
            0.06170349 = weight(_text_:studies in 4369) [ClassicSimilarity], result of:
              0.06170349 = score(doc=4369,freq=4.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.37408823 = fieldWeight in 4369, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4369)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    This study investigates the information needs of young first-time mothers through a qualitative content analysis of 266 selected posts to a major online discussion group. Our analysis reveals three main categories of need: needs around how to create a positive environment for a child, needs around a mother's relationships and well-being, and needs around child development and health. We demonstrate the similarities of this scheme to needs uncovered in other studies and how our classification of needs is more comprehensive than those in previous studies. A critical distinction in our results is between two types of need presentation, distinguishing between situational and informational needs. Situational needs are narrative descriptions of problematic situations, whereas informational needs are need statements with a clear request. Distinguishing between these two types of needs sheds new light on how information needs develop. We conclude with a discussion of the implications of our results for young mothers and information providers.
  9. White, R.W.; Jose, J.M.; Ruthven, I.: ¬A task-oriented study on the influencing effects of query-biased summarisation in web searching (2003) 0.00
    0.0037061884 = product of:
      0.025943318 = sum of:
        0.025943318 = weight(_text_:management in 1081) [ClassicSimilarity], result of:
          0.025943318 = score(doc=1081,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.18620178 = fieldWeight in 1081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 39(2003) no.5, S.689-706
  10. Ruthven, I.: ¬The language of information need : differentiating conscious and formalized information needs (2019) 0.00
    0.0037061884 = product of:
      0.025943318 = sum of:
        0.025943318 = weight(_text_:management in 5035) [ClassicSimilarity], result of:
          0.025943318 = score(doc=5035,freq=2.0), product of:
            0.13932906 = queryWeight, product of:
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.041336425 = queryNorm
            0.18620178 = fieldWeight in 5035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.3706124 = idf(docFreq=4130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5035)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 56(2019) no.1, S.77-90
  11. White, R.W.; Jose, J.M.; Ruthven, I.: Using top-ranking sentences to facilitate effective information access (2005) 0.00
    0.003672827 = product of:
      0.025709787 = sum of:
        0.025709787 = product of:
          0.051419575 = sum of:
            0.051419575 = weight(_text_:studies in 3881) [ClassicSimilarity], result of:
              0.051419575 = score(doc=3881,freq=4.0), product of:
                0.16494368 = queryWeight, product of:
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.041336425 = queryNorm
                0.3117402 = fieldWeight in 3881, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9902744 = idf(docFreq=2222, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3881)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Web searchers typically fail to view search results beyond the first page or to fully examine those results presented to them. In this article we describe an approach that encourages a deeper examination of the contents of the document set retrieved in response to a searcher's query. The approach shifts the focus of perusal and interaction away from potentially uninformative document surrogates (such as titles, sentence fragments, and URLs) to actual document content, and uses this content to drive the information-seeking process. Current search interfaces assume searchers examine results document-by-document. In contrast, our approach extracts, ranks, and presents the contents of the top-ranked document set. We use query-relevant top-ranking sentences extracted from the top documents at retrieval time as fine-grained representations of top-ranked document content and, when combined in a ranked list, as an overview of these documents. The interaction of the searcher provides implicit evidence that is used to reorder the sentences where appropriate. We evaluate our approach in three separate user studies, each applying these sentences in a different way. The findings of these studies show that top-ranking sentences can facilitate effective information access.
  12. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.00
    0.002000184 = product of:
      0.0140012875 = sum of:
        0.0140012875 = product of:
          0.028002575 = sum of:
            0.028002575 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
              0.028002575 = score(doc=950,freq=2.0), product of:
                0.14475311 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041336425 = queryNorm
                0.19345059 = fieldWeight in 950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=950)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 4.2023 19:27:56