Search (9 results, page 1 of 1)

  • author_ss:"Ruthven, I."
  1. Tombros, A.; Ruthven, I.; Jose, J.M.: How users assess Web pages for information seeking (2005) 0.01
    0.005051911 = product of:
      0.030311465 = sum of:
        0.030311465 = product of:
          0.06062293 = sum of:
            0.06062293 = weight(_text_:web in 5255) [ClassicSimilarity], result of:
              0.06062293 = score(doc=5255,freq=12.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.5299281 = fieldWeight in 5255, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5255)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
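
    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a minimal sketch, assuming the stock ClassicSimilarity formulas (the helper names are ours, not part of this search service), the leaf weight and the final document score can be reproduced from the factors shown:

      import math

      # Lucene ClassicSimilarity building blocks, matching the explain tree.
      def tf(freq):
          return math.sqrt(freq)                            # 3.4641016 for freq=12

      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.2635105

      def explain_score(freq, doc_freq, max_docs, query_norm, field_norm, coords):
          """Recompute one term leaf, then apply the coord factors above it."""
          idf_val = idf(doc_freq, max_docs)
          query_weight = idf_val * query_norm               # 0.11439841
          field_weight = tf(freq) * idf_val * field_norm    # 0.5299281
          score = query_weight * field_weight               # 0.06062293
          for c in coords:                                  # coord(1/2), coord(1/6)
              score *= c
          return score

      # weight(_text_:web in 5255) from result 1:
      print(explain_score(freq=12.0, doc_freq=4597, max_docs=44218,
                          query_norm=0.03505379, field_norm=0.046875,
                          coords=(1 / 2, 1 / 6)))           # ~0.005051911

    The same recipe covers every entry on this page; only freq, field_norm, and the coord fractions (1/2 or 1/3 at the inner level) vary.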
    
    Abstract
    In this article, we investigate the criteria used by online searchers when assessing the relevance of Web pages for information-seeking tasks. Twenty-four participants were given three tasks each, and they indicated the features of Web pages that they used when deciding about the usefulness of the pages in relation to the tasks. These tasks were presented within the context of a simulated work-task situation. We investigated the relative utility of features identified by participants (Web page content, structure, and quality) and how the importance of these features is affected by the type of information-seeking task performed and the stage of the search. The results of this study provide a set of criteria used by searchers to decide about the utility of Web pages for different types of tasks. Such criteria can have implications for the design of systems that use or recommend Web pages.
  2. White, R.W.; Jose, J.M.; Ruthven, I.: A task-oriented study on the influencing effects of query-biased summarisation in web searching (2003) 0.00
    0.003843119 = product of:
      0.023058712 = sum of:
        0.023058712 = product of:
          0.046117425 = sum of:
            0.046117425 = weight(_text_:web in 1081) [ClassicSimilarity], result of:
              0.046117425 = score(doc=1081,freq=10.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.40312994 = fieldWeight in 1081, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1081)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    The aim of the work described in this paper is to evaluate the influencing effects of query-biased summaries in web searching. For this purpose, a summarisation system has been developed, and a summary tailored to the user's query is generated automatically for each document retrieved. The system aims to provide a better means of assessing document relevance than the titles or abstracts typical of many web search result lists. By visiting each result page at retrieval time, the system gives the user an idea of the current page content and thus deals with the dynamic nature of the web. To examine the effectiveness of this approach, a task-oriented, comparative evaluation between four different web retrieval systems was performed: two that use query-biased summarisation, and two that use the standard ranked titles/abstracts approach. The results from the evaluation indicate that query-biased summarisation techniques appear to be more useful and effective in helping users gauge document relevance than the traditional ranked titles/abstracts approach. The same methodology was used to compare the effectiveness of two of the web's major search engines: AltaVista and Google.
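
    As a rough illustration of the general technique evaluated above (our own simplification, not the authors' summariser, which combines several sentence features), a query-biased summary can be built by scoring each sentence of a fetched result page by its overlap with the query terms:

      import re

      def query_biased_summary(page_text, query, max_sentences=4):
          """Pick the sentences of a fetched page that best match the query."""
          terms = set(query.lower().split())
          # Naive splitter; a real system would use proper sentence segmentation.
          sentences = re.split(r'(?<=[.!?])\s+', page_text)
          scored = []
          for pos, sent in enumerate(sentences):
              words = set(re.findall(r'\w+', sent.lower()))
              if terms & words:
                  scored.append((len(terms & words), pos, sent))
          # Keep the highest-overlap sentences, presented in document order.
          best = sorted(scored, key=lambda t: -t[0])[:max_sentences]
          return [sent for _, _, sent in sorted(best, key=lambda t: t[1])]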
  3. Balatsoukas, P.; Ruthven, I.: An eye-tracking approach to the analysis of relevance judgments on the Web : the case of Google search engine (2012) 0.00
    0.0024306017 = product of:
      0.01458361 = sum of:
        0.01458361 = product of:
          0.02916722 = sum of:
            0.02916722 = weight(_text_:web in 379) [ClassicSimilarity], result of:
              0.02916722 = score(doc=379,freq=4.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.25496176 = fieldWeight in 379, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=379)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Eye movement data can provide an in-depth view of human reasoning and the decision-making process, and modern information retrieval (IR) research can benefit from the analysis of this type of data. The aim of this research was to examine the relationship between relevance criteria use and visual behavior in the context of predictive relevance judgments. To address this objective, a multimethod research design was employed that involved observation of participants' eye movements, talk-aloud protocols, and postsearch interviews. Specifically, the results reported in this article came from the analysis of 281 predictive relevance judgments made by 24 participants using the Google search engine. We present a novel stepwise methodological framework for the analysis of relevance judgments and eye movements on the Web and show new patterns of relevance criteria use during predictive relevance judgment. For example, the findings showed an effect of ranking order and surrogate components (Title, Summary, and URL) on the use of relevance criteria. Also, differences were observed in the cognitive effort spent between very relevant and not relevant judgments. We conclude with the implications of this study for IR research.
  4. Ruthven, I.; Baillie, M.; Azzopardi, L.; Bierig, R.; Nicol, E.; Sweeney, S.; Yakici, M.: Contextual factors affecting the utility of surrogates within exploratory search (2008) 0.00
    0.0018637171 = product of:
      0.011182303 = sum of:
        0.011182303 = product of:
          0.033546906 = sum of:
            0.033546906 = weight(_text_:29 in 2042) [ClassicSimilarity], result of:
              0.033546906 = score(doc=2042,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.27205724 = fieldWeight in 2042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2042)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 7.2008 12:28:27
  5. White, R.W.; Jose, J.M.; Ruthven, I.: Using top-ranking sentences to facilitate effective information access (2005) 0.00
    0.001718695 = product of:
      0.01031217 = sum of:
        0.01031217 = product of:
          0.02062434 = sum of:
            0.02062434 = weight(_text_:web in 3881) [ClassicSimilarity], result of:
              0.02062434 = score(doc=3881,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.18028519 = fieldWeight in 3881, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3881)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Web searchers typically fail to view search results beyond the first page or to fully examine those results presented to them. In this article we describe an approach that encourages a deeper examination of the contents of the document set retrieved in response to a searcher's query. The approach shifts the focus of perusal and interaction away from potentially uninformative document surrogates (such as titles, sentence fragments, and URLs) to actual document content, and uses this content to drive the information seeking process. Current search interfaces assume searchers examine results document-by-document. In contrast, our approach extracts, ranks, and presents the contents of the top-ranked document set. We use query-relevant top-ranking sentences extracted from the top documents at retrieval time as fine-grained representations of top-ranked document content and, when combined in a ranked list, as an overview of these documents. The interaction of the searcher provides implicit evidence that is used to reorder the sentences where appropriate. We evaluate our approach in three separate user studies, each applying these sentences in a different way. The findings of these studies show that top-ranking sentences can facilitate effective information access.
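
    A minimal sketch of this idea under our own simplifying assumptions (the overlap scoring and the interest boost are illustrative placeholders, not the paper's actual models): pool query-matching sentences from the top documents into one ranked list, then promote sentences from documents the searcher has shown interest in.

      import re

      def top_ranking_sentences(docs, query, k=10):
          """docs: list of (doc_id, text) pairs for the top-ranked documents."""
          terms = set(query.lower().split())
          pool = []
          for doc_id, text in docs:
              for sent in re.split(r'(?<=[.!?])\s+', text):
                  score = len(terms & set(re.findall(r'\w+', sent.lower())))
                  if score:
                      pool.append({"doc": doc_id, "sentence": sent, "score": score})
          return sorted(pool, key=lambda s: -s["score"])[:k]

      def reorder_on_interest(pool, viewed_doc_ids, boost=1.5):
          """Implicit feedback: promote sentences from documents the searcher viewed."""
          for s in pool:
              if s["doc"] in viewed_doc_ids:
                  s["score"] *= boost
          return sorted(pool, key=lambda s: -s["score"])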
  6. Ruthven, I.; Buchanan, S.; Jardine, C.: Relationships, environment, health and development : the information needs expressed online by young first-time mothers (2018) 0.00
    0.0015974719 = product of:
      0.009584831 = sum of:
        0.009584831 = product of:
          0.028754493 = sum of:
            0.028754493 = weight(_text_:29 in 4369) [ClassicSimilarity], result of:
              0.028754493 = score(doc=4369,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23319192 = fieldWeight in 4369, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4369)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 7.2018 9:47:05
  7. Ruthven, I.; Buchanan, S.; Jardine, C.: Isolated, overwhelmed, and worried : young first-time mothers asking for information and support online (2018) 0.00
    0.0015974719 = product of:
      0.009584831 = sum of:
        0.009584831 = product of:
          0.028754493 = sum of:
            0.028754493 = weight(_text_:29 in 4455) [ClassicSimilarity], result of:
              0.028754493 = score(doc=4455,freq=2.0), product of:
                0.12330827 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03505379 = queryNorm
                0.23319192 = fieldWeight in 4455, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4455)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 9.2018 11:25:14
  8. Borlund, P.; Ruthven, I.: Introduction to the special issue on evaluating interactive information retrieval systems (2008) 0.00
    0.001374956 = product of:
      0.008249735 = sum of:
        0.008249735 = product of:
          0.01649947 = sum of:
            0.01649947 = weight(_text_:web in 2019) [ClassicSimilarity], result of:
              0.01649947 = score(doc=2019,freq=2.0), product of:
                0.11439841 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03505379 = queryNorm
                0.14422815 = fieldWeight in 2019, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2019)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study. In spite of the complicated nature of user-centred evaluations, this form of evaluation is necessary to understand the effectiveness of individual IR systems and user search interactions. The growing incorporation of users into the evaluation process reflects the changing nature of IR within society; for example, more and more people have access to IR systems through Internet search engines but have little training or guidance in how to use these systems effectively. Similarly, new types of search system and new interactive IR facilities are becoming available to wide groups of end-users. In this special topic issue we present papers that tackle the methodological issues of evaluating interactive search systems. Methodologies can be presented at different levels; the papers by Blandford et al. and Petrelli present whole methodological approaches for evaluating interactive systems, whereas those by Göker and Myrhaug and by López Ostenero et al. consider what makes an appropriate evaluation methodological approach for specific retrieval situations. Any methodology must consider the nature of the methodological components: the instruments and processes by which we evaluate our systems. A number of papers have examined these issues in detail: Käki and Aula focus on specific methodological issues for the evaluation of Web search interfaces, Lopatovska and Mokros present alternate measures of retrieval success, Tenopir et al. examine the affective and cognitive verbalisations that occur within user studies, and Kelly et al. analyse questionnaires, one of the basic tools for evaluations. The range of topics in this special issue as a whole nicely illustrates the variety and complexity by which user-centred evaluation of IR systems is undertaken.
  9. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.00
    0.0013192514 = product of:
      0.007915508 = sum of:
        0.007915508 = product of:
          0.023746524 = sum of:
            0.023746524 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
              0.023746524 = score(doc=950,freq=2.0), product of:
                0.1227524 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03505379 = queryNorm
                0.19345059 = fieldWeight in 950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=950)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    22. 4.2023 19:27:56