Search (7 results, page 1 of 1)

  • Filter: author_ss:"Ruthven, I."
  1. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.02
    0.016797554 = product of:
      0.067190215 = sum of:
        0.067190215 = sum of:
          0.03597966 = weight(_text_:design in 950) [ClassicSimilarity], result of:
            0.03597966 = score(doc=950,freq=2.0), product of:
              0.17322445 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.046071928 = queryNorm
              0.20770542 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
          0.031210553 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
            0.031210553 = score(doc=950,freq=2.0), product of:
              0.16133605 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046071928 = queryNorm
              0.19345059 = fieldWeight in 950, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=950)
      0.25 = coord(1/4)
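The breakdown above is Lucene's "explain" output for classic TF-IDF scoring (ClassicSimilarity): each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the sum is scaled by the coord factor (here 1/4, i.e. one of four top-level query clauses matched). As a minimal sketch, the constants below are copied from the explain tree; the helper name `term_score` is my own:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # queryWeight
    field_weight = tf * idf * field_norm  # fieldWeight
    return query_weight * field_weight

QUERY_NORM = 0.046071928   # queryNorm from the explain tree
FIELD_NORM = 0.0390625     # fieldNorm(doc=950)

design = term_score(2.0, 3.7598698, QUERY_NORM, FIELD_NORM)  # weight(_text_:design)
t22    = term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)  # weight(_text_:22)
total  = (design + t22) * 0.25  # coord(1/4)
```

Running this reproduces the displayed values: `design` ≈ 0.03597966, `t22` ≈ 0.031210553, and `total` ≈ 0.016797554, which the results list rounds to 0.02.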
    
    Abstract
    Purpose - With the shift to an information-based society and the de-centralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload". Design/methodology/approach - A concept analysis using Rodgers' approach, based on a corpus of documents published between 2010 and September 2020, was conducted. One surrogate for "information overload", namely "cognitive overload", was identified. The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science and were retrieved from three databases: the Association for Computing Machinery (ACM) Digital Library, SCOPUS and Library and Information Science Abstracts (LISA). Findings - The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals and the information environment. Information overload manifests itself both emotionally and cognitively, and its consequences were both internal and external. These findings allowed the authors to provide a definition of information overload. Originality/value - Through the concept analysis, the authors were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22.4.2023 19:27:56
  2. Elsweiler, D.; Ruthven, I.; Jones, C.: Towards memory supporting personal information management tools (2007) 0.01
    
    Abstract
    In this article, the authors discuss reretrieving personal information objects and relate the task to recovering from lapse(s) in memory. They propose that memory lapses impede users from successfully refinding the information they need. Their hypothesis is that by learning more about memory lapses in noncomputing contexts and about how people cope and recover from these lapses, we can better inform the design of personal information management (PIM) tools and improve the user's ability to reaccess and reuse objects. They describe a diary study that investigates the everyday memory problems of 25 people from a wide range of backgrounds. Based on the findings, they present a series of principles that they hypothesize will improve the design of PIM tools. This hypothesis is validated by an evaluation of a tool for managing personal photographs, which was designed with respect to the authors' findings. The evaluation suggests that users' performance when refinding objects can be improved by building personal information management tools to support characteristics of human memory.
  3. Tombros, A.; Ruthven, I.; Jose, J.M.: How users assess Web pages for information seeking (2005) 0.01
    
    Abstract
    In this article, we investigate the criteria used by online searchers when assessing the relevance of Web pages for information-seeking tasks. Twenty-four participants were given three tasks each, and they indicated the features of Web pages that they used when deciding about the usefulness of the pages in relation to the tasks. These tasks were presented within the context of a simulated work-task situation. We investigated the relative utility of features identified by participants (Web page content, structure, and quality) and how the importance of these features is affected by the type of information-seeking task performed and the stage of the search. The results of this study provide a set of criteria used by searchers to decide about the utility of Web pages for different types of tasks. Such criteria can have implications for the design of systems that use or recommend Web pages.
  4. Ruthven, I.; Baillie, M.; Elsweiler, D.: ¬The relative effects of knowledge, interest and confidence in assessing relevance (2007) 0.00
    
    Abstract
    Purpose - The purpose of this paper is to examine how different aspects of an assessor's context, in particular their knowledge of a search topic, their interest in the search topic and their confidence in assessing relevance for a topic, affect the relevance judgements made and the assessor's ability to predict which documents they will assess as being relevant. Design/methodology/approach - The study was conducted as part of the Text REtrieval Conference (TREC) HARD track. Using a specially constructed questionnaire, information was sought on TREC assessors' personal context and, using the TREC assessments gathered, the responses were correlated with the questionnaire questions and the final relevance decisions. Findings - This study found that each of the three factors (interest, knowledge and confidence) had an effect on how many documents were assessed as relevant and on the balance between how many documents were marked as marginally or highly relevant. These factors were also shown to affect an assessor's ability to predict what information they will finally mark as being relevant. Research limitations/implications - The major limitation is that the research was conducted within the TREC initiative. This means that we can report on results but cannot report on discussions with the assessors. The research implications are numerous but concern mainly the effect of personal context on the outcomes of a user study. Practical implications - One major consequence is that we should take more account of how we construct search tasks for IIR evaluation, to create tasks that are interesting and relevant to experimental subjects. Originality/value - The paper examines different search variables within one study to compare the relative effects of these variables on the search outcomes.
  5. Balatsoukas, P.; Ruthven, I.: ¬An eye-tracking approach to the analysis of relevance judgments on the Web : the case of Google search engine (2012) 0.00
    
    Abstract
    Eye movement data can provide an in-depth view of human reasoning and the decision-making process, and modern information retrieval (IR) research can benefit from the analysis of this type of data. The aim of this research was to examine the relationship between relevance criteria use and visual behavior in the context of predictive relevance judgments. To address this objective, a multimethod research design was employed that involved observation of participants' eye movements, talk-aloud protocols, and postsearch interviews. Specifically, the results reported in this article came from the analysis of 281 predictive relevance judgments made by 24 participants using the Google search engine. We present a novel stepwise methodological framework for the analysis of relevance judgments and eye movements on the Web and show new patterns of relevance criteria use during predictive relevance judgment. For example, the findings showed an effect of ranking order and surrogate components (Title, Summary, and URL) on the use of relevance criteria. Also, differences were observed in the cognitive effort spent between very relevant and not relevant judgments. We conclude with the implications of this study for IR research.
  6. Ruthven, I.: Relevance behaviour in TREC (2014) 0.00
    
    Abstract
    Purpose - The purpose of this paper is to examine how various types of TREC data can be used to better understand relevance and serve as a test-bed for exploring relevance. The author proposes that there are many interesting studies that can be performed on the TREC data collections that are not directly related to evaluating systems but to learning more about human judgements of information and relevance, and that these studies can provide useful research questions for other types of investigation. Design/methodology/approach - Through several case studies the author shows how existing data from TREC can be used to learn more about the factors that may affect relevance judgements and interactive search decisions, and to answer new research questions for exploring relevance. Findings - The paper uncovers factors, such as familiarity, interest and strictness of relevance criteria, that affect the nature of relevance assessments within TREC, contrasting these against findings from user studies of relevance. Research limitations/implications - The research only considers certain uses of TREC data and assessments given by professional relevance assessors, but motivates further exploration of the TREC data so that the research community can further exploit the effort involved in the construction of TREC test collections. Originality/value - The paper presents an original viewpoint on relevance investigations and TREC itself by motivating TREC as a source of inspiration for understanding relevance rather than purely as a source of evaluation material.
  7. Ruthven, I.: Integrating approaches to relevance (2005) 0.00
    
    Abstract
    Relevance is the distinguishing feature of IR research. It is the intricacy of relevance, and its basis in human decision-making, which defines and shapes our research field. Relevance as a concept cuts across the spectrum of information seeking and IR research, from investigations into information-seeking behaviours to theoretical models of IR. Given their mutual dependence on relevance, we might predict a strong relationship between information seeking and retrieval in how they regard and discuss the role of relevance within our research programmes. However, too often, information seeking and IR have continued as independent research traditions: IR research ignoring the extensive, user-based frameworks developed by information seeking, and information seeking underestimating the influence of IR systems and interfaces within the information-seeking process. When these two disciplines come together we often find the strongest research: research that is motivated by an understanding of what cognitive processes require support during information seeking, and an understanding of how this support might be provided by an IR system. The aim of this chapter is to investigate this common ground of research, in particular to examine the central notion of relevance that underpins both information seeking and IR research. It seeks to investigate how our understanding of relevance as a process of human decision-making can, and might, influence our design of interactive IR systems. It does not cover every area of IR research, or each area in the same depth; rather, we try to single out the areas where the nature of relevance, and its implications, is driving the research agenda. We start by providing a brief introduction to how relevance has been treated so far in the literature and then consider the key areas where issues of relevance are of current concern. Specifically, the chapter discusses the difficulties of making and interpreting relevance assessments, the role and meaning of differentiated relevance assessments, the specific role of time within information seeking, and the large, complex issue of relevance within evaluations of IR systems. In each area we try to establish where the two fields of IR and information seeking are establishing fruitful collaborations, where there is a gap for prospective collaboration, and what difficulties there may be in establishing mutual aims.