Search (10 results, page 1 of 1)

  • author_ss:"Ruthven, I."
  1. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.04
    0.035437185 = product of:
      0.053155776 = sum of:
        0.035974823 = weight(_text_:based in 950) [ClassicSimilarity], result of:
          0.035974823 = score(doc=950,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23539014 = fieldWeight in 950, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.017180953 = product of:
          0.034361906 = sum of:
            0.034361906 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
              0.034361906 = score(doc=950,freq=2.0), product of:
                0.17762627 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050723847 = queryNorm
                0.19345059 = fieldWeight in 950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=950)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: With the shift to an information-based society and to the de-centralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload".
    Design/methodology/approach: A concept analysis using Rodgers' approach, based on a corpus of documents published between 2010 and September 2020, was conducted. One surrogate for "information overload", namely "cognitive overload", was identified. The corpus consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science, and were retrieved from three databases: the Association for Computing Machinery (ACM) Digital Library, SCOPUS and Library and Information Science Abstracts (LISA).
    Findings: The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals and the information environment. In terms of manifestations, they found that information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed the authors to provide a definition of information overload.
    Originality/value: Through their concept analysis, the authors were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22.04.2023 19:27:56
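  Note on the relevance scores: each "product of / sum of / weight" breakdown in this listing is Lucene's ClassicSimilarity (TF-IDF) explain output. The sketch below reproduces the arithmetic for record 1 from the factors printed in its breakdown; it is an illustration of the scoring formula, not the search engine's own code, and the helper names are invented.

    import math

    # Sketch: reproduce the ClassicSimilarity breakdown shown for record 1.
    # All numeric inputs (docFreq, maxDocs, queryNorm, fieldNorm, freq) are
    # taken directly from the explain tree above.
    def idf(doc_freq, max_docs):
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                       # tf(freq) = sqrt(freq)
        term_idf = idf(doc_freq, max_docs)         # e.g. 3.0129938 for docFreq=5906
        query_weight = term_idf * query_norm       # 0.15283063
        field_weight = tf * term_idf * field_norm  # 0.23539014
        return query_weight * field_weight         # 0.035974823

    score_based = term_score(freq=4.0, doc_freq=5906, max_docs=44218,
                             query_norm=0.050723847, field_norm=0.0390625)
    score_22 = 0.5 * term_score(freq=2.0, doc_freq=3622, max_docs=44218,
                                query_norm=0.050723847, field_norm=0.0390625)  # coord(1/2)
    total = (score_based + score_22) * (2.0 / 3.0)                             # coord(2/3)
    print(total)  # ~0.0354, matching the 0.04 shown (rounded) for record 1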
  2. Borlund, P.; Ruthven, I.: Introduction to the special issue on evaluating interactive information retrieval systems (2008) 0.03
    0.029866494 = product of:
      0.04479974 = sum of:
        0.020350434 = weight(_text_:based in 2019) [ClassicSimilarity], result of:
          0.020350434 = score(doc=2019,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.13315678 = fieldWeight in 2019, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03125 = fieldNorm(doc=2019)
        0.024449307 = product of:
          0.048898615 = sum of:
            0.048898615 = weight(_text_:training in 2019) [ClassicSimilarity], result of:
              0.048898615 = score(doc=2019,freq=2.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.20640713 = fieldWeight in 2019, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2019)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study. In spite of the complicated nature of user-centred evaluations, this form of evaluation is necessary to understand the effectiveness of individual IR systems and user search interactions. The growing incorporation of users into the evaluation process reflects the changing nature of IR within society; for example, more and more people have access to IR systems through Internet search engines but have little training or guidance in how to use these systems effectively. Similarly, new types of search system and new interactive IR facilities are becoming available to wide groups of end-users. In this special topic issue we present papers that tackle the methodological issues of evaluating interactive search systems. Methodologies can be presented at different levels; the papers by Blandford et al. and Petrelli present whole methodological approaches for evaluating interactive systems, whereas those by Göker and Myrhaug and by López Ostenero et al. consider what makes an appropriate evaluation methodology for specific retrieval situations. Any methodology must consider the nature of the methodological components, the instruments and processes by which we evaluate our systems. A number of papers have examined these issues in detail: Käki and Aula focus on specific methodological issues for the evaluation of Web search interfaces, Lopatovska and Mokros present alternate measures of retrieval success, Tenopir et al. examine the affective and cognitive verbalisations that occur within user studies and Kelly et al. analyse questionnaires, one of the basic tools for evaluations. The range of topics in this special issue as a whole nicely illustrates the variety and complexity with which user-centred evaluation of IR systems is undertaken.
  3. Lalmas, M.; Ruthven, I.: Representing and retrieving structured documents using the Dempster-Shafer theory of evidence : modelling and evaluation (1998) 0.01
    0.011871087 = product of:
      0.03561326 = sum of:
        0.03561326 = weight(_text_:based in 1076) [ClassicSimilarity], result of:
          0.03561326 = score(doc=1076,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23302436 = fieldWeight in 1076, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1076)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports on a theoretical model of structured document indexing and retrieval based on the Dempster-Shafer Theory of Evidence. Includes a description of the model of structured document retrieval, the representation of structured documents, the representation of individual components, how components are combined, details of the combination process, and how relevance is captured within the model. Also presents a detailed account of an implementation of the model, and an evaluation scheme designed to test the effectiveness of the model.
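  The model above combines evidence from a document's components using Dempster's rule of combination. A minimal sketch of that rule follows; the frame of discernment and the mass values are invented for illustration and are not taken from the paper.

    # Dempster's rule of combination for two mass functions over a small frame.
    # Mass functions are dicts mapping frozenset hypotheses to masses.
    def combine(m1, m2):
        combined, conflict = {}, 0.0
        for b, mass_b in m1.items():
            for c, mass_c in m2.items():
                inter = b & c
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + mass_b * mass_c
                else:
                    conflict += mass_b * mass_c  # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence cannot be combined")
        return {a: mass / (1.0 - conflict) for a, mass in combined.items()}

    frame = frozenset({"relevant", "nonrelevant"})
    # Hypothetical evidence from two document components (say, title and body).
    title_evidence = {frozenset({"relevant"}): 0.6, frame: 0.4}
    body_evidence = {frozenset({"relevant"}): 0.3, frozenset({"nonrelevant"}): 0.2, frame: 0.5}
    print(combine(title_evidence, body_evidence))  # belief concentrates on "relevant"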
  4. Ruthven, I.; Baillie, M.; Azzopardi, L.; Bierig, R.; Nicol, E.; Sweeney, S.; Yaciki, M.: Contextual factors affecting the utility of surrogates within exploratory search (2008) 0.01
    0.011871087 = product of:
      0.03561326 = sum of:
        0.03561326 = weight(_text_:based in 2042) [ClassicSimilarity], result of:
          0.03561326 = score(doc=2042,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23302436 = fieldWeight in 2042, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2042)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper we investigate how information surrogates might be useful in exploratory search and what information it is useful for a surrogate to contain. By comparing assessments based on artificially created information surrogates, we investigate the effect of the source of information, the quality of an information source and the date of information upon the assessment process. We also investigate how varying levels of topical knowledge, assessor confidence and prior expectation affect the assessment of information surrogates. We show that both types of contextual information affect how the information surrogates are judged and what actions are performed as a result of the surrogates.
  5. Ruthven, I.: Resonance and the experience of relevance (2021) 0.01
    0.011871087 = product of:
      0.03561326 = sum of:
        0.03561326 = weight(_text_:based in 211) [ClassicSimilarity], result of:
          0.03561326 = score(doc=211,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23302436 = fieldWeight in 211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=211)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, I propose the concept of resonance as a useful one for describing what it means to experience relevance. Based on an extensive interdisciplinary review, I provide a novel framework that presents resonance as a spectrum of experience with a multitude of outcomes ranging from a sense of harmony and coherence to life transformation. I argue that resonance has different properties to the more traditional interpretation of relevance and provides a better system of explanation of what it means to experience relevance. I show how traditional approaches to relevance and resonance work in a complementary fashion and outline how resonance may present distinct new lines of research into relevance theory.
  6. Elsweiler, D.; Ruthven, I.; Jones, C.: Towards memory supporting personal information management tools (2007) 0.01
    0.010175217 = product of:
      0.03052565 = sum of:
        0.03052565 = weight(_text_:based in 5057) [ClassicSimilarity], result of:
          0.03052565 = score(doc=5057,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.19973516 = fieldWeight in 5057, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=5057)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, the authors discuss reretrieving personal information objects and relate the task to recovering from lapse(s) in memory. They propose that memory lapses impede users from successfully refinding the information they need. Their hypothesis is that by learning more about memory lapses in noncomputing contexts and about how people cope and recover from these lapses, we can better inform the design of personal information management (PIM) tools and improve the user's ability to reaccess and reuse objects. They describe a diary study that investigates the everyday memory problems of 25 people from a wide range of backgrounds. Based on the findings, they present a series of principles that they hypothesize will improve the design of PIM tools. This hypothesis is validated by an evaluation of a tool for managing personal photographs, which was designed with respect to the authors' findings. The evaluation suggests that users' performance when refinding objects can be improved by building personal information management tools to support characteristics of human memory.
  7. Baillie, M.; Azzopardi, L.; Ruthven, I.: Evaluating epistemic uncertainty under incomplete assessments (2008) 0.01
    0.010175217 = product of:
      0.03052565 = sum of:
        0.03052565 = weight(_text_:based in 2065) [ClassicSimilarity], result of:
          0.03052565 = score(doc=2065,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.19973516 = fieldWeight in 2065, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2065)
      0.33333334 = coord(1/3)
    
    Abstract
    This study proposes an extended methodology for laboratory-based Information Retrieval evaluation under incomplete relevance assessments. This new methodology aims to identify potential uncertainty during system comparison that may result from incompleteness. The adoption of this methodology is advantageous, because the detection of epistemic uncertainty - the amount of knowledge (or ignorance) we have about the estimate of a system's performance - during the evaluation process can guide and direct researchers when evaluating new systems over existing and future test collections. Across a series of experiments we demonstrate how this methodology can lead towards a finer-grained analysis of systems. In particular, we show through experimentation how the current practice in Information Retrieval evaluation of using a measurement depth larger than the pooling depth increases uncertainty during system comparison.
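  One way to picture the epistemic uncertainty described above: when the measurement depth exceeds the pooling depth, some retrieved documents are unjudged, so a system's score can only be bounded. The sketch below is an illustration of that idea rather than the paper's methodology; it brackets precision at depth k by treating unjudged documents first as nonrelevant and then as relevant, and the width of the interval is the uncertainty.

    # ranking: list of doc ids in rank order; judgments: {doc_id: bool} for pooled docs.
    def precision_bounds(ranking, judgments, k=10):
        top_k = ranking[:k]
        judged_relevant = sum(1 for d in top_k if judgments.get(d) is True)
        unjudged = sum(1 for d in top_k if d not in judgments)
        lower = judged_relevant / k               # unjudged assumed nonrelevant
        upper = (judged_relevant + unjudged) / k  # unjudged assumed relevant
        return lower, upper, upper - lower        # interval width = uncertainty at depth k

    ranking = ["d3", "d7", "d1", "d9", "d2", "d8", "d5", "d4", "d6", "d0"]
    judgments = {"d3": True, "d1": True, "d9": False, "d2": True, "d5": False}
    print(precision_bounds(ranking, judgments, k=10))  # (0.3, 0.8, 0.5)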
  8. Ruthven, I.; Lalmas, M.; Rijsbergen, K. van: Combining and selecting characteristics of information use (2002) 0.01
    0.009593287 = product of:
      0.028779859 = sum of:
        0.028779859 = weight(_text_:based in 5208) [ClassicSimilarity], result of:
          0.028779859 = score(doc=5208,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.18831211 = fieldWeight in 5208, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03125 = fieldNorm(doc=5208)
      0.33333334 = coord(1/3)
    
    Abstract
    Ruthven, Lalmas and van Rijsbergen use traditional term importance measures such as inverse document frequency, noise (based on in-document frequency) and term frequency, supplemented by two new measures: theme value, calculated from the differences between the expected and actual positions of words in a text, on the assumption that an even distribution indicates a term's association with a main topic; and context, based on a query term's distance from the nearest other query term relative to the average expected distribution of all query terms in the document. They then define document characteristics such as specificity (the sum of all idf values in a document over the total number of terms in the document), document complexity (measured by the document's average idf value) and the information-to-noise ratio, info-noise (tokens after stopping and stemming over tokens before these processes), which measures the ratio of useful to non-useful information in a document. Retrieval tests are then carried out using each characteristic, combinations of the characteristics, and relevance feedback to determine the correct combination of characteristics. A file ranks independently of query terms by both specificity and info-noise, but if the presence of a query term is required, unique rankings are generated. Tested on five standard collections, the traditional characteristics outperformed the new characteristics, which did, however, outperform random retrieval. All possible combinations of characteristics were also tested, both with and without a set of scaling weights applied. All characteristics can benefit from combination with another characteristic or set of characteristics, and performance as a single characteristic is a good indicator of performance in combination. Larger combinations tended to be more effective than smaller ones, and weighting increased precision measures of middle-ranking combinations but decreased the ranking of poorer combinations. The best combinations vary for each collection, and in some collections with the addition of weighting. Finally, with all documents ranked by the all-characteristics combination, they take the top 30 documents and calculate the characteristic scores for each term in both the relevant and the non-relevant sets. Then, taking for each query term the characteristics whose average was higher for relevant than for non-relevant documents, the documents are re-ranked. The relevance feedback method of selecting characteristics can select a good set of characteristics for query terms.
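  Two of the document characteristics summarised above are easy to state concretely: specificity (the sum of idf values of a document's terms over the number of terms) and the information-to-noise ratio (tokens remaining after stopping and stemming over tokens before). The sketch below illustrates both on a toy corpus; the stoplist and the crude suffix-stripping are stand-ins for real stopping and stemming, not the paper's implementation.

    import math
    import re

    STOPWORDS = {"the", "of", "a", "an", "and", "to", "in", "is"}

    def idf(term, corpus):
        df = sum(1 for doc in corpus if term in doc)
        return math.log(len(corpus) / df) if df else 0.0

    def characteristics(text, corpus):
        tokens = re.findall(r"[a-z]+", text.lower())
        kept = [t.rstrip("s") for t in tokens if t not in STOPWORDS]  # crude stop + stem
        specificity = sum(idf(t, corpus) for t in kept) / len(kept)   # mean idf of kept terms
        info_noise = len(kept) / len(tokens)                          # useful / total tokens
        return specificity, info_noise

    corpus = [{"structured", "document", "retrieval"},
              {"relevance", "feedback", "retrieval"},
              {"term", "weighting", "document"}]
    print(characteristics("Relevance feedback in structured document retrieval", corpus))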
  9. Ruthven, I.: ¬The language of information need : differentiating conscious and formalized information needs (2019) 0.01
    0.008479347 = product of:
      0.025438042 = sum of:
        0.025438042 = weight(_text_:based in 5035) [ClassicSimilarity], result of:
          0.025438042 = score(doc=5035,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.16644597 = fieldWeight in 5035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5035)
      0.33333334 = coord(1/3)
    
    Abstract
    Information need is a fundamental concept within Information Science. Robert Taylor's seminal contribution in 1968 was to propose a division of information needs into four levels: the visceral, conscious, formalized and compromised levels of information need. Taylor's contribution has provided much inspiration to Information Science research but this has largely remained at the discursive and conceptual level. In this paper, we present a novel empirical investigation of Taylor's information need classification. We analyse the linguistic differences between conscious and formalized needs using several hundred postings to four major Internet discussion groups. We show that descriptions of conscious needs are more emotional in tone, involve more sensory perception and contain different temporal dimensions than descriptions of formalized needs. We show that it is possible to differentiate levels of information need based on linguistic patterns and that the language used to express information needs can reflect an individual's understanding of their information problem. This has implications for the theory of information needs and practical implications for supporting moderators of online news groups in responding to information needs and for developing automated support for classifying information needs.
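  The linguistic differences reported above (emotional tone, sensory perception, temporal framing) lend themselves to simple lexicon-based profiling. The sketch below is only a toy illustration of that kind of feature extraction; the word lists are invented stand-ins, not the categories used in the study.

    import re

    EMOTION = {"worried", "frustrated", "hope", "afraid", "confused"}
    SENSORY = {"see", "saw", "hear", "feel", "looks", "sounds"}
    TEMPORAL = {"yesterday", "soon", "now", "later", "ago"}

    def profile(posting):
        # Share of tokens in a posting that fall into each lexical category.
        tokens = re.findall(r"[a-z']+", posting.lower())
        share = lambda lexicon: sum(t in lexicon for t in tokens) / len(tokens)
        return {"emotional": share(EMOTION), "sensory": share(SENSORY), "temporal": share(TEMPORAL)}

    print(profile("I feel confused and worried; I saw something about this ages ago."))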
  10. Ruthven, I.: Integrating approaches to relevance (2005) 0.01
    0.006783478 = product of:
      0.020350434 = sum of:
        0.020350434 = weight(_text_:based in 638) [ClassicSimilarity], result of:
          0.020350434 = score(doc=638,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.13315678 = fieldWeight in 638, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03125 = fieldNorm(doc=638)
      0.33333334 = coord(1/3)
    
    Abstract
    Relevance is the distinguishing feature of IR research. It is the intricacy of relevance, and its basis in human decision-making, which defines and shapes our research field. Relevance as a concept cuts across the spectrum of information seeking and IR research, from investigations into information seeking behaviours to theoretical models of IR. Given their mutual dependence on relevance, we might predict a strong relationship between information seeking and retrieval in how they regard and discuss the role of relevance within our research programmes. Too often, however, information seeking and IR have continued as independent research traditions: IR research ignoring the extensive, user-based frameworks developed by information seeking research, and information seeking research underestimating the influence of IR systems and interfaces within the information seeking process. When these two disciplines come together we often find the strongest research, research that is motivated by an understanding of what cognitive processes require support during information seeking, and an understanding of how this support might be provided by an IR system. The aim of this chapter is to investigate this common ground of research, in particular to examine the central notion of relevance that underpins both information seeking and IR research. It seeks to investigate how our understanding of relevance as a process of human decision making can, and might, influence our design of interactive IR systems. It does not cover every area of IR research, or each area in the same depth; rather we try to single out the areas where the nature of relevance, and its implications, is driving the research agenda. We start by providing a brief introduction to how relevance has been treated so far in the literature and then consider the key areas where issues of relevance are of current concern. Specifically, the chapter discusses the difficulties of making and interpreting relevance assessments, the role and meaning of differentiated relevance assessments, the specific role of time within information seeking, and the large, complex issue of relevance within evaluations of IR systems. In each area we try to establish where the two fields of IR and information seeking are establishing fruitful collaborations, where there are gaps for prospective collaboration, and the possible difficulties in establishing mutual aims.