Search (23 results, page 1 of 2)

  • author_ss:"Ruthven, I."
  1. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.02
    0.020160122 = product of:
      0.030240182 = sum of:
        0.013336393 = weight(_text_:on in 950) [ClassicSimilarity], result of:
          0.013336393 = score(doc=950,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.121501654 = fieldWeight in 950, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.01690379 = product of:
          0.03380758 = sum of:
            0.03380758 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
              0.03380758 = score(doc=950,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.19345059 = fieldWeight in 950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=950)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
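    This breakdown is a Lucene ClassicSimilarity "explain" tree. A minimal sketch reproducing its first leaf score, assuming Lucene's standard ClassicSimilarity formulas (tf = √freq; idf = 1 + ln(maxDocs/(docFreq+1)); queryWeight = idf · queryNorm; fieldWeight = tf · idf · fieldNorm):

    ```python
    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def leaf_score(freq, doc_freq, max_docs, query_norm, field_norm):
        term_idf = idf(doc_freq, max_docs)         # 2.199415 for _text_:on
        query_weight = term_idf * query_norm       # 0.109763...
        tf = math.sqrt(freq)                       # 1.4142135 for freq=2.0
        field_weight = tf * term_idf * field_norm  # 0.121501...
        return query_weight * field_weight

    score = leaf_score(freq=2.0, doc_freq=13325, max_docs=44218,
                       query_norm=0.04990557, field_norm=0.0390625)
    print(score)  # ≈ 0.013336393, the first leaf in the tree above
    ```

    Summing the leaf scores and applying the coord() factors (the fraction of query clauses matched: 1/2 and 2/3 above) reproduces the 0.020160122 total for this entry.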
    
    Abstract
    Purpose With the shift to an information-based society and to the de-centralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while there have been many proposed definitions, there is no consensus. The goal of this work was to define the concept of "information overload". In order to do so, a concept analysis using Rodgers' approach was performed. Design/methodology/approach A concept analysis using Rodgers' approach, based on a corpus of documents published between 2010 and September 2020, was conducted. One surrogate for "information overload", "cognitive overload", was identified. The corpus of documents consisted of 151 documents for information overload and ten for cognitive overload. All documents were from the fields of computer science and information science, and were retrieved from three databases: Association for Computing Machinery (ACM) Digital Library, SCOPUS and Library and Information Science Abstracts (LISA). Findings The themes identified from the authors' concept analysis allowed them to extract the triggers, manifestations and consequences of information overload. They found triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals and the information environment. In terms of manifestations, they found that information overload manifests itself both emotionally and cognitively. The consequences of information overload were both internal and external. These findings allowed them to provide a definition of information overload. Originality/value Through the authors' concept analysis, they were able to clarify the components of information overload and provide a definition of the concept.
    Date
    22.4.2023 19:27:56
  2. Crestani, F.; Ruthven, I.; Sanderson, M.; Rijsbergen, C.J. van: The troubles with using a logical model of IR on a large collection of documents : experimenting retrieval by logical imaging on TREC (1996) 0.01
    0.012573673 = product of:
      0.03772102 = sum of:
        0.03772102 = weight(_text_:on in 7522) [ClassicSimilarity], result of:
          0.03772102 = score(doc=7522,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3436586 = fieldWeight in 7522, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=7522)
      0.33333334 = coord(1/3)
    
  3. Ruthven, I.; Baillie, M.; Elsweiler, D.: The relative effects of knowledge, interest and confidence in assessing relevance (2007) 0.01
    0.012573673 = product of:
      0.03772102 = sum of:
        0.03772102 = weight(_text_:on in 835) [ClassicSimilarity], result of:
          0.03772102 = score(doc=835,freq=16.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3436586 = fieldWeight in 835, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=835)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to examine how different aspects of an assessor's context, in particular their knowledge of a search topic, their interest in the search topic and their confidence in assessing relevance for a topic, affect the relevance judgements made and the assessor's ability to predict which documents they will assess as being relevant. Design/methodology/approach - The study was conducted as part of the Text REtrieval Conference (TREC) HARD track. Using a specially constructed questionnaire, information was sought on TREC assessors' personal context and, using the TREC assessments gathered, the responses were correlated to the questionnaire questions and the final relevance decisions. Findings - This study found that each of the three factors (interest, knowledge and confidence) had an effect on how many documents were assessed as relevant and the balance between how many documents were marked as marginally or highly relevant. Also these factors are shown to affect an assessor's ability to predict what information they will finally mark as being relevant. Research limitations/implications - The major limitation is that the research is conducted within the TREC initiative. This means that we can report on results but cannot report on discussions with the assessors. The research implications are numerous but mainly concern the effect of personal context on the outcomes of a user study. Practical implications - One major consequence is that we should take more account of how we construct search tasks for IIR evaluation to create tasks that are interesting and relevant to experimental subjects. Originality/value - Examining different search variables within one study to compare the relative effects of these variables on the search outcomes.
  4. Sanderson, M.; Ruthven, I.: Report on the Glasgow IR group (glair4) submission (1997) 0.01
    0.010669115 = product of:
      0.032007344 = sum of:
        0.032007344 = weight(_text_:on in 3088) [ClassicSimilarity], result of:
          0.032007344 = score(doc=3088,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.29160398 = fieldWeight in 3088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.09375 = fieldNorm(doc=3088)
      0.33333334 = coord(1/3)
    
  5. Lalmas, M.; Ruthven, I.: A model for structured document retrieval : empirical investigations (1997) 0.01
    0.010058938 = product of:
      0.030176813 = sum of:
        0.030176813 = weight(_text_:on in 727) [ClassicSimilarity], result of:
          0.030176813 = score(doc=727,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.27492687 = fieldWeight in 727, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=727)
      0.33333334 = coord(1/3)
    
    Abstract
    Documents often display a structure, e.g. several sections, each with several subsections and so on. Taking into account the structure of a document allows the retrieval process to focus on those parts of the document that are most relevant to an information need. In previous work, we developed a model for the representation and the retrieval of structured documents. This paper reports the first experimental study of the effectiveness and applicability of the model
  6. Lalmas, M.; Ruthven, I.: Representing and retrieving structured documents using the Dempster-Shafer theory of evidence : modelling and evaluation (1998) 0.01
    0.008801571 = product of:
      0.026404712 = sum of:
        0.026404712 = weight(_text_:on in 1076) [ClassicSimilarity], result of:
          0.026404712 = score(doc=1076,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24056101 = fieldWeight in 1076, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1076)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports on a theoretical model of structured document indexing and retrieval based on the Dempster-Shafer Theory of Evidence. Includes a description of the model of structured document retrieval, the representation of structured documents, the representation of individual components, how components are combined, details of the combination process, and how relevance is captured within the model. Also presents a detailed account of an implementation of the model, and an evaluation scheme designed to test the effectiveness of the model.
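    The combination step described above relies on Dempster's rule of combination. A minimal illustrative sketch over a two-hypothesis frame (document relevant vs. not relevant); the component names and mass values here are hypothetical, not taken from the paper:

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        # Dempster's rule: multiply masses of intersecting focal elements,
        # then normalise by 1 - K, where K is the mass assigned to conflict.
        combined, conflict = {}, 0.0
        for (b, mb), (c, mc) in product(m1.items(), m2.items()):
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
        return {a: v / (1.0 - conflict) for a, v in combined.items()}

    # Frame of discernment: document relevant (R) or not (N)
    R, N = frozenset({"R"}), frozenset({"N"})
    theta = R | N  # total ignorance

    # Hypothetical evidence from two document components
    m_title = {R: 0.6, theta: 0.4}
    m_body = {R: 0.3, N: 0.2, theta: 0.5}

    print(dempster_combine(m_title, m_body))  # mass on R rises to ≈ 0.68
    ```

    Combining component-level mass functions in this way, rather than summing term weights, is what lets such a model express ignorance (mass on theta) separately from disbelief.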
  7. Ruthven, I.: An information behavior theory of transitions (2022) 0.01
    0.008801571 = product of:
      0.026404712 = sum of:
        0.026404712 = weight(_text_:on in 530) [ClassicSimilarity], result of:
          0.026404712 = score(doc=530,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24056101 = fieldWeight in 530, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=530)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper proposes a theory of life transitions focused on information behavior. Through a process of meta-ethnography, the paper transforms a series of influential theories and models into a theory of transitions for use in Information Science. This paper characterizes the psychological processes involved in transitions as consisting of three main stages, Understanding, Negotiating, and Resolving, each of which have qualitatively different information behaviors and which require different types of information support. The paper discusses the theoretical implications of this theory and proposes ways in which the theory can be used to provide practical support for those undergoing transitions.
    Series
    JASIS&T special issue on information behavior and information practices theory
  8. Balatsoukas, P.; Ruthven, I.: An eye-tracking approach to the analysis of relevance judgments on the Web : the case of Google search engine (2012) 0.01
    0.0076997704 = product of:
      0.02309931 = sum of:
        0.02309931 = weight(_text_:on in 379) [ClassicSimilarity], result of:
          0.02309931 = score(doc=379,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.21044704 = fieldWeight in 379, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=379)
      0.33333334 = coord(1/3)
    
    Abstract
    Eye movement data can provide an in-depth view of human reasoning and the decision-making process, and modern information retrieval (IR) research can benefit from the analysis of this type of data. The aim of this research was to examine the relationship between relevance criteria use and visual behavior in the context of predictive relevance judgments. To address this objective, a multimethod research design was employed that involved observation of participants' eye movements, talk-aloud protocols, and postsearch interviews. Specifically, the results reported in this article came from the analysis of 281 predictive relevance judgments made by 24 participants using the Google search engine. We present a novel stepwise methodological framework for the analysis of relevance judgments and eye movements on the Web and show new patterns of relevance criteria use during predictive relevance judgment. For example, the findings showed an effect of ranking order and surrogate components (Title, Summary, and URL) on the use of relevance criteria. Also, differences were observed in the cognitive effort spent between very relevant and not relevant judgments. We conclude with the implications of this study for IR research.
  9. Ruthven, I.: Relevance behaviour in TREC (2014) 0.01
    0.0076997704 = product of:
      0.02309931 = sum of:
        0.02309931 = weight(_text_:on in 1785) [ClassicSimilarity], result of:
          0.02309931 = score(doc=1785,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.21044704 = fieldWeight in 1785, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1785)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to examine how various types of TREC data can be used to better understand relevance and serve as a test-bed for exploring relevance. The author proposes that there are many interesting studies that can be performed on the TREC data collections that are not directly related to evaluating systems but to learning more about human judgements of information and relevance, and that these studies can provide useful research questions for other types of investigation. Design/methodology/approach - Through several case studies the author shows how existing data from TREC can be used to learn more about the factors that may affect relevance judgements and interactive search decisions and answer new research questions for exploring relevance. Findings - The paper uncovers factors, such as familiarity, interest and strictness of relevance criteria, that affect the nature of relevance assessments within TREC, contrasting these against findings from user studies of relevance. Research limitations/implications - The research only considers certain uses of TREC data and assessments given by professional relevance assessors, but motivates further exploration of the TREC data so that the research community can further exploit the effort involved in the construction of TREC test collections. Originality/value - The paper presents an original viewpoint on relevance investigations and TREC itself by motivating TREC as a source of inspiration for understanding relevance rather than purely as a source of evaluation material.
  10. Ruthven, I.; Buchanan, S.; Jardine, C.: Relationships, environment, health and development : the information needs expressed online by young first-time mothers (2018) 0.01
    0.0075442037 = product of:
      0.02263261 = sum of:
        0.02263261 = weight(_text_:on in 4369) [ClassicSimilarity], result of:
          0.02263261 = score(doc=4369,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 4369, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=4369)
      0.33333334 = coord(1/3)
    
    Abstract
    This study investigates the information needs of young first-time mothers through a qualitative content analysis of 266 selected posts to a major online discussion group. Our analysis reveals three main categories of need: needs around how to create a positive environment for a child, needs around a mother's relationships and well-being, and needs around child development and health. We demonstrate the similarities of this scheme to needs uncovered in other studies and how our classification of needs is more comprehensive than those in previous studies. A critical distinction in our results is between two types of need presentation, distinguishing between situational and informational needs. Situational needs are narrative descriptions of problematic situations whereas informational needs are need statements with a clear request. Distinguishing between these two types of needs sheds new light on how information needs develop. We conclude with a discussion on the implications of our results for young mothers and information providers.
  11. White, R.W.; Ruthven, I.: A study of interface support mechanisms for interactive information retrieval (2006) 0.01
    0.0062868367 = product of:
      0.01886051 = sum of:
        0.01886051 = weight(_text_:on in 5064) [ClassicSimilarity], result of:
          0.01886051 = score(doc=5064,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 5064, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5064)
      0.33333334 = coord(1/3)
    
    Abstract
    Advances in search technology have meant that search systems can now offer assistance to users beyond simply retrieving a set of documents. For example, search systems are now capable of inferring user interests by observing their interaction, offering suggestions about what terms could be used in a query, or reorganizing search results to make exploration of retrieved material more effective. When providing new search functionality, system designers must decide how the new functionality should be offered to users. One major choice is between (a) offering automatic features that require little human input, but give little human control; or (b) interactive features which allow human control over how the feature is used, but often give little guidance over how the feature should be best used. This article presents a study in which we empirically investigate the issue of control by presenting an experiment in which participants were asked to interact with three experimental systems that vary the degree of control they had in creating queries, indicating which results are relevant, and making search decisions. We use our findings to discuss why and how the control users want over search decisions can vary depending on the nature of the decisions and the impact of those decisions on the user's search.
  12. Ruthven, I.; Baillie, M.; Azzopardi, L.; Bierig, R.; Nicol, E.; Sweeney, S.; Yaciki, M.: Contextual factors affecting the utility of surrogates within exploratory search (2008) 0.01
    0.00622365 = product of:
      0.01867095 = sum of:
        0.01867095 = weight(_text_:on in 2042) [ClassicSimilarity], result of:
          0.01867095 = score(doc=2042,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 2042, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2042)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper we investigate how information surrogates might be useful in exploratory search and what information it is useful for a surrogate to contain. By comparing assessments based on artificially created information surrogates, we investigate the effect of the source of information, the quality of an information source and the date of information upon the assessment process. We also investigate how varying levels of topical knowledge, assessor confidence and prior expectation affect the assessment of information surrogates. We show that both types of contextual information affect how the information surrogates are judged and what actions are performed as a result of the surrogates.
  13. Tinto, F.; Ruthven, I.: Sharing "happy" information (2016) 0.01
    0.00622365 = product of:
      0.01867095 = sum of:
        0.01867095 = weight(_text_:on in 3104) [ClassicSimilarity], result of:
          0.01867095 = score(doc=3104,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 3104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3104)
      0.33333334 = coord(1/3)
    
    Abstract
    This study focuses on the sharing of "happy" information: information that creates a sense of happiness within the individual sharing the information. We explore the range of factors motivating and impacting individuals' happy information-sharing behavior within a casual leisure context through 30 semistructured interviews. The findings reveal that the factors influencing individuals' happy information-sharing behavior are numerous, and impact each other. Most individuals considered sharing happy information important to their friendships and relationships. In various contexts the act of sharing happy information was shown to enhance the sharer's happiness.
  14. Ruthven, I.: Resonance and the experience of relevance (2021) 0.01
    0.00622365 = product of:
      0.01867095 = sum of:
        0.01867095 = weight(_text_:on in 211) [ClassicSimilarity], result of:
          0.01867095 = score(doc=211,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=211)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, I propose the concept of resonance as a useful one for describing what it means to experience relevance. Based on an extensive interdisciplinary review, I provide a novel framework that presents resonance as a spectrum of experience with a multitude of outcomes ranging from a sense of harmony and coherence to life transformation. I argue that resonance has different properties to the more traditional interpretation of relevance and provides a better system of explanation of what it means to experience relevance. I show how traditional approaches to relevance and resonance work in a complementary fashion and outline how resonance may present distinct new lines of research into relevance theory.
  15. Ruthven, I.; Lalmas, M.; Rijsbergen, K. van: Combining and selecting characteristics of information use (2002) 0.01
    0.006159817 = product of:
      0.01847945 = sum of:
        0.01847945 = weight(_text_:on in 5208) [ClassicSimilarity], result of:
          0.01847945 = score(doc=5208,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.16835764 = fieldWeight in 5208, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=5208)
      0.33333334 = coord(1/3)
    
    Abstract
    Ruthven, Lalmas, and van Rijsbergen use traditional term importance measures like inverse document frequency, noise, based upon in-document frequency, and term frequency supplemented by theme value which is calculated from differences of expected positions of words in a text from their actual positions, on the assumption that even distribution indicates term association with a main topic, and context, which is based on a query term's distance from the nearest other query term relative to the average expected distribution of all query terms in the document. They then define document characteristics like specificity, the sum of all idf values in a document over the total terms in the document, or document complexity, measured by the document's average idf value; and information to noise ratio, info-noise, tokens after stopping and stemming over tokens before these processes, measuring the ratio of useful and non-useful information in a document. Retrieval tests are then carried out using each characteristic, combinations of the characteristics, and relevance feedback to determine the correct combination of characteristics. A file ranks independently of query terms by both specificity and info-noise, but if presence of a query term is required unique rankings are generated. Tested on five standard collections, the traditional characteristics outperformed the new characteristics, which did, however, outperform random retrieval. All possible combinations of characteristics were also tested both with and without a set of scaling weights applied. All characteristics can benefit by combination with another characteristic or set of characteristics and performance as a single characteristic is a good indicator of performance in combination. Larger combinations tended to be more effective than smaller ones and weighting increased precision measures of middle ranking combinations but decreased the ranking of poorer combinations.
    The best combinations vary for each collection, and in some collections with the addition of weighting. Finally, with all documents ranked by the all-characteristics combination, they take the top 30 documents and calculate the characteristic scores for each term in both the relevant and the non-relevant sets. Then, taking for each query term the characteristics whose average was higher for relevant than non-relevant documents, the documents are re-ranked. The relevance feedback method of selecting characteristics can select a good set of characteristics for query terms.
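    The document characteristics described above can be sketched as follows; the idf variant, token counts and collection statistics here are illustrative assumptions, not the authors' exact formulation:

    ```python
    import math

    def specificity(doc_terms, doc_freqs, num_docs):
        # sum of the idf values of a document's term occurrences
        # over the total number of terms in the document
        total_idf = sum(math.log(num_docs / doc_freqs[t]) for t in doc_terms)
        return total_idf / len(doc_terms)

    def info_noise(tokens_before, tokens_after):
        # ratio of tokens remaining after stopping and stemming
        # to tokens before those processes
        return tokens_after / tokens_before

    # Toy stemmed document and collection statistics
    doc = ["retriev", "model", "retriev", "evalu"]
    freqs = {"retriev": 50, "model": 200, "evalu": 120}

    print(specificity(doc, freqs, num_docs=1000))  # ≈ 2.43
    print(info_noise(tokens_before=9, tokens_after=4))  # ≈ 0.44
    ```

    Both measures are query-independent, which is why, as noted above, they rank a file independently of query terms until term presence is required.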
  16. Borlund, P.; Ruthven, I.: Introduction to the special issue on evaluating interactive information retrieval systems (2008) 0.01
    0.006159817 = product of:
      0.01847945 = sum of:
        0.01847945 = weight(_text_:on in 2019) [ClassicSimilarity], result of:
          0.01847945 = score(doc=2019,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.16835764 = fieldWeight in 2019, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=2019)
      0.33333334 = coord(1/3)
    
    Abstract
    Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study. In spite of the complicated nature of user-centred evaluations, this form of evaluation is necessary to understand the effectiveness of individual IR systems and user search interactions. The growing incorporation of users into the evaluation process reflects the changing nature of IR within society; for example, more and more people have access to IR systems through Internet search engines but have little training or guidance in how to use these systems effectively. Similarly, new types of search system and new interactive IR facilities are becoming available to wide groups of end-users. In this special topic issue we present papers that tackle the methodological issues of evaluating interactive search systems. Methodologies can be presented at different levels; the papers by Blandford et al. and Petrelli present whole methodological approaches for evaluating interactive systems whereas those by Göker and Myrhaug and by López Ostenero et al. consider what makes an appropriate evaluation methodological approach for specific retrieval situations.
    Any methodology must consider the nature of the methodological components, the instruments and processes by which we evaluate our systems. A number of papers have examined these issues in detail: Käki and Aula focus on specific methodological issues for the evaluation of Web search interfaces, Lopatovska and Mokros present alternate measures of retrieval success, Tenopir et al. examine the affective and cognitive verbalisations that occur within user studies and Kelly et al. analyse questionnaires, one of the basic tools for evaluations. The range of topics in this special issue as a whole nicely illustrates the variety and complexity by which user-centred evaluation of IR systems is undertaken.
  17. Elsweiler, D.; Ruthven, I.; Jones, C.: Towards memory supporting personal information management tools (2007) 0.01
    0.0053345575 = product of:
      0.016003672 = sum of:
        0.016003672 = weight(_text_:on in 5057) [ClassicSimilarity], result of:
          0.016003672 = score(doc=5057,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 5057, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5057)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, the authors discuss reretrieving personal information objects and relate the task to recovering from lapse(s) in memory. They propose that memory lapses impede users from successfully refinding the information they need. Their hypothesis is that by learning more about memory lapses in noncomputing contexts and about how people cope and recover from these lapses, we can better inform the design of personal information management (PIM) tools and improve the user's ability to reaccess and reuse objects. They describe a diary study that investigates the everyday memory problems of 25 people from a wide range of backgrounds. Based on the findings, they present a series of principles that they hypothesize will improve the design of PIM tools. This hypothesis is validated by an evaluation of a tool for managing personal photographs, which was designed with respect to the authors' findings. The evaluation suggests that users' performance when refinding objects can be improved by building personal information management tools to support characteristics of human memory.
  18. White, R.W.; Jose, J.M.; Ruthven, I.: An implicit feedback approach for interactive information retrieval (2006) 0.01
    
    Abstract
    Searchers can face problems finding the information they seek. One reason for this is that they may have difficulty devising queries to express their information needs. In this article, we describe an approach that uses unobtrusive monitoring of interaction to proactively support searchers. The approach chooses terms to better represent information needs by monitoring searcher interaction with different representations of top-ranked documents. Information needs are dynamic and can change as a searcher views information. The approach we propose gathers evidence on potential changes in these needs and uses this evidence to choose new retrieval strategies. We present an evaluation of how well our technique estimates information needs, how well it estimates changes in these needs and the appropriateness of the interface support it offers. The results are presented and the avenues for future research identified.
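The term-selection step this abstract describes can be sketched in code. The sketch below is a minimal illustration, not the paper's actual method: the scoring heuristic (term frequency weighted by the fraction of viewed representations containing the term), the stopword list, and the function name are all assumptions introduced here.

```python
from collections import Counter

# Minimal stopword list, for illustration only.
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "on", "for", "from"}

def select_expansion_terms(viewed_texts, query, k=5):
    """Choose new query terms from the document representations
    (titles, summaries, sentences) a searcher chose to view.

    Terms are scored by total frequency weighted by the fraction of
    viewed representations containing them, so terms that recur
    across several viewed items rank highest."""
    query_terms = set(query.lower().split())
    tf = Counter()   # total occurrences across all viewed texts
    df = Counter()   # number of viewed texts containing the term
    for text in viewed_texts:
        tokens = [t for t in text.lower().split()
                  if t.isalpha() and t not in STOPWORDS]
        tf.update(tokens)
        df.update(set(tokens))
    n = max(len(viewed_texts), 1)
    scores = {t: tf[t] * df[t] / n for t in tf if t not in query_terms}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]
```

Re-running the selection as the searcher views further representations lets the expanded query track an information need that changes during the search, which is the dynamic aspect the abstract emphasises.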
  19. Oduntan, O.; Ruthven, I.: People and places : bridging the information gaps in refugee integration (2021) 0.01
    
    Abstract
    This article discusses the sources of information used by refugees as they navigate integration systems and processes. The study used interviews to examine how refugees and asylum seekers dealt with their information needs, finding that information gaps were bridged through people and places. People included friends, solicitors, and caseworkers, whereas places included service providers, detention centers, and refugee camps. The information needs matrix was used as an analytical tool to examine the operation of sources on refuge-seekers' integration journeys. Our findings expand understandings of information sources and information grounds. The matrix can be used to enhance host societies' capacity to make appropriate information available and to provide evidence for the implementation of the information needs matrix.
  20. White, R.W.; Jose, J.M.; Ruthven, I.: A task-oriented study on the influencing effects of query-biased summarisation in web searching (2003) 0.00