Search (23 results, page 1 of 2)

  • author_ss:"Ruthven, I."
  1. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.02
    0.021697324 = product of:
      0.05062709 = sum of:
        0.0153751075 = product of:
          0.030750215 = sum of:
            0.030750215 = weight(_text_:science in 950) [ClassicSimilarity], result of:
              0.030750215 = score(doc=950,freq=8.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.2910318 = fieldWeight in 950, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=950)
          0.5 = coord(1/2)
        0.02166549 = weight(_text_:library in 950) [ClassicSimilarity], result of:
          0.02166549 = score(doc=950,freq=4.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.2054202 = fieldWeight in 950, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.0390625 = fieldNorm(doc=950)
        0.013586491 = product of:
          0.027172983 = sum of:
            0.027172983 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
              0.027172983 = score(doc=950,freq=2.0), product of:
                0.14046472 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04011181 = queryNorm
                0.19345059 = fieldWeight in 950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=950)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
    
    Abstract
    Purpose: With the shift to an information-based society and the decentralisation of information, information overload has attracted growing interest in the computer and information science research communities. However, there is no clear understanding of the meaning of the term, and while many definitions have been proposed, there is no consensus. The goal of this work was to define the concept of "information overload".
    Design/methodology/approach: A concept analysis using Rodgers' approach was conducted on a corpus of documents published between 2010 and September 2020. One surrogate for "information overload", namely "cognitive overload", was identified. The corpus consisted of 151 documents for information overload and ten for cognitive overload, all from the fields of computer science and information science and retrieved from three databases: the Association for Computing Machinery (ACM) Digital Library, SCOPUS and Library and Information Science Abstracts (LISA).
    Findings: The themes identified in the concept analysis allowed the authors to extract the triggers, manifestations and consequences of information overload. Triggers related to information characteristics, information need, the working environment, the cognitive abilities of individuals and the information environment. Information overload manifests itself both emotionally and cognitively, and its consequences are both internal and external. These findings allowed the authors to provide a definition of information overload.
    Originality/value: Through the concept analysis, the authors clarify the components of information overload and provide a definition of the concept.
    Date
    22. 4.2023 19:27:56
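    The per-result numbers above are Lucene "explain" output for its classic TF-IDF similarity (the [ClassicSimilarity] blocks): each matching query term contributes queryWeight × fieldWeight, branches wrapped in coord(1/2) are halved, and the final coord(3/7) reflects that three of seven query clauses matched this record. As a rough sketch only (our own illustration of the standard ClassicSimilarity formulas, not the catalogue's code), the 0.02 score of this first result can be reproduced from the factors listed in its explanation:

      import math

      def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          # tf, idf, queryWeight and fieldWeight as printed in the explanation
          tf = math.sqrt(freq)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))
          return (idf * query_norm) * (tf * idf * field_norm)

      QUERY_NORM, FIELD_NORM = 0.04011181, 0.0390625   # values from the explanation

      science = 0.5 * term_weight(8.0, 8627, 44218, QUERY_NORM, FIELD_NORM)  # coord(1/2)
      library = term_weight(4.0, 8668, 44218, QUERY_NORM, FIELD_NORM)
      term_22 = 0.5 * term_weight(2.0, 3622, 44218, QUERY_NORM, FIELD_NORM)  # coord(1/2)

      score = (science + library + term_22) * 3.0 / 7.0   # coord(3/7): 3 of 7 clauses matched
      print(round(score, 9))                              # ~0.021697324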
  2. Ruthven, I.; Baillie, M.; Azzopardi, L.; Bierig, R.; Nicol, E.; Sweeney, S.; Yaciki, M.: Contextual factors affecting the utility of surrogates within exploratory search (2008) 0.01
    0.013855011 = product of:
      0.048492536 = sum of:
        0.02929879 = weight(_text_:systems in 2042) [ClassicSimilarity], result of:
          0.02929879 = score(doc=2042,freq=2.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.23767869 = fieldWeight in 2042, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2042)
        0.019193748 = product of:
          0.038387496 = sum of:
            0.038387496 = weight(_text_:29 in 2042) [ClassicSimilarity], result of:
              0.038387496 = score(doc=2042,freq=2.0), product of:
                0.14110081 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04011181 = queryNorm
                0.27205724 = fieldWeight in 2042, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2042)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    29. 7.2008 12:28:27
    Footnote
    Contribution to a special topic section "Evaluating exploratory search systems"
  3. Ruthven, I.; Lalmas, M.: Selective relevance feedback using term characteristics (1999) 0.01
    0.013147068 = product of:
      0.046014737 = sum of:
        0.0153751075 = product of:
          0.030750215 = sum of:
            0.030750215 = weight(_text_:science in 3824) [ClassicSimilarity], result of:
              0.030750215 = score(doc=3824,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.2910318 = fieldWeight in 3824, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3824)
          0.5 = coord(1/2)
        0.03063963 = weight(_text_:library in 3824) [ClassicSimilarity], result of:
          0.03063963 = score(doc=3824,freq=2.0), product of:
            0.10546913 = queryWeight, product of:
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.04011181 = queryNorm
            0.29050803 = fieldWeight in 3824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6293786 = idf(docFreq=8668, maxDocs=44218)
              0.078125 = fieldNorm(doc=3824)
      0.2857143 = coord(2/7)
    
    Source
    Vocabulary as a central concept in digital libraries: interdisciplinary concepts, challenges, and opportunities : proceedings of the Third International Conference on Conceptions of Library and Information Science (COLIS3), Dubrovnik, Croatia, 23-26 May 1999. Ed. by T. Aparac et al.
  4. White, R.W.; Ruthven, I.: A study of interface support mechanisms for interactive information retrieval (2006) 0.01
    0.012552974 = product of:
      0.043935407 = sum of:
        0.036247853 = weight(_text_:systems in 5064) [ClassicSimilarity], result of:
          0.036247853 = score(doc=5064,freq=6.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.29405114 = fieldWeight in 5064, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5064)
        0.0076875538 = product of:
          0.0153751075 = sum of:
            0.0153751075 = weight(_text_:science in 5064) [ClassicSimilarity], result of:
              0.0153751075 = score(doc=5064,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.1455159 = fieldWeight in 5064, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5064)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Advances in search technology have meant that search systems can now offer assistance to users beyond simply retrieving a set of documents. For example, search systems are now capable of inferring user interests by observing their interaction, offering suggestions about what terms could be used in a query, or reorganizing search results to make exploration of retrieved material more effective. When providing new search functionality, system designers must decide how the new functionality should be offered to users. One major choice is between (a) offering automatic features that require little human input but give little human control, or (b) interactive features that allow human control over how the feature is used but often give little guidance on how it is best used. This article presents an empirical investigation of the issue of control: in an experiment, participants were asked to interact with three experimental systems that varied the degree of control they had in creating queries, indicating which results are relevant, and making search decisions. We use our findings to discuss why and how the control users want over search decisions can vary depending on the nature of the decisions and the impact of those decisions on the user's search.
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.7, S.933-948
  5. Tombros, A.; Ruthven, I.; Jose, J.M.: How users assess Web pages for information seeking (2005) 0.01
    0.009810947 = product of:
      0.034338314 = sum of:
        0.02511325 = weight(_text_:systems in 5255) [ClassicSimilarity], result of:
          0.02511325 = score(doc=5255,freq=2.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.2037246 = fieldWeight in 5255, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=5255)
        0.009225064 = product of:
          0.018450128 = sum of:
            0.018450128 = weight(_text_:science in 5255) [ClassicSimilarity], result of:
              0.018450128 = score(doc=5255,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17461908 = fieldWeight in 5255, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5255)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    In this article, we investigate the criteria used by online searchers when assessing the relevance of Web pages for information-seeking tasks. Twenty-four participants were given three tasks each, and they indicated the features of Web pages that they used when deciding about the usefulness of the pages in relation to the tasks. These tasks were presented within the context of a simulated work-task situation. We investigated the relative utility of the features identified by participants (Web page content, structure, and quality) and how the importance of these features is affected by the type of information-seeking task performed and the stage of the search. The results of this study provide a set of criteria used by searchers to decide about the utility of Web pages for different types of tasks. Such criteria can have implications for the design of systems that use or recommend Web pages.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.4, S.327-344
  6. Oduntan, O.; Ruthven, I.: People and places : bridging the information gaps in refugee integration (2021) 0.01
    0.009810947 = product of:
      0.034338314 = sum of:
        0.02511325 = weight(_text_:systems in 66) [ClassicSimilarity], result of:
          0.02511325 = score(doc=66,freq=2.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.2037246 = fieldWeight in 66, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=66)
        0.009225064 = product of:
          0.018450128 = sum of:
            0.018450128 = weight(_text_:science in 66) [ClassicSimilarity], result of:
              0.018450128 = score(doc=66,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17461908 = fieldWeight in 66, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=66)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    This article discusses the sources of information used by refugees as they navigate integration systems and processes. The study used interviews to examine how refugees and asylum seekers dealt with their information needs, finding that information gaps were bridged through people and places. People included friends, solicitors, and caseworkers, whereas places included service providers, detention centers, and refugee camps. The information needs matrix was used as an analytical tool to examine the operation of sources on refuge-seekers' integration journeys. Our findings expand understandings of information sources and information grounds. The matrix can be used to enhance host societies' capacity to make appropriate information available and to provide evidence for the implementation of the information needs matrix.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.1, S.83-96
  7. Ruthven, I.; Buchanan, S.; Jardine, C.: Relationships, environment, health and development : the information needs expressed online by young first-time mothers (2018) 0.01
    0.007336243 = product of:
      0.0513537 = sum of:
        0.0513537 = sum of:
          0.018450128 = weight(_text_:science in 4369) [ClassicSimilarity], result of:
            0.018450128 = score(doc=4369,freq=2.0), product of:
              0.10565929 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.04011181 = queryNorm
              0.17461908 = fieldWeight in 4369, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.046875 = fieldNorm(doc=4369)
          0.03290357 = weight(_text_:29 in 4369) [ClassicSimilarity], result of:
            0.03290357 = score(doc=4369,freq=2.0), product of:
              0.14110081 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.04011181 = queryNorm
              0.23319192 = fieldWeight in 4369, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=4369)
      0.14285715 = coord(1/7)
    
    Date
    29. 7.2018 9:47:05
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.8, S.985-995
  8. Ruthven, I.; Buchanan, S.; Jardine, C.: Isolated, overwhelmed, and worried : young first-time mothers asking for information and support online (2018) 0.01
    0.007336243 = product of:
      0.0513537 = sum of:
        0.0513537 = sum of:
          0.018450128 = weight(_text_:science in 4455) [ClassicSimilarity], result of:
            0.018450128 = score(doc=4455,freq=2.0), product of:
              0.10565929 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.04011181 = queryNorm
              0.17461908 = fieldWeight in 4455, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.046875 = fieldNorm(doc=4455)
          0.03290357 = weight(_text_:29 in 4455) [ClassicSimilarity], result of:
            0.03290357 = score(doc=4455,freq=2.0), product of:
              0.14110081 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.04011181 = queryNorm
              0.23319192 = fieldWeight in 4455, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=4455)
      0.14285715 = coord(1/7)
    
    Date
    29. 9.2018 11:25:14
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.9, S.1073-1083
  9. Borlund, P.; Ruthven, I.: Introduction to the special issue on evaluating interactive information retrieval systems (2008) 0.01
    0.0071752146 = product of:
      0.0502265 = sum of:
        0.0502265 = weight(_text_:systems in 2019) [ClassicSimilarity], result of:
          0.0502265 = score(doc=2019,freq=18.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.4074492 = fieldWeight in 2019, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=2019)
      0.14285715 = coord(1/7)
    
    Abstract
    Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study. In spite of the complicated nature of user-centred evaluations, this form of evaluation is necessary to understand the effectiveness of individual IR systems and user search interactions. The growing incorporation of users into the evaluation process reflects the changing nature of IR within society; for example, more and more people have access to IR systems through Internet search engines but have little training or guidance in how to use these systems effectively. Similarly, new types of search system and new interactive IR facilities are becoming available to wide groups of end-users.

    In this special topic issue we present papers that tackle the methodological issues of evaluating interactive search systems. Methodologies can be presented at different levels; the papers by Blandford et al. and Petrelli present whole methodological approaches for evaluating interactive systems, whereas those by Göker and Myrhaug and by López Ostenero et al. consider what makes an appropriate evaluation methodology for specific retrieval situations. Any methodology must consider the nature of the methodological components, the instruments and processes by which we evaluate our systems. A number of papers have examined these issues in detail: Käki and Aula focus on specific methodological issues for the evaluation of Web search interfaces, Lopatovska and Mokros present alternate measures of retrieval success, Tenopir et al. examine the affective and cognitive verbalisations that occur within user studies, and Kelly et al. analyse questionnaires, one of the basic tools for evaluations. The range of topics in this special issue as a whole nicely illustrates the variety and complexity by which user-centred evaluation of IR systems is undertaken.
    Footnote
    Introduction to a special topic section: Evaluation of Interactive Information Retrieval Systems
  10. Baillie, M.; Azzopardi, L.; Ruthven, I.: Evaluating epistemic uncertainty under incomplete assessments (2008) 0.01
    0.0050736424 = product of:
      0.035515495 = sum of:
        0.035515495 = weight(_text_:systems in 2065) [ClassicSimilarity], result of:
          0.035515495 = score(doc=2065,freq=4.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.28811008 = fieldWeight in 2065, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=2065)
      0.14285715 = coord(1/7)
    
    Abstract
    This study proposes an extended methodology for laboratory-based Information Retrieval evaluation under incomplete relevance assessments. This new methodology aims to identify potential uncertainty during system comparison that may result from incompleteness. The adoption of this methodology is advantageous, because the detection of epistemic uncertainty - the amount of knowledge (or ignorance) we have about the estimate of a system's performance - during the evaluation process can guide and direct researchers when evaluating new systems over existing and future test collections. Across a series of experiments we demonstrate how this methodology can lead towards a finer-grained analysis of systems. In particular, we show through experimentation how the current practice in Information Retrieval evaluation of using a measurement depth larger than the pooling depth increases uncertainty during system comparison.
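    A toy sketch (our own illustration under simple assumptions, not the methodology proposed in the paper) of why incomplete assessments create epistemic uncertainty: any unjudged document might or might not be relevant, so a cutoff-based score such as precision at n is only known to lie inside an interval, and a wide interval means the comparison between systems is uncertain.

      def precision_bounds(judged_ranking, n):
          # judged_ranking: True (relevant), False (non-relevant) or None (unjudged)
          # for a system's top-n documents
          top = judged_ranking[:n]
          lower = sum(j is True for j in top) / n        # unjudged counted as non-relevant
          upper = sum(j is not False for j in top) / n   # unjudged counted as relevant
          return lower, upper

      # judgements exist only for documents that made the assessment pool
      run = [True, True, False, None, True, None, None, False, None, None]
      print(precision_bounds(run, 10))                   # (0.3, 0.8)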
  11. Ruthven, I.: Integrating approaches to relevance (2005) 0.00
    0.0041426118 = product of:
      0.028998282 = sum of:
        0.028998282 = weight(_text_:systems in 638) [ClassicSimilarity], result of:
          0.028998282 = score(doc=638,freq=6.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.2352409 = fieldWeight in 638, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=638)
      0.14285715 = coord(1/7)
    
    Abstract
    Relevance is the distinguishing feature of IR research. It is the intricacy of relevance, and its basis in human decision-making, which defines and shapes our research field. Relevance as a concept cuts across the spectrum of information seeking and IR research, from investigations into information seeking behaviours to theoretical models of IR. Given their mutual dependence on relevance we might predict a strong relationship between information seeking and retrieval in how they regard and discuss the role of relevance within our research programmes. Too often, however, information seeking and IR have continued as independent research traditions: IR research ignoring the extensive, user-based frameworks developed by information seeking, and information seeking underestimating the influence of IR systems and interfaces within the information seeking process. When these two disciplines come together we often find the strongest research, research that is motivated by an understanding of what cognitive processes require support during information seeking, and an understanding of how this support might be provided by an IR system.

    The aim of this chapter is to investigate this common ground of research, in particular to examine the central notion of relevance that underpins both information seeking and IR research. It seeks to investigate how our understanding of relevance as a process of human decision making can, and might, influence our design of interactive IR systems. It does not cover every area of IR research, or each area in the same depth; rather we try to single out the areas where the nature of relevance, and its implications, is driving the research agenda. We start by providing a brief introduction to how relevance has been treated so far in the literature and then consider the key areas where issues of relevance are of current concern. Specifically, the chapter discusses the difficulties of making and interpreting relevance assessments, the role and meaning of differentiated relevance assessments, the specific role of time within information seeking, and the large, complex issue of relevance within evaluations of IR systems. In each area we try to establish where the two fields of IR and information seeking are establishing fruitful collaborations, where there is a gap for prospective collaboration and the possible difficulties in establishing mutual aims.
  12. White, R.W.; Jose, J.M.; Ruthven, I.: A task-oriented study on the influencing effects of query-biased summarisation in web searching (2003) 0.00
    0.0029896726 = product of:
      0.020927707 = sum of:
        0.020927707 = weight(_text_:systems in 1081) [ClassicSimilarity], result of:
          0.020927707 = score(doc=1081,freq=2.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.1697705 = fieldWeight in 1081, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1081)
      0.14285715 = coord(1/7)
    
    Abstract
    The aim of the work described in this paper is to evaluate the influencing effects of query-biased summaries in web searching. For this purpose, a summarisation system has been developed, and a summary tailored to the user's query is generated automatically for each document retrieved. The system aims to provide a better means of assessing document relevance than the titles or abstracts typical of many web search result lists. By visiting each result page at retrieval time, the system provides the user with an idea of the current page content and thus deals with the dynamic nature of the web. To examine the effectiveness of this approach, a task-oriented, comparative evaluation between four different web retrieval systems was performed: two that use query-biased summarisation, and two that use the standard ranked titles/abstracts approach. The results from the evaluation indicate that query-biased summarisation techniques appear to be more useful and effective in helping users gauge document relevance than the traditional ranked titles/abstracts approach. The same methodology was used to compare the effectiveness of two of the web's major search engines: AltaVista and Google.
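    As a generic illustration of the query-biased summarisation idea (a minimal sketch of the general technique, not the system evaluated in this paper), a result page's sentences can be ranked by their overlap with the query and the best few shown in place of a static title/abstract surrogate:

      import re

      def query_biased_summary(page_text, query, max_sentences=2):
          # rank sentences by the number of query terms they contain
          query_terms = set(re.findall(r"\w+", query.lower()))
          sentences = re.split(r"(?<=[.!?])\s+", page_text.strip())
          def overlap(sentence):
              return len(set(re.findall(r"\w+", sentence.lower())) & query_terms)
          return " ".join(sorted(sentences, key=overlap, reverse=True)[:max_sentences])

      page = ("Search engines usually show static titles and abstracts. "
              "A query-biased summary instead shows the sentences of the page "
              "that best match the user's query. This helps users judge relevance.")
      print(query_biased_summary(page, "query biased summary"))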
  13. Ruthven, I.: Relevance behaviour in TREC (2014) 0.00
    0.0029896726 = product of:
      0.020927707 = sum of:
        0.020927707 = weight(_text_:systems in 1785) [ClassicSimilarity], result of:
          0.020927707 = score(doc=1785,freq=2.0), product of:
            0.12327058 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04011181 = queryNorm
            0.1697705 = fieldWeight in 1785, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1785)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - The purpose of this paper is to examine how various types of TREC data can be used to better understand relevance and serve as a test bed for exploring relevance. The author proposes that there are many interesting studies that can be performed on the TREC data collections that are not directly related to evaluating systems but rather to learning more about human judgements of information and relevance, and that these studies can provide useful research questions for other types of investigation.
    Design/methodology/approach - Through several case studies the author shows how existing data from TREC can be used to learn more about the factors that may affect relevance judgements and interactive search decisions and to answer new research questions for exploring relevance.
    Findings - The paper uncovers factors, such as familiarity, interest and strictness of relevance criteria, that affect the nature of relevance assessments within TREC, contrasting these against findings from user studies of relevance.
    Research limitations/implications - The research only considers certain uses of TREC data and assessments given by professional relevance assessors, but it motivates further exploration of the TREC data so that the research community can further exploit the effort involved in the construction of TREC test collections.
    Originality/value - The paper presents an original viewpoint on relevance investigations and TREC itself by motivating TREC as a source of inspiration on understanding relevance rather than purely as a source of evaluation material.
  14. Ruthven, I.: An information behavior theory of transitions (2022) 0.00
    0.0021743686 = product of:
      0.01522058 = sum of:
        0.01522058 = product of:
          0.03044116 = sum of:
            0.03044116 = weight(_text_:science in 530) [ClassicSimilarity], result of:
              0.03044116 = score(doc=530,freq=4.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.2881068 = fieldWeight in 530, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=530)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper proposes a theory of life transitions focused on information behavior. Through a process of meta-ethnography, the paper transforms a series of influential theories and models into a theory of transitions for use in Information Science. This paper characterizes the psychological processes involved in transitions as consisting of three main stages, Understanding, Negotiating, and Resolving, each of which has qualitatively different information behaviors and requires different types of information support. The paper discusses the theoretical implications of this theory and proposes ways in which the theory can be used to provide practical support for those undergoing transitions.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.4, S.579-593
  15. Ruthven, I.: The language of information need : differentiating conscious and formalized information needs (2019) 0.00
    0.0015531204 = product of:
      0.0108718425 = sum of:
        0.0108718425 = product of:
          0.021743685 = sum of:
            0.021743685 = weight(_text_:science in 5035) [ClassicSimilarity], result of:
              0.021743685 = score(doc=5035,freq=4.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.20579056 = fieldWeight in 5035, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5035)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Information need is a fundamental concept within Information Science. Robert Taylor's seminal contribution in 1968 was to propose a division of information needs into four levels: the visceral, conscious, formalized and compromised levels of information need. Taylor's contribution has provided much inspiration to Information Science research but this has largely remained at the discursive and conceptual level. In this paper, we present a novel empirical investigation of Taylor's information need classification. We analyse the linguistic differences between conscious and formalized needs using several hundred postings to four major Internet discussion groups. We show that descriptions of conscious needs are more emotional in tone, involve more sensory perception and contain different temporal dimensions than descriptions of formalized needs. We show that it is possible to differentiate levels of information need based on linguistic patterns and that the language used to express information needs can reflect an individual's understanding of their information problem. This has implications for the theory of information needs and practical implications for supporting moderators of online news groups in responding to information needs and for developing automated support for classifying information needs.
  16. Tinto, F.; Ruthven, I.: Sharing "happy" information (2016) 0.00
    0.0015375108 = product of:
      0.010762575 = sum of:
        0.010762575 = product of:
          0.02152515 = sum of:
            0.02152515 = weight(_text_:science in 3104) [ClassicSimilarity], result of:
              0.02152515 = score(doc=3104,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.20372227 = fieldWeight in 3104, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3104)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.10, S.2329-2343
  17. Ruthven, I.: Resonance and the experience of relevance (2021) 0.00
    0.0015375108 = product of:
      0.010762575 = sum of:
        0.010762575 = product of:
          0.02152515 = sum of:
            0.02152515 = weight(_text_:science in 211) [ClassicSimilarity], result of:
              0.02152515 = score(doc=211,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.20372227 = fieldWeight in 211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=211)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.5, S.554-569
  18. Elsweiler, D.; Ruthven, I.; Jones, C.: Towards memory supporting personal information management tools (2007) 0.00
    0.0013178664 = product of:
      0.009225064 = sum of:
        0.009225064 = product of:
          0.018450128 = sum of:
            0.018450128 = weight(_text_:science in 5057) [ClassicSimilarity], result of:
              0.018450128 = score(doc=5057,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17461908 = fieldWeight in 5057, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5057)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.7, S.924-946
  19. Hasler, L.; Ruthven, I.; Buchanan, S.: Using internet groups in situations of information poverty : topics and information needs (2014) 0.00
    0.0013178664 = product of:
      0.009225064 = sum of:
        0.009225064 = product of:
          0.018450128 = sum of:
            0.018450128 = weight(_text_:science in 1176) [ClassicSimilarity], result of:
              0.018450128 = score(doc=1176,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.17461908 = fieldWeight in 1176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1176)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.1, S.25-36
  20. White, R.W.; Jose, J.M.; Ruthven, I.: Using top-ranking sentences to facilitate effective information access (2005) 0.00
    0.001098222 = product of:
      0.0076875538 = sum of:
        0.0076875538 = product of:
          0.0153751075 = sum of:
            0.0153751075 = weight(_text_:science in 3881) [ClassicSimilarity], result of:
              0.0153751075 = score(doc=3881,freq=2.0), product of:
                0.10565929 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.04011181 = queryNorm
                0.1455159 = fieldWeight in 3881, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3881)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.10, S.1113-1125