Search (26 results, page 2 of 2)

  • author_ss:"Ruthven, I."
  • type_ss:"a"
  1. Borlund, P.; Ruthven, I.: Introduction to the special issue on evaluating interactive information retrieval systems (2008) 0.00
    0.0014573209 = product of:
      0.008743925 = sum of:
        0.008743925 = weight(_text_:in in 2019) [ClassicSimilarity], result of:
          0.008743925 = score(doc=2019,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14725187 = fieldWeight in 2019, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2019)
      0.16666667 = coord(1/6)
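    The breakdown above is Lucene's "explain" output for its classic TF-IDF similarity. As a rough sketch of how the numbers combine (assuming Lucene's documented ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the constants are copied from the tree above):

    ```python
    import math

    # ClassicSimilarity pieces for result 1 (doc 2019), term "in".
    freq, doc_freq, max_docs = 12.0, 30841, 44218

    tf = math.sqrt(freq)                             # 3.4641016
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.3602545
    query_norm = 0.043654136                         # from the explain tree
    field_norm = 0.03125                             # stored length norm
    coord = 1.0 / 6.0                                # 1 of 6 query terms matched

    query_weight = idf * query_norm                  # 0.059380736
    field_weight = tf * idf * field_norm             # 0.14725187
    print(query_weight * field_weight * coord)       # ~0.0014573209
    ```

    The remaining entries follow the same pattern; only freq, fieldNorm, and the document id differ between them, which is why every displayed score rounds to 0.00.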
    
    Abstract
    Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study. In spite of the complicated nature of user-centred evaluations, this form of evaluation is necessary to understand the effectiveness of individual IR systems and user search interactions. The growing incorporation of users into the evaluation process reflects the changing nature of IR within society; for example, more and more people have access to IR systems through Internet search engines but have little training or guidance in how to use these systems effectively. Similarly, new types of search system and new interactive IR facilities are becoming available to wide groups of end-users. In this special topic issue we present papers that tackle the methodological issues of evaluating interactive search systems. Methodologies can be presented at different levels; the papers by Blandford et al. and Petrelli present whole methodological approaches for evaluating interactive systems whereas those by Göker and Myrhaug and López Ostenero et al. consider what makes an appropriate evaluation methodological approach for specific retrieval situations. Any methodology must consider the nature of the methodological components, the instruments and processes by which we evaluate our systems. A number of papers have examined these issues in detail: Käki and Aula focus on specific methodological issues for the evaluation of Web search interfaces, Lopatovska and Mokros present alternate measures of retrieval success, Tenopir et al. examine the affective and cognitive verbalisations that occur within user studies and Kelly et al. analyse questionnaires, one of the basic tools for evaluations. The range of topics in this special issue as a whole nicely illustrates the variety and complexity with which user-centred evaluation of IR systems is undertaken.
  2. Tombros, A.; Ruthven, I.; Jose, J.M.: How users assess Web pages for information seeking (2005) 0.00
    0.0012620769 = product of:
      0.0075724614 = sum of:
        0.0075724614 = weight(_text_:in in 5255) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=5255,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 5255, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5255)
      0.16666667 = coord(1/6)
    
    Abstract
    In this article, we investigate the criteria used by online searchers when assessing the relevance of Web pages for information-seeking tasks. Twenty-four participants were given three tasks each, and they indicated the features of Web pages that they used when deciding about the usefulness of the pages in relation to the tasks. These tasks were presented within the context of a simulated work-task situation. We investigated the relative utility of the features identified by participants (Web page content, structure, and quality) and how the importance of these features is affected by the type of information-seeking task performed and the stage of the search. The results of this study provide a set of criteria used by searchers to decide about the utility of Web pages for different types of tasks. Such criteria can have implications for the design of systems that use or recommend Web pages.
  3. Baillie, M.; Azzopardi, L.; Ruthven, I.: Evaluating epistemic uncertainty under incomplete assessments (2008) 0.00
    0.0012620769 = product of:
      0.0075724614 = sum of:
        0.0075724614 = weight(_text_:in in 2065) [ClassicSimilarity], result of:
          0.0075724614 = score(doc=2065,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.12752387 = fieldWeight in 2065, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2065)
      0.16666667 = coord(1/6)
    
    Abstract
    This study proposes an extended methodology for laboratory-based Information Retrieval evaluation under incomplete relevance assessments. The new methodology aims to identify potential uncertainty during system comparison that may result from incompleteness. Its adoption is advantageous because the detection of epistemic uncertainty - the amount of knowledge (or ignorance) we have about the estimate of a system's performance - during the evaluation process can guide researchers when evaluating new systems over existing and future test collections. Across a series of experiments we demonstrate how this methodology can lead towards a finer-grained analysis of systems. In particular, we show through experimentation how the current practice in Information Retrieval evaluation of using a measurement depth larger than the pooling depth increases uncertainty during system comparison.
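    The pooling-depth observation can be pictured with a toy bound computation. A minimal sketch, assuming unjudged documents may turn out either relevant or non-relevant (this illustrates the general incompleteness problem, not the paper's specific methodology):

    ```python
    def precision_bounds(ranking, judged_relevant, judged_nonrelevant, k):
        """Lower/upper bounds on P@k under incomplete relevance assessments."""
        top_k = ranking[:k]
        rel = sum(1 for d in top_k if d in judged_relevant)
        unjudged = sum(1 for d in top_k
                       if d not in judged_relevant and d not in judged_nonrelevant)
        # Pessimistic: unjudged are non-relevant; optimistic: all are relevant.
        return rel / k, (rel + unjudged) / k

    # Judgments pooled to depth 3, but the system is measured at depth 5:
    low, high = precision_bounds(["d1", "d2", "d3", "d4", "d5"],
                                 {"d1", "d3"}, {"d2"}, k=5)
    print(low, high)  # 0.4 0.8 -> the 0.4-wide band is the epistemic uncertainty
    ```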
  4. Tinto, F.; Ruthven, I.: Sharing "happy" information (2016) 0.00
    0.0010411602 = product of:
      0.006246961 = sum of:
        0.006246961 = weight(_text_:in in 3104) [ClassicSimilarity], result of:
          0.006246961 = score(doc=3104,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10520181 = fieldWeight in 3104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3104)
      0.16666667 = coord(1/6)
    
    Abstract
    This study focuses on the sharing of "happy" information: information that creates a sense of happiness within the individual sharing the information. We explore the range of factors motivating and impacting individuals' happy information-sharing behavior within a casual leisure context through 30 semistructured interviews. The findings reveal that the factors influencing individuals' happy information-sharing behavior are numerous, and impact each other. Most individuals considered sharing happy information important to their friendships and relationships. In various contexts the act of sharing happy information was shown to enhance the sharer's happiness.
  5. Ruthven, I.; Buchanan, S.; Jardine, C.: Isolated, overwhelmed, and worried : young first-time mothers asking for information and support online (2018) 0.00
    8.9242304E-4 = product of:
      0.005354538 = sum of:
        0.005354538 = weight(_text_:in in 4455) [ClassicSimilarity], result of:
          0.005354538 = score(doc=4455,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.09017298 = fieldWeight in 4455, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4455)
      0.16666667 = coord(1/6)
    
    Abstract
    This study investigates the emotional content of 174 posts from 162 posters to online forums made by young (age 14-21) first-time mothers to understand what emotions are expressed in these posts and how these emotions interact with the types of posts and the indicators of Information Poverty within the posts. Using textual analyses, we provide a classification of emotions within posts across three main themes of interaction emotions, response emotions, and preoccupation emotions, and show that many requests for information by young first-time mothers are motivated by negative emotions. This has implications for how moderators of online newsgroups respond to online requests for help and for understanding how to support vulnerable young parents.
  6. Oduntan, O.; Ruthven, I.: People and places : bridging the information gaps in refugee integration (2021) 0.00
    8.9242304E-4 = product of:
      0.005354538 = sum of:
        0.005354538 = weight(_text_:in in 66) [ClassicSimilarity], result of:
          0.005354538 = score(doc=66,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.09017298 = fieldWeight in 66, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=66)
      0.16666667 = coord(1/6)