Search (20 results, page 1 of 1)

  • author_ss:"Ruthven, I."
  1. Lalmas, M.; Ruthven, I.: A model for structured document retrieval : empirical investigations (1997) 0.01
    0.009170066 = product of:
      0.027510196 = sum of:
        0.027510196 = product of:
          0.08253059 = sum of:
            0.08253059 = weight(_text_:retrieval in 727) [ClassicSimilarity], result of:
              0.08253059 = score(doc=727,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.5347345 = fieldWeight in 727, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=727)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
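    The score breakdowns attached to each hit are Lucene "explain" trees for ClassicSimilarity, i.e. plain TF-IDF with coordination and normalization factors. As a check on the arithmetic, here is a minimal Python sketch that reproduces the score of hit 1 from the constants in the tree above; the formulas are Lucene's documented ClassicSimilarity definitions, and queryNorm/fieldNorm are simply copied from the tree rather than recomputed.

    ```python
    import math

    # Lucene ClassicSimilarity (TF-IDF) components:
    #   tf(freq)    = sqrt(freq)
    #   idf(df, N)  = 1 + ln(N / (df + 1))
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm
    #   score       = queryWeight * fieldWeight * coord factors

    freq, doc_freq, max_docs = 8.0, 5836, 44218  # values from the explain tree
    query_norm = 0.051022716                     # queryNorm, copied from the tree
    field_norm = 0.0625                          # ~1/sqrt(field length), quantized by Lucene
    coord = 1.0 / 3.0                            # coord(1/3): 1 of 3 query clauses matched

    tf = math.sqrt(freq)                             # 2.828427
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.024915
    query_weight = idf * query_norm                  # 0.15433937
    field_weight = tf * idf * field_norm             # 0.5347345
    score = query_weight * field_weight * coord * coord  # two coord(1/3) levels
    print(f"{score:.9f}")                            # ~0.009170066, displayed rounded as 0.01
    ```

    The remaining hits follow the same template; only freq, fieldNorm and, for the "online" and "22" clauses, docFreq (and hence idf) change.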
    
    Abstract
    Documents often display a structure, e.g. several sections, each with several subsections and so on. Taking into account the structure of a document allows the retrieval process to focus on those parts of the document that are most relevant to an information need. In previous work, we developed a model for the representation and the retrieval of structured documents. This paper reports the first experimental study of the effectiveness and applicability of the model.
    Source
    Hypertext - Information Retrieval - Multimedia '97: Theorien, Modelle und Implementierungen integrierter elektronischer Informationssysteme. Proceedings HIM '97. Ed.: N. Fuhr et al.
  2. Crestani, F.; Ruthven, I.; Sanderson, M.; Rijsbergen, C.J. van: The troubles with using a logical model of IR on a large collection of documents : experimenting retrieval by logical imaging on TREC (1996) 0.01
    0.00810527 = product of:
      0.024315808 = sum of:
        0.024315808 = product of:
          0.07294742 = sum of:
            0.07294742 = weight(_text_:retrieval in 7522) [ClassicSimilarity], result of:
              0.07294742 = score(doc=7522,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.47264296 = fieldWeight in 7522, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7522)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
  3. Ruthven, I.; Buchanan, S.; Jardine, C.: Isolated, overwhelmed, and worried : young first-time mothers asking for information and support online (2018) 0.01
    0.0069230343 = product of:
      0.020769102 = sum of:
        0.020769102 = product of:
          0.062307306 = sum of:
            0.062307306 = weight(_text_:online in 4455) [ClassicSimilarity], result of:
              0.062307306 = score(doc=4455,freq=8.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.40237486 = fieldWeight in 4455, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4455)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This study investigates the emotional content of 174 posts from 162 posters to online forums made by young (age 14-21) first-time mothers to understand what emotions are expressed in these posts and how these emotions interact with the types of posts and the indicators of Information Poverty within the posts. Using textual analyses, we provide a classification of emotions within posts across three main themes of interaction emotions, response emotions, and preoccupation emotions and show that many requests for information by young first-time mothers are motivated by negative emotions. This has implications for how moderators of online news groups respond to online requests for help and for understanding how to support vulnerable young parents.
  4. Sanderson, M.; Ruthven, I.: Report on the Glasgow IR group (glair4) submission (1997) 0.01
    0.006877549 = product of:
      0.020632647 = sum of:
        0.020632647 = product of:
          0.06189794 = sum of:
            0.06189794 = weight(_text_:retrieval in 3088) [ClassicSimilarity], result of:
              0.06189794 = score(doc=3088,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.40105087 = fieldWeight in 3088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3088)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    The Fifth Text Retrieval Conference (TREC-5). Ed.: E.M. Voorhees and D.K. Harman
  5. Hasler, L.; Ruthven, I.; Buchanan, S.: Using internet groups in situations of information poverty : topics and information needs (2014) 0.01
    0.005995524 = product of:
      0.017986571 = sum of:
        0.017986571 = product of:
          0.053959712 = sum of:
            0.053959712 = weight(_text_:online in 1176) [ClassicSimilarity], result of:
              0.053959712 = score(doc=1176,freq=6.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.34846687 = fieldWeight in 1176, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1176)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This study explores the use of online newsgroups and discussion groups by people in situations of information poverty. Through a qualitative content analysis of 200 posts across Internet groups, we identify topics and information needs expressed by people who feel they have no other sources of support available to them. We uncover various health, well-being, social, and identity issues that are not only crucial to the lives of the people posting but which they are unwilling to risk revealing elsewhere, offering evidence that these online environments provide an outlet for the expression of critical and hidden information needs. To enable this analysis, we first describe our method for reliably identifying situations of information poverty in messages posted to these groups and outline our coding approach. Our work contributes to the study of both information seeking within the context of information poverty and the use of Internet groups as sources of information and support, bridging the two by exploring the manifestation of information poverty in this particular online setting.
  6. Lalmas, M.; Ruthven, I.: Representing and retrieving structured documents using the Dempster-Shafer theory of evidence : modelling and evaluation (1998) 0.01
    0.005673688 = product of:
      0.017021064 = sum of:
        0.017021064 = product of:
          0.05106319 = sum of:
            0.05106319 = weight(_text_:retrieval in 1076) [ClassicSimilarity], result of:
              0.05106319 = score(doc=1076,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.33085006 = fieldWeight in 1076, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1076)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports on a theoretical model of structured document indexing and retrieval based on the Dempster-Shafer theory of evidence. Includes a description of the model of structured document retrieval, the representation of structured documents, the representation of individual components, how components are combined, details of the combination process, and how relevance is captured within the model. Also presents a detailed account of an implementation of the model, and an evaluation scheme designed to test the effectiveness of the model.
  7. Borlund, P.; Ruthven, I.: Introduction to the special issue on evaluating interactive information retrieval systems (2008) 0.01
    0.0051262225 = product of:
      0.015378667 = sum of:
        0.015378667 = product of:
          0.046136 = sum of:
            0.046136 = weight(_text_:retrieval in 2019) [ClassicSimilarity], result of:
              0.046136 = score(doc=2019,freq=10.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.29892567 = fieldWeight in 2019, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2019)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focus being on how we evaluate IR algorithms. As a research field we have benefited greatly from initiatives such as Cranfield, TREC, CLEF and INEX that have added to our knowledge of how to create test collections, the reliability of system-based evaluation criteria and our understanding of how to interpret the results of an algorithmic evaluation. In contrast, evaluations whose main focus is the user experience of searching have not yet reached the same level of maturity. Such evaluations are complex to create and assess due to the increased number of variables to incorporate within the study, the lack of standard tools available (for example, test collections) and the difficulty of selecting appropriate evaluation criteria for study. In spite of the complicated nature of user-centred evaluations, this form of evaluation is necessary to understand the effectiveness of individual IR systems and user search interactions. The growing incorporation of users into the evaluation process reflects the changing nature of IR within society; for example, more and more people have access to IR systems through Internet search engines but have little training or guidance in how to use these systems effectively. Similarly, new types of search system and new interactive IR facilities are becoming available to wide groups of end-users. In this special topic issue we present papers that tackle the methodological issues of evaluating interactive search systems. Methodologies can be presented at different levels; the papers by Blandford et al. and Petrelli present whole methodological approaches for evaluating interactive systems, whereas those by Göker and Myrhaug and by López Ostenero et al. consider what makes an appropriate methodological approach for specific retrieval situations. Any methodology must consider the nature of the methodological components, the instruments and processes by which we evaluate our systems. A number of papers have examined these issues in detail: Käki and Aula focus on specific methodological issues for the evaluation of Web search interfaces, Lopatovska and Mokros present alternate measures of retrieval success, Tenopir et al. examine the affective and cognitive verbalisations that occur within user studies, and Kelly et al. analyse questionnaires, one of the basic tools for evaluations. The range of topics in this special issue as a whole nicely illustrates the variety and complexity with which user-centred evaluation of IR systems is undertaken.
    Footnote
    Introduction to a thematic section: Evaluation of Interactive Information Retrieval Systems
  8. Ruthven, I.; Buchanan, S.; Jardine, C.: Relationships, environment, health and development : the information needs expressed online by young first-time mothers (2018) 0.00
    0.004895325 = product of:
      0.0146859735 = sum of:
        0.0146859735 = product of:
          0.04405792 = sum of:
            0.04405792 = weight(_text_:online in 4369) [ClassicSimilarity], result of:
              0.04405792 = score(doc=4369,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.284522 = fieldWeight in 4369, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4369)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This study investigates the information needs of young first-time mothers through a qualitative content analysis of 266 selected posts to a major online discussion group. Our analysis reveals three main categories of need: needs around how to create a positive environment for a child, needs around a mother's relationships and well-being, and needs around child development and health. We demonstrate the similarities of this scheme to needs uncovered in other studies and how our classification of needs is more comprehensive than those in previous studies. A critical distinction in our results is between two types of need presentation, distinguishing between situational and informational needs. Situational needs are narrative descriptions of problematic situations, whereas informational needs are need statements with a clear request. Distinguishing between these two types of needs sheds new light on how information needs develop. We conclude with a discussion of the implications of our results for young mothers and information providers.
  9. White, R.W.; Jose, J.M.; Ruthven, I.: An implicit feedback approach for interactive information retrieval (2006) 0.00
    0.0048631616 = product of:
      0.014589485 = sum of:
        0.014589485 = product of:
          0.043768454 = sum of:
            0.043768454 = weight(_text_:retrieval in 964) [ClassicSimilarity], result of:
              0.043768454 = score(doc=964,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2835858 = fieldWeight in 964, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=964)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Searchers can face problems finding the information they seek. One reason for this is that they may have difficulty devising queries to express their information needs. In this article, we describe an approach that uses unobtrusive monitoring of interaction to proactively support searchers. The approach chooses terms to better represent information needs by monitoring searcher interaction with different representations of top-ranked documents. Information needs are dynamic and can change as a searcher views information. The approach we propose gathers evidence on potential changes in these needs and uses this evidence to choose new retrieval strategies. We present an evaluation of how well our technique estimates information needs, how well it estimates changes in these needs and the appropriateness of the interface support it offers. The results are presented and the avenues for future research identified.
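    The abstract gives only a high-level description of the implicit feedback mechanism. Purely as an illustration of the general idea (not the paper's actual evidence model, which draws on several kinds of document representation), here is a hedged sketch of choosing expansion terms from the representations a searcher has chosen to view; all names in it are hypothetical.

    ```python
    import re
    from collections import Counter

    def expansion_terms(viewed_texts, original_query, k=5):
        """Hedged sketch of implicit feedback: weight terms by how often
        they occur in the document representations the searcher viewed,
        then propose the top-k new terms for query expansion. The paper's
        actual evidence-gathering is richer; this shows only the idea."""
        query_terms = set(original_query.lower().split())
        counts = Counter()
        for text in viewed_texts:
            counts.update(w for w in re.findall(r"\w+", text.lower())
                          if w not in query_terms and len(w) > 2)
        return [term for term, _ in counts.most_common(k)]
    ```

    For instance, expansion_terms(snippets_viewed_so_far, "structured retrieval") would surface the terms most strongly suggested by the searcher's browsing, without requiring any explicit relevance judgement.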
  10. Baillie, M.; Azzopardi, L.; Ruthven, I.: Evaluating epistemic uncertainty under incomplete assessments (2008) 0.00
    0.0048631616 = product of:
      0.014589485 = sum of:
        0.014589485 = product of:
          0.043768454 = sum of:
            0.043768454 = weight(_text_:retrieval in 2065) [ClassicSimilarity], result of:
              0.043768454 = score(doc=2065,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.2835858 = fieldWeight in 2065, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2065)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    This study proposes an extended methodology for laboratory-based Information Retrieval evaluation under incomplete relevance assessments. This new methodology aims to identify potential uncertainty during system comparison that may result from incompleteness. The adoption of this methodology is advantageous, because the detection of epistemic uncertainty - the amount of knowledge (or ignorance) we have about the estimate of a system's performance - during the evaluation process can guide and direct researchers when evaluating new systems over existing and future test collections. Across a series of experiments we demonstrate how this methodology can lead towards a finer-grained analysis of systems. In particular, we show through experimentation how the current practice in Information Retrieval evaluation of using a measurement depth larger than the pooling depth increases uncertainty during system comparison.
  11. White, R.W.; Jose, J.M.; Ruthven, I.: A task-oriented study on the influencing effects of query-biased summarisation in web searching (2003) 0.00
    0.004052635 = product of:
      0.012157904 = sum of:
        0.012157904 = product of:
          0.03647371 = sum of:
            0.03647371 = weight(_text_:retrieval in 1081) [ClassicSimilarity], result of:
              0.03647371 = score(doc=1081,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23632148 = fieldWeight in 1081, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1081)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The aim of the work described in this paper is to evaluate the influencing effects of query-biased summaries in web searching. For this purpose, a summarisation system has been developed, and a summary tailored to the user's query is generated automatically for each document retrieved. The system aims to provide a better means of assessing document relevance than the titles or abstracts typical of many web search result lists. Through visiting each result page at retrieval time, the system provides the user with an idea of the current page content and thus deals with the dynamic nature of the web. To examine the effectiveness of this approach, a task-oriented, comparative evaluation between four different web retrieval systems was performed: two that use query-biased summarisation, and two that use the standard ranked titles/abstracts approach. The results from the evaluation indicate that query-biased summarisation techniques appear to be more useful and effective in helping users gauge document relevance than the traditional ranked titles/abstracts approach. The same methodology was used to compare the effectiveness of two of the web's major search engines: AltaVista and Google.
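    The abstract does not spell out the sentence-selection function, so the following is only a naive sketch of what query-biased summarisation means in general; the sentence splitting and the overlap score are assumptions for illustration, not the system described above.

    ```python
    import re

    def query_biased_summary(document: str, query: str, n_sentences: int = 3) -> str:
        """Illustrative sketch: rank a document's sentences by their overlap
        with the query terms and return the top few as a query-biased
        summary. Simplistic by design; the paper's system is more elaborate."""
        query_terms = set(re.findall(r"\w+", query.lower()))
        sentences = re.split(r"(?<=[.!?])\s+", document)  # naive sentence split
        def overlap(sentence: str) -> int:
            return len(set(re.findall(r"\w+", sentence.lower())) & query_terms)
        ranked = sorted(sentences, key=overlap, reverse=True)
        return " ".join(ranked[:n_sentences])
    ```

    Called on the text of a retrieved page, query_biased_summary(page_text, user_query) returns the sentences sharing the most terms with the query, which is the sense in which such a summary is "tailored to the user's query".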
  12. Ruthven, I.: Integrating approaches to relevance (2005) 0.00
    0.003970755 = product of:
      0.011912264 = sum of:
        0.011912264 = product of:
          0.03573679 = sum of:
            0.03573679 = weight(_text_:retrieval in 638) [ClassicSimilarity], result of:
              0.03573679 = score(doc=638,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23154683 = fieldWeight in 638, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=638)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Relevance is the distinguishing feature of IR research. It is the intricacy of relevance, and its basis in human decision-making, which defines and shapes our research field. Relevance as a concept cuts across the spectrum of information seeking and IR research, from investigations into information seeking behaviours to theoretical models of IR. Given their mutual dependence on relevance, we might predict a strong relationship between information seeking and retrieval in how they regard and discuss the role of relevance within our research programmes. Too often, however, information seeking and IR have continued as independent research traditions: IR research ignoring the extensive, user-based frameworks developed by information seeking, and information seeking underestimating the influence of IR systems and interfaces within the information seeking process. When these two disciplines come together we often find the strongest research: research that is motivated by an understanding of what cognitive processes require support during information seeking, and an understanding of how this support might be provided by an IR system. The aim of this chapter is to investigate this common ground of research, in particular to examine the central notion of relevance that underpins both information seeking and IR research. It seeks to investigate how our understanding of relevance as a process of human decision making can, and might, influence our design of interactive IR systems. It does not cover every area of IR research, or each area in the same depth; rather we try to single out the areas where the nature of relevance, and its implications, is driving the research agenda. We start by providing a brief introduction to how relevance has been treated so far in the literature and then consider the key areas where issues of relevance are of current concern. Specifically, the chapter discusses the difficulties of making and interpreting relevance assessments, the role and meaning of differentiated relevance assessments, the specific role of time within information seeking, and the large, complex issue of relevance within evaluations of IR systems. In each area we try to establish where the two fields of IR and information seeking are establishing fruitful collaborations, where there is scope for prospective collaboration, and the possible difficulties in establishing mutual aims.
    Series
    The information retrieval series, vol. 19
    Source
    New directions in cognitive information retrieval. Eds.: A. Spink, C. Cole
  13. Belabbes, M.A.; Ruthven, I.; Moshfeghi, Y.; Rasmussen Pennington, D.: Information overload : a concept analysis (2023) 0.00
    0.0038404856 = product of:
      0.011521457 = sum of:
        0.011521457 = product of:
          0.03456437 = sum of:
            0.03456437 = weight(_text_:22 in 950) [ClassicSimilarity], result of:
              0.03456437 = score(doc=950,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.19345059 = fieldWeight in 950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=950)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Date
    22. 4.2023 19:27:56
  14. Tombros, A.; Ruthven, I.; Jose, J.M.: How users assess Web pages for information seeking (2005) 0.00
    0.0034615172 = product of:
      0.010384551 = sum of:
        0.010384551 = product of:
          0.031153653 = sum of:
            0.031153653 = weight(_text_:online in 5255) [ClassicSimilarity], result of:
              0.031153653 = score(doc=5255,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.20118743 = fieldWeight in 5255, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5255)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, we investigate the criteria used by online searchers when assessing the relevance of Web pages for information-seeking tasks. Twenty-four participants were given three tasks each, and they indicated the features of Web pages that they used when deciding about the usefulness of the pages in relation to the tasks. These tasks were presented within the context of a simulated work-task situation. We investigated the relative utility of features identified by participants (Web page content, structure, and quality) and how the importance of these features is affected by the type of information-seeking task performed and the stage of the search. The results of this study provide a set of criteria used by searchers to decide about the utility of Web pages for different types of tasks. Such criteria can have implications for the design of systems that use or recommend Web pages.
  15. Ruthven, I.; Lalmas, M.; Rijsbergen, K. van: Combining and selecting characteristics of information use (2002) 0.00
    0.0032421078 = product of:
      0.009726323 = sum of:
        0.009726323 = product of:
          0.029178968 = sum of:
            0.029178968 = weight(_text_:retrieval in 5208) [ClassicSimilarity], result of:
              0.029178968 = score(doc=5208,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.18905719 = fieldWeight in 5208, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5208)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Ruthven, Lalmas, and van Rijsbergen use traditional term importance measures such as inverse document frequency; noise, based upon in-document frequency; and term frequency, supplemented by a theme value, which is calculated from differences between the expected positions of words in a text and their actual positions, on the assumption that an even distribution indicates term association with a main topic; and context, which is based on a query term's distance from the nearest other query term relative to the average expected distribution of all query terms in the document. They then define document characteristics such as specificity, the sum of all idf values in a document over the total terms in the document; document complexity, measured by the document's average idf value; and the information-to-noise ratio (info-noise), tokens after stopping and stemming over tokens before these processes, measuring the ratio of useful to non-useful information in a document. Retrieval tests are then carried out using each characteristic, combinations of the characteristics, and relevance feedback to determine the correct combination of characteristics. A file ranks independently of query terms by both specificity and info-noise, but if the presence of a query term is required, unique rankings are generated. Tested on five standard collections, the traditional characteristics outperformed the new characteristics, which did, however, outperform random retrieval. All possible combinations of characteristics were also tested, both with and without a set of scaling weights applied. All characteristics can benefit from combination with another characteristic or set of characteristics, and performance as a single characteristic is a good indicator of performance in combination. Larger combinations tended to be more effective than smaller ones, and weighting increased precision measures of middle-ranking combinations but decreased the ranking of poorer combinations. The best combinations vary for each collection, and in some collections with the addition of weighting. Finally, with all documents ranked by the all-characteristics combination, they take the top 30 documents and calculate the characteristic scores for each term in both the relevant and the non-relevant sets. Then, taking for each query term the characteristics whose average was higher for relevant than for non-relevant documents, the documents are re-ranked. This relevance feedback method of selecting characteristics can select a good set of characteristics for query terms.
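    Two of the document characteristics described in this annotation translate directly into small formulas. A minimal sketch, assuming a plain log(N/df) idf since the exact variant is not specified here:

    ```python
    import math

    def specificity(doc_terms, doc_freq, num_docs):
        """'Specificity' as described above: the sum of the idf values of a
        document's terms over the total number of terms in the document.
        Assumes idf = log(N/df); the paper's exact variant may differ."""
        total_idf = sum(math.log(num_docs / doc_freq[t]) for t in doc_terms)
        return total_idf / len(doc_terms)

    def info_noise(tokens_before, tokens_after):
        """Information-to-noise ratio: tokens remaining after stopping and
        stemming over tokens before those processes."""
        return tokens_after / tokens_before
    ```

    For example, a document of 1,000 tokens reduced to 620 after stopping and stemming has info_noise(1000, 620) = 0.62, and both measures rank documents without reference to any query term, which is why query-independent rankings arise unless a query term's presence is required.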
  16. Ruthven, I.: The language of information need : differentiating conscious and formalized information needs (2019) 0.00
    0.0028845975 = product of:
      0.008653793 = sum of:
        0.008653793 = product of:
          0.025961377 = sum of:
            0.025961377 = weight(_text_:online in 5035) [ClassicSimilarity], result of:
              0.025961377 = score(doc=5035,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 5035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5035)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Information need is a fundamental concept within Information Science. Robert Taylor's seminal contribution in 1968 was to propose a division of information needs into four levels: the visceral, conscious, formalized and compromised levels of information need. Taylor's contribution has provided much inspiration to Information Science research but this has largely remained at the discursive and conceptual level. In this paper, we present a novel empirical investigation of Taylor's information need classification. We analyse the linguistic differences between conscious and formalized needs using several hundred postings to four major Internet discussion groups. We show that descriptions of conscious needs are more emotional in tone, involve more sensory perception and contain different temporal dimensions than descriptions of formalized needs. We show that it is possible to differentiate levels of information need based on linguistic patterns and that the language used to express information needs can reflect an individual's understanding of their information problem. This has implications for the theory of information needs and practical implications for supporting moderators of online news groups in responding to information needs and for developing automated support for classifying information needs.
  17. White, R.W.; Jose, J.M.; Ruthven, I.: Using top-ranking sentences to facilitate effective information access (2005) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 3881) [ClassicSimilarity], result of:
              0.025790809 = score(doc=3881,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 3881, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3881)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Web searchers typically fail to view search results beyond the first page or to fully examine those results presented to them. In this article we describe an approach that encourages a deeper examination of the contents of the document set retrieved in response to a searcher's query. The approach shifts the focus of perusal and interaction away from potentially uninformative document surrogates (such as titles, sentence fragments, and URLs) to actual document content, and uses this content to drive the information seeking process. Current search interfaces assume searchers examine results document-by-document. In contrast, our approach extracts, ranks, and presents the contents of the top-ranked document set. We use query-relevant top-ranking sentences extracted from the top documents at retrieval time as fine-grained representations of top-ranked document content and, when combined in a ranked list, as an overview of these documents. The interaction of the searcher provides implicit evidence that is used to reorder the sentences where appropriate. We evaluate our approach in three separate user studies, each applying these sentences in a different way. The findings of these studies show that top-ranking sentences can facilitate effective information access.
  18. White, R.W.; Ruthven, I.: A study of interface support mechanisms for interactive information retrieval (2006) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 5064) [ClassicSimilarity], result of:
              0.025790809 = score(doc=5064,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 5064, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5064)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
  19. Ruthven, I.; Baillie, M.; Elsweiler, D.: The relative effects of knowledge, interest and confidence in assessing relevance (2007) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 835) [ClassicSimilarity], result of:
              0.025790809 = score(doc=835,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=835)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to examine how different aspects of an assessor's context, in particular their knowledge of a search topic, their interest in the search topic and their confidence in assessing relevance for a topic, affect the relevance judgements made and the assessor's ability to predict which documents they will assess as being relevant. Design/methodology/approach - The study was conducted as part of the Text REtrieval Conference (TREC) HARD track. Using a specially constructed questionnaire, information was sought on TREC assessors' personal context and, using the TREC assessments gathered, the responses were correlated to the questionnaire questions and the final relevance decisions. Findings - This study found that each of the three factors (interest, knowledge and confidence) had an effect on how many documents were assessed as relevant and on the balance between how many documents were marked as marginally or highly relevant. These factors are also shown to affect an assessor's ability to predict what information they will finally mark as being relevant. Research limitations/implications - The major limitation is that the research was conducted within the TREC initiative. This means that we can report on results but cannot report on discussions with the assessors. The research implications are numerous but mainly concern the effect of personal context on the outcomes of a user study. Practical implications - One major consequence is that we should take more account of how we construct search tasks for IIR evaluation, to create tasks that are interesting and relevant to experimental subjects. Originality/value - The paper examines different search variables within one study to compare the relative effects of these variables on search outcomes.
  20. Balatsoukas, P.; Ruthven, I.: An eye-tracking approach to the analysis of relevance judgments on the Web : the case of Google search engine (2012) 0.00
    0.0028656456 = product of:
      0.008596936 = sum of:
        0.008596936 = product of:
          0.025790809 = sum of:
            0.025790809 = weight(_text_:retrieval in 379) [ClassicSimilarity], result of:
              0.025790809 = score(doc=379,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16710453 = fieldWeight in 379, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=379)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Eye movement data can provide an in-depth view of human reasoning and the decision-making process, and modern information retrieval (IR) research can benefit from the analysis of this type of data. The aim of this research was to examine the relationship between relevance criteria use and visual behavior in the context of predictive relevance judgments. To address this objective, a multimethod research design was employed that involved observation of participants' eye movements, talk-aloud protocols, and postsearch interviews. Specifically, the results reported in this article came from the analysis of 281 predictive relevance judgments made by 24 participants using the Google search engine. We present a novel stepwise methodological framework for the analysis of relevance judgments and eye movements on the Web and show new patterns of relevance criteria use during predictive relevance judgment. For example, the findings showed an effect of ranking order and surrogate components (Title, Summary, and URL) on the use of relevance criteria. Also, differences were observed in the cognitive effort spent between very relevant and not relevant judgments. We conclude with the implications of this study for IR research.