Search (49 results, page 2 of 3)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  • type_ss:"a"
  1. Allan, J.; Callan, J.P.; Croft, W.B.; Ballesteros, L.; Broglio, J.; Xu, J.; Shu, H.: INQUERY at TREC-5 (1997) 0.01
    0.007845511 = product of:
      0.031382043 = sum of:
        0.031382043 = product of:
          0.062764086 = sum of:
            0.062764086 = weight(_text_:22 in 3103) [ClassicSimilarity], result of:
              0.062764086 = score(doc=3103,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.38690117 = fieldWeight in 3103, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3103)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    27. 2.1999 20:55:22
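The score explanations in this list are Lucene ClassicSimilarity (TF-IDF) output. As a rough check, the 0.007845511 for entry 1 can be reassembled from the factors shown above; the sketch below assumes the classic conventions tf = sqrt(freq) and score = queryWeight x fieldWeight scaled by the two coord factors, with every constant copied from the explanation rather than recomputed from the index.

```python
import math

# Sketch: reassembling the score for entry 1 (doc 3103) from the explain
# output above. All constants are copied from that output; only the
# multiplication order is assumed (Lucene ClassicSimilarity).

freq = 2.0                       # termFreq of "22" in the matching field
tf = math.sqrt(freq)             # 1.4142135
idf = 3.5018296                  # idf(docFreq=3622, maxDocs=44218)
query_norm = 0.046325076
field_norm = 0.078125

query_weight = idf * query_norm              # ~0.16222252
field_weight = tf * idf * field_norm         # ~0.38690117
clause_score = query_weight * field_weight   # ~0.062764086

# coord(1/2) and coord(1/4): only one of two inner and one of four outer
# query clauses matched, so the clause score is scaled down twice.
final_score = clause_score * 0.5 * 0.25      # ~0.007845511

print(final_score)  # ~0.0078455; tiny deviations from the explain value
                    # come from Lucene's 32-bit float arithmetic
```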
  2. Ng, K.B.; Loewenstern, D.; Basu, C.; Hirsh, H.; Kantor, P.B.: Data fusion of machine-learning methods for the TREC5 routing task (and other work) (1997) 0.01
    0.007845511 = product of:
      0.031382043 = sum of:
        0.031382043 = product of:
          0.062764086 = sum of:
            0.062764086 = weight(_text_:22 in 3107) [ClassicSimilarity], result of:
              0.062764086 = score(doc=3107,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.38690117 = fieldWeight in 3107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3107)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    27. 2.1999 20:59:22
  3. Saracevic, T.: On a method for studying the structure and nature of requests in information retrieval (1983) 0.01
    0.007845511 = product of:
      0.031382043 = sum of:
        0.031382043 = product of:
          0.062764086 = sum of:
            0.062764086 = weight(_text_:22 in 2417) [ClassicSimilarity], result of:
              0.062764086 = score(doc=2417,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.38690117 = fieldWeight in 2417, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Pages
    S.22-25
  4. Khan, K.; Locatis, C.: Searching through cyberspace : the effects of link display and link density on information retrieval from hypertext on the World Wide Web (1998) 0.01
    0.007842129 = product of:
      0.031368516 = sum of:
        0.031368516 = product of:
          0.06273703 = sum of:
            0.06273703 = weight(_text_:aspects in 446) [ClassicSimilarity], result of:
              0.06273703 = score(doc=446,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.29962775 = fieldWeight in 446, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046875 = fieldNorm(doc=446)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This study investigated information retrieval from hypertext on the WWW. Significant main and interaction effects were found for both link density (number of links per display) and display format (in paragraphs or lists) on search performance. Low link densities displayed in list format produced the best overall results, in terms of search accuracy, search time, number of links explored, and search task prioritization. Lower densities affected user ability to prioritize search tasks and produced more accurate searches, while list displays positively affected all aspects of searching except task prioritization. The performance of novices and experts, in terms of their previous experience browsing hypertext on the WWW, was compared. Experts performed better, mostly because of their superior task prioritization
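Entries 4 and 5 match on the term "aspects" rather than "22", with a larger idf (4.5198684 vs 3.5018296) because "aspects" is rarer (docFreq=1308 vs 3622 out of maxDocs=44218). A quick sketch, assuming the classic formula idf = 1 + ln(maxDocs / (docFreq + 1)) is what produced these values:

```python
import math

# Sketch: checking the two recurring idf values against the classic
# formula idf = 1 + ln(maxDocs / (docFreq + 1)). The formula is assumed,
# not read from the system's configuration.

def idf(doc_freq: int, max_docs: int) -> float:
    return 1.0 + math.log(max_docs / (doc_freq + 1))

print(idf(3622, 44218))   # ~3.5018 -> "22"      (common term, lower weight)
print(idf(1308, 44218))   # ~4.5199 -> "aspects" (rarer term, higher weight)
```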
  5. Robins, D.: Shifts of focus on various aspects of user information problems during interactive information retrieval (2000) 0.01
    0.007842129 = product of:
      0.031368516 = sum of:
        0.031368516 = product of:
          0.06273703 = sum of:
            0.06273703 = weight(_text_:aspects in 4995) [ClassicSimilarity], result of:
              0.06273703 = score(doc=4995,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.29962775 = fieldWeight in 4995, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4995)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  6. Ellis, D.: The dilemma of measurement in information retrieval research (1996) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 3003) [ClassicSimilarity], result of:
              0.052280862 = score(doc=3003,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 3003, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3003)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The problem of measurement in information retrieval research is traced to its source in the first retrieval tests. The problem is seen as presenting a chronic dilemma for the field. This dilemma has taken 3 forms as the discipline has evolved: (1) the dilemma of measurement in the archetypal approach: stated relevance versus user relevance; (2) the dilemma of measurement in the probabilistic approach: realism versus formalism; and (3) the dilemma of measurement in the Information Retrieval-Expert System (IR-ES) approach: linear measures of relevance versus logarithmic measures of knowledge. It is argued that the dilemma of measurement has remained intractable, even given the different assumptions of the different approaches, for 3 connected reasons - the nature of the subject matter of the field; the nature of relevance judgement; and the nature of cognition and knowledge. Finally, it is concluded that the original vision of information retrieval research as a discipline founded on quantification proved restricting for its theoretical and methodological development and that increasing recognition of this is reflected in growing interest in qualitative methods in information retrieval research in relation to cognitive, behavioral, and affective aspects of the information retrieval interaction.
  7. Gluck, M.: Exploring the relationship between user satisfaction and relevance in information systems (1996) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 4082) [ClassicSimilarity], result of:
              0.052280862 = score(doc=4082,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 4082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4082)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Aims to better understand the relationship between relevance and user satisfaction, the 2 predominant aspects of user based performance in information systems. Unconfounds relevance and user satisfaction assessments of system performance at the retrieved item level. To minimize the idiosyncrasies of any one system, a generalized, naturalistic information system was employed in this study. Respondents completed sensemaking timeline questionnaires in which they described a recent need they had for geographic information. Retrieved documents from the generalized system consisted of the responses users obtained while resolving their information needs. Respondents directly provided process, product, cost benefit, and overall satisfaction assessments with the generalized geographic systems. Relevance judgements of retrieved items were obtained through content analysis from sensemaking questionnaires as a secondary observation technique. The content analysis provided relevance values on both 5-category and 2-category scales. Results indicate that relevance has strong relationships with process, product and overall user satisfaction measures while relevance and cost benefit satisfaction measures have no significant relationship. This analysis also indicates that neither relevance nor user satisfaction subsumes the other concept, and that understanding the proper units of analysis for these measures helps resolve the paradox of the management information system and information science literature not informing each other concerning user based information system performance measures.
  8. Hansen, P.; Karlgren, J.: Effects of foreign language and task scenario on relevance assessment (2005) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 4393) [ClassicSimilarity], result of:
              0.052280862 = score(doc=4393,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 4393, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4393)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper aims to investigate how readers assess relevance of retrieved documents in a foreign language they know well compared with their native language, and whether work-task scenario descriptions have an effect on the assessment process. Design/methodology/approach - Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first-language speakers, fluent in English, were given simulated information-seeking scenarios and presented with retrieval results in both languages. Twenty-eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two-level work-task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process. Findings - Relevance assessment takes longer in a foreign language than in the user's first language. The quality of assessments, by comparison with pre-assessed results, is inferior to those made in the users' first language. Work-task scenario descriptions had an effect on the assessment process, both by measured access time and by self-report by subjects. However, no effects on results by traditional relevance ranking were detectable. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects. Originality/value - An extended two-level work-task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.
  9. Greisdorf, H.; O'Connor, B.: Nodes of topicality modeling user notions of on topic documents (2003) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 5175) [ClassicSimilarity], result of:
              0.052280862 = score(doc=5175,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 5175, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5175)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Greisdorf and O'Connor attempt to determine the aspects of a retrieved item that provide a questioner with evidence that the item is in fact on the topic searched, independent of its relevance. To this end they collect data from 32 participants, 11 from the business community as well as 21 doctoral students at the University of North Texas, each of whom was asked to state if they considered material that approaches a topic in each of 14 specific manners as "on topic" or "off topic". Chi-square indicates that the observed values are significantly different from expected values and the chi-square residuals for on topic judgements exceed plus or minus two in eight cases and plus two in five cases. The positive values, which indicate a percentage of response greater than that from chance, suggest that documents considered topical are only related to the problem at hand, contain terms that were in the query, and describe, explain or expand the topic of the query. The chi-square residuals for off topic judgements exceed plus or minus two in ten cases and plus two in four cases. The positive values suggest that documents considered not topical exhibit a contrasting, contrary, or confounding point of view, or merely spark curiosity. Such material might well be relevant, but is not judged topical. This suggests that topical appropriateness may best be achieved using the left compositional monotonicity approach of Bruza et al.
  10. Ruthven, I.; Baillie, M.; Elsweiler, D.: The relative effects of knowledge, interest and confidence in assessing relevance (2007) 0.01
    0.0065351077 = product of:
      0.026140431 = sum of:
        0.026140431 = product of:
          0.052280862 = sum of:
            0.052280862 = weight(_text_:aspects in 835) [ClassicSimilarity], result of:
              0.052280862 = score(doc=835,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2496898 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=835)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this paper is to examine how different aspects of an assessor's context, in particular their knowledge of a search topic, their interest in the search topic and their confidence in assessing relevance for a topic, affect the relevance judgements made and the assessor's ability to predict which documents they will assess as being relevant. Design/methodology/approach - The study was conducted as part of the Text REtrieval Conference (TREC) HARD track. Using a specially constructed questionnaire, information was sought on TREC assessors' personal context and, using the TREC assessments gathered, the responses were correlated to the questionnaire questions and the final relevance decisions. Findings - This study found that each of the three factors (interest, knowledge and confidence) had an effect on how many documents were assessed as relevant and the balance between how many documents were marked as marginally or highly relevant. These factors are also shown to affect an assessor's ability to predict what information they will finally mark as being relevant. Research limitations/implications - The major limitation is that the research is conducted within the TREC initiative. This means that we can report on results but cannot report on discussions with the assessors. The research implications are numerous but mainly concern the effect of personal context on the outcomes of a user study. Practical implications - One major consequence is that we should take more account of how we construct search tasks for IIR evaluation to create tasks that are interesting and relevant to experimental subjects. Originality/value - Examining different search variables within one study to compare the relative effects of these variables on the search outcomes.
  11. Rijsbergen, C.J. van: A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.01
    0.006276408 = product of:
      0.025105633 = sum of:
        0.025105633 = product of:
          0.050211266 = sum of:
            0.050211266 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.050211266 = score(doc=5002,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    19. 3.1996 11:22:12
  12. Sanderson, M.: The Reuters test collection (1996) 0.01
    0.006276408 = product of:
      0.025105633 = sum of:
        0.025105633 = product of:
          0.050211266 = sum of:
            0.050211266 = weight(_text_:22 in 6971) [ClassicSimilarity], result of:
              0.050211266 = score(doc=6971,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.30952093 = fieldWeight in 6971, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6971)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  13. Pemberton, J.K.; Ojala, M.; Garman, N.: Head to head : searching the Web versus traditional services (1998) 0.01
    0.006276408 = product of:
      0.025105633 = sum of:
        0.025105633 = product of:
          0.050211266 = sum of:
            0.050211266 = weight(_text_:22 in 3572) [ClassicSimilarity], result of:
              0.050211266 = score(doc=3572,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.30952093 = fieldWeight in 3572, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3572)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Online. 22(1998) no.3, S.24-26,28
  14. Blagden, J.F.: How much noise in a role-free and link-free co-ordinate indexing system? (1966) 0.01
    0.005491857 = product of:
      0.021967428 = sum of:
        0.021967428 = product of:
          0.043934856 = sum of:
            0.043934856 = weight(_text_:22 in 2718) [ClassicSimilarity], result of:
              0.043934856 = score(doc=2718,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2708308 = fieldWeight in 2718, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2718)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Journal of documentation. 22(1966), S.203-209
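Several entries here match the same term "22" with the same freq=2 and idf (for example entries 1, 11, 14 and 19); only fieldNorm differs (0.078125, 0.0625, 0.0546875, 0.046875), which is what spreads their scores from 0.007845511 down to 0.004707306. Assuming fieldNorm is Lucene's classic length norm (roughly 1/sqrt(number of terms in the field), quantized to a single byte), the descending fieldWeight values can be reproduced as follows:

```python
import math

# Sketch: same term "22", same tf and idf; only the stored fieldNorm
# differs, so fieldWeight (and hence the final score) steps down.
# fieldNorm is assumed to be Lucene's classic, byte-quantized length norm.

tf = math.sqrt(2.0)       # termFreq = 2 in each of these entries
idf = 3.5018296           # idf(docFreq=3622, maxDocs=44218)

for field_norm in (0.078125, 0.0625, 0.0546875, 0.046875):
    print(field_norm, tf * idf * field_norm)
# ~0.38690, 0.30952, 0.27083, 0.23214 -- the fieldWeight values shown in
# the corresponding explanations (up to 32-bit float rounding)
```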
  15. Smithson, S.: Information retrieval evaluation in practice : a case study approach (1994) 0.01
    0.005491857 = product of:
      0.021967428 = sum of:
        0.021967428 = product of:
          0.043934856 = sum of:
            0.043934856 = weight(_text_:22 in 7302) [ClassicSimilarity], result of:
              0.043934856 = score(doc=7302,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2708308 = fieldWeight in 7302, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7302)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The evaluation of information retrieval systems is an important yet difficult operation. This paper describes an exploratory evaluation study that takes an interpretive approach to evaluation. The longitudinal study examines evaluation through the information-seeking behaviour of 22 case studies of 'real' users. The eclectic approach to data collection produced behavioral data that is compared with relevance judgements and satisfaction ratings. The study demonstrates considerable variations among the cases, among different evaluation measures within the same case, and among the same measures at different stages within a single case. It is argued that those involved in evaluation should be aware of the difficulties, and base any evaluation on a good understanding of the cases in question
  16. Losee, R.M.: Determining information retrieval and filtering performance without experimentation (1995) 0.01
    0.005491857 = product of:
      0.021967428 = sum of:
        0.021967428 = product of:
          0.043934856 = sum of:
            0.043934856 = weight(_text_:22 in 3368) [ClassicSimilarity], result of:
              0.043934856 = score(doc=3368,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2708308 = fieldWeight in 3368, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3368)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 2.1996 13:14:10
  17. Brown, M.E.: By any other name : accounting for failure in the naming of subject categories (1995) 0.01
    0.005491857 = product of:
      0.021967428 = sum of:
        0.021967428 = product of:
          0.043934856 = sum of:
            0.043934856 = weight(_text_:22 in 5598) [ClassicSimilarity], result of:
              0.043934856 = score(doc=5598,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2708308 = fieldWeight in 5598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5598)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    2.11.1996 13:08:22
  18. Mansourian, Y.; Ford, N.: Search persistence and failure on the web : a "bounded rationality" and "satisficing" analysis (2007) 0.01
    0.005228086 = product of:
      0.020912344 = sum of:
        0.020912344 = product of:
          0.041824687 = sum of:
            0.041824687 = weight(_text_:aspects in 841) [ClassicSimilarity], result of:
              0.041824687 = score(doc=841,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19975184 = fieldWeight in 841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.03125 = fieldNorm(doc=841)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - Our current knowledge of how searchers perceive and react to the possibility of missing potentially important information whilst searching the web is limited. The study reported here seeks to investigate such perceptions and reactions, and to explore the extent to which Simon's "bounded rationality" theory is useful in illuminating these issues. Design/methodology/approach - A total of 37 academic staff, research staff and research students in three university departments were interviewed about their web searching. The open-ended, semi-structured interviews were inductively analysed. Emergence of the concept of "good enough" searching prompted a further analysis to explore the extent to which the data could be interpreted in terms of Simon's concepts of "bounded rationality" and "satisficing". Findings - The results indicate that the risk of missing potentially important information was a matter of concern to the interviewees. Their estimations of the likely extent and importance of missed information affected decisions by individuals as to when to stop searching - decisions based on very different criteria, which map well onto Simon's concepts. On the basis of the interview data, the authors propose tentative categorizations of perceptions of the risk of missing information, including "inconsequential", "tolerable", "damaging" and "disastrous", and search strategies, including "perfunctory", "minimalist", "nervous" and "extensive". It is concluded that there is at least a prima facie case for bounded rationality and satisficing being considered as potentially useful concepts in our quest better to understand aspects of human information behaviour. Research limitations/implications - Although the findings are based on a relatively small sample and an exploratory qualitative analysis, it is argued that the study raises a number of interesting questions, and has implications for both the development of theory and practice in the areas of web searching and information literacy. Originality/value - The paper focuses on an aspect of web searching which has not to date been well explored. Whilst research has done much to illuminate searchers' perceptions of what they find on the web, we know relatively little of their perceptions of, and reactions to, information that they fail to find. The study reported here provides some tentative models, based on empirical evidence, of these phenomena.
  19. Iivonen, M.: Consistency in the selection of search concepts and search terms (1995) 0.00
    0.004707306 = product of:
      0.018829225 = sum of:
        0.018829225 = product of:
          0.03765845 = sum of:
            0.03765845 = weight(_text_:22 in 1757) [ClassicSimilarity], result of:
              0.03765845 = score(doc=1757,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.23214069 = fieldWeight in 1757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1757)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Considers intersearcher and intrasearcher consistency in the selection of search terms. Based on an empirical study where 22 searchers from 4 different types of search environments analyzed altogether 12 search requests of 4 different types in 2 separate test situations between which 2 months elapsed. Statistically very significant differences in consistency were found according to the types of search environments and search requests. Consistency was also considered according to the extent of the scope of the search concept. At level I search terms were compared character by character. At level II different search terms were accepted as the same search concept with a rather simple evaluation of linguistic expressions. At level III, in addition to level II, the hierarchical approach of the search request was also controlled. At level IV different search terms were accepted as the same search concept with a broad interpretation of the search concept. Both intersearcher and intrasearcher consistency grew most immediately after a rather simple evaluation of linguistic expressions.
  20. Wood, F.; Ford, N.; Miller, D.; Sobczyk, G.; Duffin, R.: Information skills, searching behaviour and cognitive styles for student-centred learning : a computer-assisted learning approach (1996) 0.00
    0.004707306 = product of:
      0.018829225 = sum of:
        0.018829225 = product of:
          0.03765845 = sum of:
            0.03765845 = weight(_text_:22 in 4341) [ClassicSimilarity], result of:
              0.03765845 = score(doc=4341,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.23214069 = fieldWeight in 4341, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4341)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 22(1996) no.2, S.79-92