Search (13 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Retrievalstudien"
  • year_i:[2000 TO 2010}
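The three facets above act as filter queries against the underlying search index. As a hedged sketch (the index appears to be Solr/Lucene, but the actual endpoint, collection name, and base query are not shown on this page, so the host and core name below are assumptions), the filtered request could be assembled like this; note that `[2000 TO 2010}` is Solr's half-open range syntax, with an inclusive lower and exclusive upper bound:

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint: the real host and collection name do not
# appear on the results page.
BASE = "http://localhost:8983/solr/literature/select"

params = [
    ("q", "*:*"),                          # match all, then filter via fq
    ("fq", 'language_ss:"e"'),
    ("fq", 'theme_ss:"Retrievalstudien"'),
    ("fq", "year_i:[2000 TO 2010}"),       # [ inclusive lower, } exclusive upper
    ("rows", "20"),
]
url = BASE + "?" + urlencode(params)       # urlencode accepts repeated keys
```

Repeating the `fq` key (rather than combining the filters into one query) lets Solr cache each filter independently.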
  1. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.02327815 = product of:
      0.0465563 = sum of:
        0.0465563 = product of:
          0.0931126 = sum of:
            0.0931126 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.0931126 = score(doc=6438,freq=2.0), product of:
                0.17190179 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049089137 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 8.2001 16:22:19
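The indented tree under each result is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. The arithmetic of a single leaf can be reproduced directly from the quantities shown: tf is the square root of the term frequency, idf is 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and the coord factors scale the sums. A minimal sketch, with parameter names following the tree:

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs,
                             field_norm, query_norm, coord=1.0):
    """Reproduce one term's score from a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.5018296 in the tree
    query_weight = idf * query_norm                  # queryWeight line
    field_weight = tf * idf * field_norm             # fieldWeight line
    return coord * query_weight * field_weight

# Entry 1: weight(_text_:22 in 6438); the two coord(1/2) factors give 0.25.
score = classic_similarity_score(2.0, 3622, 44218, 0.109375, 0.049089137, 0.25)
```

Plugging in the values from entry 1 reproduces its top-level score of 0.02327815.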
  2. Bar-Ilan, J.: ¬The Web as an information source on informetrics? : A content analysis (2000) 0.01
    0.011955574 = product of:
      0.023911148 = sum of:
        0.023911148 = product of:
          0.09564459 = sum of:
            0.09564459 = weight(_text_:authors in 4587) [ClassicSimilarity], result of:
              0.09564459 = score(doc=4587,freq=4.0), product of:
                0.22378825 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049089137 = queryNorm
                0.42738882 = fieldWeight in 4587, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4587)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
     This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes by way of content analysis the Web pages retrieved by the major search engines on a particular date (June 7, 1998), as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of the most productive authors, most cited authors, works, and sources were compiled. The list of references obtained from the Web was also compared to data retrieved from commercial databases. In most cases, the list of references extracted from the Web outperformed the commercial bibliographic databases. The results of these comparisons indicate that valuable, freely available data lie hidden in the Web, waiting to be extracted from the millions of Web pages.
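The ranked lists of most productive authors, most cited authors, works, and sources described in the abstract amount to frequency counts over the extracted references. A minimal sketch, assuming a hypothetical record format (the article's actual data layout is not shown):

```python
from collections import Counter

# Hypothetical extracted-reference records; the article's actual data
# layout is not shown in the abstract.
references = [
    {"authors": ["Egghe, L."], "source": "Scientometrics"},
    {"authors": ["Rousseau, R.", "Egghe, L."], "source": "JASIS"},
    {"authors": ["Rousseau, R."], "source": "Scientometrics"},
]

# Rank authors by number of references they appear in, and sources by use.
most_productive = Counter(
    author for ref in references for author in ref["authors"]
).most_common()
most_used_sources = Counter(ref["source"] for ref in references).most_common()
```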
  3. Kilgour, F.G.; Moran, B.B.: Surname plus recallable title word searches for known items by scholars (2000) 0.01
    0.011271823 = product of:
      0.022543646 = sum of:
        0.022543646 = product of:
          0.090174586 = sum of:
            0.090174586 = weight(_text_:authors in 4296) [ClassicSimilarity], result of:
              0.090174586 = score(doc=4296,freq=2.0), product of:
                0.22378825 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049089137 = queryNorm
                0.40294603 = fieldWeight in 4296, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4296)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
     This experiment searched an online library catalog using author surnames plus title words of books cited in 8 scholarly works, where the authors of those works had selected the title words as being recallable. Searches comprising a surname together with two recallable title words, or one if only one was available, yielded a single-screen miniature catalog (minicat) 99.0% of the time.
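The search procedure described here can be sketched as a simple conjunctive filter: keep catalog records whose author field contains the surname and whose title contains every chosen title word. This is a hypothetical reconstruction (the record format and matching rules are assumptions, not taken from the paper):

```python
def minicat_search(catalog, surname, title_words):
    """Keep records whose author field contains the surname and whose
    title contains every recallable title word (case-insensitive)."""
    words = [w.lower() for w in title_words]
    return [
        rec for rec in catalog
        if surname.lower() in rec["author"].lower()
        and all(w in rec["title"].lower() for w in words)
    ]

# Hypothetical two-record catalog for illustration:
catalog = [
    {"author": "Kilgour, F.G.", "title": "The evolution of the book"},
    {"author": "Moran, B.B.", "title": "Library and information center management"},
]
hits = minicat_search(catalog, "Kilgour", ["evolution", "book"])
```

In the study's terms, the returned list is the single-screen "minicat" from which the known item is picked.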
  4. Leininger, K.: Interindexer consistency in PsychINFO (2000) 0.01
    0.00997635 = product of:
      0.0199527 = sum of:
        0.0199527 = product of:
          0.0399054 = sum of:
            0.0399054 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
              0.0399054 = score(doc=2552,freq=2.0), product of:
                0.17190179 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049089137 = queryNorm
                0.23214069 = fieldWeight in 2552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2552)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    9. 2.1997 18:44:22
  5. Kilgour, F.: ¬An experiment using coordinate title word searches (2004) 0.01
    0.009862845 = product of:
      0.01972569 = sum of:
        0.01972569 = product of:
          0.07890276 = sum of:
            0.07890276 = weight(_text_:authors in 2065) [ClassicSimilarity], result of:
              0.07890276 = score(doc=2065,freq=2.0), product of:
                0.22378825 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049089137 = queryNorm
                0.35257778 = fieldWeight in 2065, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2065)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
     This study, the fourth and last of a series designed to produce new information to improve the retrievability of books in libraries, explores the effectiveness of retrieving a known-item book using words from titles only. From daily printouts of circulation records at the Walter Royal Davis Library of the University of North Carolina at Chapel Hill, 749 titles were taken and then searched on the 4-million-entry catalog of the library of the University of Michigan. The principal finding was that searches produced titles having personal authors 81.4% of the time and anonymous titles 91.5% of the time; these figures are 15% and 5% lower, respectively, than the lowest findings presented in the previous three articles of this series (Kilgour, 1995; 1997; 2001).
  6. Oppenheim, C.; Morris, A.; McKnight, C.: ¬The evaluation of WWW search engines (2000) 0.01
    0.008453867 = product of:
      0.016907735 = sum of:
        0.016907735 = product of:
          0.06763094 = sum of:
            0.06763094 = weight(_text_:authors in 4546) [ClassicSimilarity], result of:
              0.06763094 = score(doc=4546,freq=2.0), product of:
                0.22378825 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049089137 = queryNorm
                0.30220953 = fieldWeight in 4546, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4546)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
     The literature on the evaluation of Internet search engines is reviewed. Although there have been many studies, there has been little consistency in the way such studies have been carried out. This problem is exacerbated by the fact that recall is virtually impossible to calculate in the fast-changing Internet environment, and therefore the traditional Cranfield type of evaluation is not usually possible. A variety of alternative evaluation methods has been suggested to overcome this difficulty. The authors recommend that a standardised set of tools be developed for the evaluation of web search engines so that, in future, comparisons between search engines can be made more effectively and variations in the performance of any given search engine can be tracked over time. The paper itself does not provide such a standard set of tools, but it investigates the issues and makes preliminary recommendations about the types of tools needed.
  7. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.01
    0.008313625 = product of:
      0.01662725 = sum of:
        0.01662725 = product of:
          0.0332545 = sum of:
            0.0332545 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.0332545 = score(doc=1184,freq=2.0), product of:
                0.17190179 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049089137 = queryNorm
                0.19345059 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  8. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.01
    0.008313625 = product of:
      0.01662725 = sum of:
        0.01662725 = product of:
          0.0332545 = sum of:
            0.0332545 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
              0.0332545 = score(doc=2026,freq=2.0), product of:
                0.17190179 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049089137 = queryNorm
                0.19345059 = fieldWeight in 2026, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2026)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  9. Larsen, B.; Ingwersen, P.; Lund, B.: Data fusion according to the principle of polyrepresentation (2009) 0.01
    0.0066509005 = product of:
      0.013301801 = sum of:
        0.013301801 = product of:
          0.026603602 = sum of:
            0.026603602 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
              0.026603602 = score(doc=2752,freq=2.0), product of:
                0.17190179 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049089137 = queryNorm
                0.15476047 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2752)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 18:48:28
  10. Cooper, M.D.; Chen, H.-M.: Predicting the relevance of a library catalog search (2001) 0.01
    0.0056359116 = product of:
      0.011271823 = sum of:
        0.011271823 = product of:
          0.045087293 = sum of:
            0.045087293 = weight(_text_:authors in 6519) [ClassicSimilarity], result of:
              0.045087293 = score(doc=6519,freq=2.0), product of:
                0.22378825 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049089137 = queryNorm
                0.20147301 = fieldWeight in 6519, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6519)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
     Relevance has been a difficult concept to define, let alone measure. In this paper, a simple operational definition of relevance is proposed for a Web-based library catalog: whether or not during a search session the user saves, prints, mails, or downloads a citation. If one of those actions is performed, the session is considered relevant to the user. An analysis is presented illustrating the advantages and disadvantages of this definition. With this definition and good transaction logging, it is possible to ascertain the relevance of a session. This was done for 905,970 sessions conducted with the University of California's Melvyl online catalog. Next, a methodology was developed to try to predict the relevance of a session. A number of variables were defined that characterize a session, none of which used any demographic information about the user. The values of the variables were computed for the sessions. Principal components analysis was used to extract a new set of variables out of the original set. A stratified random sampling technique was used to form ten strata such that each stratum of 90,570 sessions contained the same proportion of relevant to nonrelevant sessions. Logistic regression was used to ascertain the regression coefficients for nine of the ten strata. Then, the coefficients were used to predict the relevance of the sessions in the missing stratum. Overall, 17.85% of the sessions were determined to be relevant. The predicted number of relevant sessions for all ten strata was 11%, a 6.85% difference. The authors believe that the methodology can be further refined and the prediction improved. This methodology could also have significant application in improving user searching and in predicting electronic commerce buying decisions without the use of personal demographic data.
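The operational definition of relevance proposed in the abstract is directly codable: a session is relevant if and only if it contains at least one save, print, mail, or download action. A minimal sketch, with the action labels as assumed names (the paper's actual transaction-log vocabulary is not shown):

```python
# Assumed action labels; the Melvyl log's real action codes are not given.
RELEVANT_ACTIONS = {"save", "print", "mail", "download"}

def session_relevant(actions):
    """A session counts as relevant iff the user saved, printed, mailed,
    or downloaded at least one citation during it."""
    return bool(RELEVANT_ACTIONS & set(actions))
```

Under this definition relevance is a property of the whole session, observable from the transaction log alone, with no demographic data required.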
  11. Mansourian, Y.; Ford, N.: Web searchers' attributions of success and failure: an empirical study (2007) 0.01
    0.0056359116 = product of:
      0.011271823 = sum of:
        0.011271823 = product of:
          0.045087293 = sum of:
            0.045087293 = weight(_text_:authors in 840) [ClassicSimilarity], result of:
              0.045087293 = score(doc=840,freq=2.0), product of:
                0.22378825 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049089137 = queryNorm
                0.20147301 = fieldWeight in 840, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.03125 = fieldNorm(doc=840)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
     Purpose - This paper reports the findings of a study designed to explore web searchers' perceptions of the causes of their search failure and success. In particular, it seeks to discover the extent to which the constructs locus of control and attribution theory might provide useful frameworks for understanding searchers' perceptions. Design/methodology/approach - A combination of inductive and deductive approaches was employed. Perceptions of failed and successful searches were derived from inductive analysis of open-ended qualitative interviews with a sample of 37 biologists at the University of Sheffield. These perceptions were classified into "internal" and "external" attributions, and the relationships between these categories and "successful" and "failed" searches were analysed deductively to test the extent to which they might be explainable using locus of control and attribution theory as interpretive frameworks. Findings - All searchers were readily able to recall "successful" and "unsuccessful" searches. In a large majority of cases (82.4 per cent), they clearly attributed each search to either internal (e.g. ability or effort) or external (e.g. luck or information not being available) factors. The pattern of such relationships was analysed and mapped onto those that would be predicted by locus of control and attribution theory. The authors conclude that the potential of these theoretical frameworks to illuminate our understanding of web searching, and associated training, merits further systematic study. Research limitations/implications - The findings are based on a relatively small sample of academic and research staff in a particular subject area. Importantly, also, the study can at best provide a prima facie case for further systematic study since, although the patterns of attribution behaviour accord with those predicted by locus of control and attribution theory, data relating to the predictive elements of these theories (e.g. levels of confidence and achievement) were not available. This issue is discussed, and recommendations are made for further work. Originality/value - The findings provide some empirical support for the notion that locus of control and attribution theory might - subject to the limitations noted above - be potentially useful theoretical frameworks for helping us better understand web-based information seeking. If so, they could have implications particularly for a better understanding of searchers' motivations, and for the design and development of more effective search training programmes.
  12. Mansourian, Y.; Ford, N.: Search persistence and failure on the web : a "bounded rationality" and "satisficing" analysis (2007) 0.01
    0.0056359116 = product of:
      0.011271823 = sum of:
        0.011271823 = product of:
          0.045087293 = sum of:
            0.045087293 = weight(_text_:authors in 841) [ClassicSimilarity], result of:
              0.045087293 = score(doc=841,freq=2.0), product of:
                0.22378825 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049089137 = queryNorm
                0.20147301 = fieldWeight in 841, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.03125 = fieldNorm(doc=841)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
     Purpose - Our current knowledge of how searchers perceive and react to the possibility of missing potentially important information whilst searching the web is limited. The study reported here seeks to investigate such perceptions and reactions, and to explore the extent to which Simon's "bounded rationality" theory is useful in illuminating these issues. Design/methodology/approach - A total of 37 academic staff, research staff and research students in three university departments were interviewed about their web searching. The open-ended, semi-structured interviews were inductively analysed. The emergence of the concept of "good enough" searching prompted a further analysis to explore the extent to which the data could be interpreted in terms of Simon's concepts of "bounded rationality" and "satisficing". Findings - The results indicate that the risk of missing potentially important information was a matter of concern to the interviewees. Their estimations of the likely extent and importance of missed information affected individuals' decisions as to when to stop searching - decisions based on very different criteria, which map well onto Simon's concepts. On the basis of the interview data, the authors propose tentative categorizations of perceptions of the risk of missing information, including "inconsequential", "tolerable", "damaging" and "disastrous", and of search strategies, including "perfunctory", "minimalist", "nervous" and "extensive". It is concluded that there is at least a prima facie case for bounded rationality and satisficing being considered as potentially useful concepts in our quest to better understand aspects of human information behaviour. Research limitations/implications - Although the findings are based on a relatively small sample and an exploratory qualitative analysis, it is argued that the study raises a number of interesting questions and has implications for both the development of theory and practice in the areas of web searching and information literacy. Originality/value - The paper focuses on an aspect of web searching which has not to date been well explored. Whilst research has done much to illuminate searchers' perceptions of what they find on the web, we know relatively little of their perceptions of, and reactions to, information that they fail to find. The study reported here provides some tentative models, based on empirical evidence, of these phenomena.
  13. TREC: experiment and evaluation in information retrieval (2005) 0.00
    0.0035224447 = product of:
      0.0070448895 = sum of:
        0.0070448895 = product of:
          0.028179558 = sum of:
            0.028179558 = weight(_text_:authors in 636) [ClassicSimilarity], result of:
              0.028179558 = score(doc=636,freq=2.0), product of:
                0.22378825 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.049089137 = queryNorm
                0.12592064 = fieldWeight in 636, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=636)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Footnote
     ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. It is well structured and written, its chapters are self-contained, and references to specialized and more detailed publications appear throughout, which makes it easy to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC's evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the authors' impressive effort and their experience in the field, it can satisfy the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers wishing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."