Search (89 results, page 1 of 5)

  • theme_ss:"Retrievalstudien"
  1. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.09
    0.09054384 = product of:
      0.21126895 = sum of:
        0.070699565 = weight(_text_:united in 1184) [ClassicSimilarity], result of:
          0.070699565 = score(doc=1184,freq=2.0), product of:
            0.22812355 = queryWeight, product of:
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.04066292 = queryNorm
            0.30991787 = fieldWeight in 1184, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6101127 = idf(docFreq=439, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.068113975 = weight(_text_:states in 1184) [ClassicSimilarity], result of:
          0.068113975 = score(doc=1184,freq=2.0), product of:
            0.22391328 = queryWeight, product of:
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.04066292 = queryNorm
            0.304198 = fieldWeight in 1184, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.072455406 = sum of:
          0.044909086 = weight(_text_:design in 1184) [ClassicSimilarity], result of:
            0.044909086 = score(doc=1184,freq=4.0), product of:
              0.15288728 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.04066292 = queryNorm
              0.29373983 = fieldWeight in 1184, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
          0.027546322 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.027546322 = score(doc=1184,freq=2.0), product of:
              0.14239462 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04066292 = queryNorm
              0.19345059 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
      0.42857143 = coord(3/7)
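    The breakdown above is Lucene's ClassicSimilarity explain output, and every number in it follows from a handful of standard TF-IDF formulas. Below is a minimal Python sketch that reproduces this entry's final score from the constants copied out of the tree; the formulas tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)) and coord = matched/total clauses are ClassicSimilarity's definitions, while the variable names are mine:

      import math

      MAX_DOCS   = 44218        # documents in the index (from the tree above)
      QUERY_NORM = 0.04066292   # queryNorm, copied from the tree
      FIELD_NORM = 0.0390625    # fieldNorm(doc=1184), copied from the tree

      def idf(doc_freq):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_weight(freq, doc_freq):
          # score contribution of one clause: queryWeight * fieldWeight
          tf = math.sqrt(freq)                       # e.g. 1.4142135 for freq=2.0
          query_weight = idf(doc_freq) * QUERY_NORM  # e.g. 0.22812355 for 'united'
          field_weight = tf * idf(doc_freq) * FIELD_NORM
          return query_weight * field_weight

      united = term_weight(2.0, 439)    # ~0.070699565
      states = term_weight(2.0, 487)    # ~0.068113975
      design = term_weight(4.0, 2798)   # ~0.044909086
      t22    = term_weight(2.0, 3622)   # ~0.027546322

      # three of the seven query clauses matched, hence coord(3/7)
      print((united + states + (design + t22)) * 3 / 7)   # ~0.09054384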
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented; Pauline's questions concerned the logic of the approach, and mine the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects, including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and the development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  2. Jansen, B.J.; McNeese, M.D.: Evaluating the Effectiveness of and Patterns of Interactions With Automated Searching Assistance (2005) 0.02
    0.02399764 = product of:
      0.083991736 = sum of:
        0.068113975 = weight(_text_:states in 4815) [ClassicSimilarity], result of:
          0.068113975 = score(doc=4815,freq=2.0), product of:
            0.22391328 = queryWeight, product of:
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.04066292 = queryNorm
            0.304198 = fieldWeight in 4815, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4815)
        0.01587776 = product of:
          0.03175552 = sum of:
            0.03175552 = weight(_text_:design in 4815) [ClassicSimilarity], result of:
              0.03175552 = score(doc=4815,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.20770542 = fieldWeight in 4815, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4815)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    We report quantitative and qualitative results of an empirical evaluation to determine whether automated assistance improves searching performance and when searchers desire system intervention in the search process. Forty participants interacted with two fully functional information retrieval systems in a counterbalanced, within-participant study. The systems were identical in all respects except that one offered automated assistance and the other did not. The study used a client-side automated assistance application, an approximately 500,000-document Text REtrieval Conference content collection, and six topics. Results indicate that automated assistance can improve searching performance. However, the improvement is less dramatic than one might expect, with an approximately 20% performance increase, as measured by the number of user-selected relevant documents. Concerning patterns of interaction, we identified 1,879 occurrences of searcher-system interactions and classified them into 9 major categories and 27 subcategories or states. Results indicate that there are predictable patterns of times when searchers desire and implement searching assistance. The most common three-state pattern is Execute Query -> View Results: With Scrolling -> View Assistance. Searchers appear receptive to automated assistance; there is a 71% implementation rate. There does not seem to be a correlation between the use of assistance and previous searching performance. We discuss the implications for the design of information retrieval systems and future research directions.
  3. Balog, K.; Schuth, A.; Dekker, P.; Tavakolpoursaleh, N.; Schaer, P.; Chuang, P.-Y.: Overview of the TREC 2016 Open Search track Academic Search Edition (2016) 0.02
    0.015568908 = product of:
      0.108982354 = sum of:
        0.108982354 = weight(_text_:states in 43) [ClassicSimilarity], result of:
          0.108982354 = score(doc=43,freq=2.0), product of:
            0.22391328 = queryWeight, product of:
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.04066292 = queryNorm
            0.48671678 = fieldWeight in 43, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.506572 = idf(docFreq=487, maxDocs=44218)
              0.0625 = fieldNorm(doc=43)
      0.14285715 = coord(1/7)
    
    Source
    TREC 2016, Gaithersburg, United States
  4. Lazonder, A.W.; Biemans, H.J.A.; Wopereis, I.G.J.H.: Differences between novice and experienced users in searching information on the World Wide Web (2000) 0.01
    0.01488273 = product of:
      0.10417911 = sum of:
        0.10417911 = weight(_text_:sites in 4598) [ClassicSimilarity], result of:
          0.10417911 = score(doc=4598,freq=4.0), product of:
            0.21257097 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.04066292 = queryNorm
            0.49009097 = fieldWeight in 4598, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.046875 = fieldNorm(doc=4598)
      0.14285715 = coord(1/7)
    
    Abstract
    Searching for information on the WWW basically comes down to locating an appropriate Web site and to retrieving relevant information from that site. This study examined the effect of a user's WWW experience on both phases of the search process. 35 students from 2 schools for Dutch pre-university education were observed while performing 3 search tasks. The results indicate that subjects with WWW-experience are more proficient in locating Web sites than are novice WWW-users. The observed differences were ascribed to the experts' superior skills in operating Web search engines. However, on tasks that required subjects to locate information on specific Web sites, the performance of experienced and novice users was equivalent - a result that is in line with hypertext research. Based on these findings, implications for training and supporting students in searching for information on the WWW are identified. Finally, the role of the subjects' level of domain expertise is discussed and directions for future research are proposed
  5. Chu, H.: Factors affecting relevance judgment : a report from TREC Legal track (2011) 0.01
    0.010350773 = product of:
      0.072455406 = sum of:
        0.072455406 = sum of:
          0.044909086 = weight(_text_:design in 4540) [ClassicSimilarity], result of:
            0.044909086 = score(doc=4540,freq=4.0), product of:
              0.15288728 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.04066292 = queryNorm
              0.29373983 = fieldWeight in 4540, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4540)
          0.027546322 = weight(_text_:22 in 4540) [ClassicSimilarity], result of:
            0.027546322 = score(doc=4540,freq=2.0), product of:
              0.14239462 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04066292 = queryNorm
              0.19345059 = fieldWeight in 4540, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4540)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - This study intends to identify factors that affect relevance judgment of retrieved information as part of the 2007 TREC Legal track interactive task. Design/methodology/approach - Data were gathered and analyzed from the participants of the 2007 TREC Legal track interactive task using a questionnaire which includes not only a list of 80 relevance factors identified in prior research, but also a space for expressing their thoughts on relevance judgment in the process. Findings - This study finds that topicality remains a primary criterion, out of various options, for determining relevance, while specificity of the search request, task, or retrieved results also helps greatly in relevance judgment. Research limitations/implications - Relevance research should focus on the topicality and specificity of what is being evaluated as well as conducted in real environments. Practical implications - If multiple relevance factors are presented to assessors, the total number in a list should be below ten to take account of the limited processing capacity of human beings' short-term memory. Otherwise, the assessors might either completely ignore or inadequately consider some of the relevance factors when making judgment decisions. Originality/value - This study presents a method for reducing the artificiality of relevance research design, an apparent limitation in many related studies. Specifically, relevance judgment was made in this research as part of the 2007 TREC Legal track interactive task rather than a study devised for the sake of it. The assessors also served as searchers so that their searching experience would facilitate their subsequent relevance judgments.
    Date
    12. 7.2011 18:29:22
  6. Radev, D.R.; Libner, K.; Fan, W.: Getting answers to natural language questions on the Web (2002) 0.01
    0.008769733 = product of:
      0.061388128 = sum of:
        0.061388128 = weight(_text_:sites in 5204) [ClassicSimilarity], result of:
          0.061388128 = score(doc=5204,freq=2.0), product of:
            0.21257097 = queryWeight, product of:
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.04066292 = queryNorm
            0.28878886 = fieldWeight in 5204, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.227637 = idf(docFreq=644, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
      0.14285715 = coord(1/7)
    
    Abstract
    Seven hundred natural language questions from TREC-8 and TREC-9 were sent by Radev, Libner, and Fan to each of nine web search engines. The top 40 sites returned by each system were stored to evaluate their yield of correct answers. Each question per engine was scored as the sum of the reciprocal ranks of identified correct answers. The large number of zero scores gave a positive skew violating the normality assumption for ANOVA, so values were transformed to zero for no hit and one for one or more hits. The non-zero values were then square-root transformed to remove the remaining positive skew. Interactions were observed between search engine and answer type (name, place, date, et cetera), search engine and number of proper nouns in the query, search engine and the need for time limitation, and search engine and total query words. All effects were significant. Shortest queries had the highest mean scores. One or more proper nouns present provides a significant advantage. Non-time-dependent queries have an advantage. Place, name, person, and text description had mean scores between .85 and .9, with date at .81 and number at .59. There were significant differences in score by search engine. Search engines found at least one correct answer in between 75.45% and 87.7% of the cases. Google and Northern Light were just short of a 90% hit rate. No evidence indicated that a particular engine was better at answering any particular sort of question.
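    A minimal sketch of the scoring rule described above, in Python (the function names and the separate transform helper are mine, not the authors'): each question-engine pair is scored as the sum of reciprocal ranks of the correct answers found in the top 40 results, and non-zero scores are square-root transformed to reduce skew.

      import math

      def question_score(correct_ranks):
          # sum of reciprocal ranks of correct answers in the top 40 results;
          # e.g. correct answers at ranks 2 and 10 -> 1/2 + 1/10 = 0.6
          return sum(1.0 / r for r in correct_ranks)

      def deskew(score):
          # square-root transform of non-zero scores; zero (no hit) stays zero
          return math.sqrt(score) if score > 0 else 0.0

      print(deskew(question_score([2, 10])))   # ~0.775
      print(deskew(question_score([])))        # 0.0 (no hit)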
  7. Wildemuth, B.; Freund, L.; Toms, E.G.: Untangling search task complexity and difficulty in the context of interactive information retrieval studies (2014) 0.01
    0.008471692 = product of:
      0.05930184 = sum of:
        0.05930184 = sum of:
          0.03175552 = weight(_text_:design in 1786) [ClassicSimilarity], result of:
            0.03175552 = score(doc=1786,freq=2.0), product of:
              0.15288728 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.04066292 = queryNorm
              0.20770542 = fieldWeight in 1786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1786)
          0.027546322 = weight(_text_:22 in 1786) [ClassicSimilarity], result of:
            0.027546322 = score(doc=1786,freq=2.0), product of:
              0.14239462 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04066292 = queryNorm
              0.19345059 = fieldWeight in 1786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1786)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - One core element of interactive information retrieval (IIR) experiments is the assignment of search tasks. The purpose of this paper is to provide an analytical review of current practice in developing those search tasks to test, observe or control task complexity and difficulty. Design/methodology/approach - Over 100 prior studies of IIR were examined in terms of how each defined task complexity and/or difficulty (or related concepts) and subsequently interpreted those concepts in the development of the assigned search tasks. Findings - Search task complexity is found to include three dimensions: multiplicity of subtasks or steps, multiplicity of facets, and indeterminability. Search task difficulty is based on an interaction between the search task and the attributes of the searcher or the attributes of the search situation. The paper highlights the anomalies in our use of these two concepts, concluding with suggestions for future methodological research related to search task complexity and difficulty. Originality/value - By analyzing and synthesizing current practices, this paper provides guidance for future experiments in IIR that involve these two constructs.
    Date
    6. 4.2015 19:31:22
  8. Ravana, S.D.; Taheri, M.S.; Rajagopal, P.: Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems (2015) 0.01
    0.008471692 = product of:
      0.05930184 = sum of:
        0.05930184 = sum of:
          0.03175552 = weight(_text_:design in 2587) [ClassicSimilarity], result of:
            0.03175552 = score(doc=2587,freq=2.0), product of:
              0.15288728 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.04066292 = queryNorm
              0.20770542 = fieldWeight in 2587, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2587)
          0.027546322 = weight(_text_:22 in 2587) [ClassicSimilarity], result of:
            0.027546322 = score(doc=2587,freq=2.0), product of:
              0.14239462 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04066292 = queryNorm
              0.19345059 = fieldWeight in 2587, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2587)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - The purpose of this paper is to propose a method for obtaining more accurate results when comparing the performance of paired information retrieval (IR) systems, with reference to the current method, which is based on the mean effectiveness scores of the systems across a set of identified topics/queries. Design/methodology/approach - In the proposed approach, instead of the classic method of using a set of topic scores, document-level scores are considered as the evaluation unit. These document scores are the defined document weights, which play the role of the mean average precision (MAP) score of the systems as the significance test's statistic. The experiments were conducted using the TREC 9 Web track collection. Findings - The p-values generated through the two types of significance tests, namely the Student's t-test and the Mann-Whitney test, show that by using document-level scores as the evaluation unit, the difference between IR systems is more significant than when utilizing topic scores. Originality/value - Utilizing a suitable test collection is a primary prerequisite for the comparative evaluation of IR systems. However, in addition to reusable test collections, accurate statistical testing is a necessity for these evaluations. The findings of this study will assist IR researchers in evaluating their retrieval systems and algorithms more accurately.
    Date
    20. 1.2015 18:30:22
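    A minimal sketch of the mechanics of the two significance tests named in the abstract above, applied to per-unit effectiveness scores (the numbers are invented for illustration; classically there is one score per topic, while the paper proposes one score per document, which yields many more paired values):

      import numpy as np
      from scipy.stats import ttest_rel, mannwhitneyu

      # hypothetical effectiveness scores of two IR systems over the same
      # evaluation units (topics in the classic setup, documents here)
      system_a = np.array([0.42, 0.31, 0.55, 0.48, 0.29, 0.61])
      system_b = np.array([0.39, 0.35, 0.50, 0.47, 0.33, 0.58])

      t_stat, t_p = ttest_rel(system_a, system_b)     # paired Student's t-test
      u_stat, u_p = mannwhitneyu(system_a, system_b)  # Mann-Whitney U test
      print(t_p, u_p)   # smaller p-values = more significant difference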
  9. Rajagopal, P.; Ravana, S.D.; Koh, Y.S.; Balakrishnan, V.: Evaluating the effectiveness of information retrieval systems using effort-based relevance judgment (2019) 0.01
    0.008471692 = product of:
      0.05930184 = sum of:
        0.05930184 = sum of:
          0.03175552 = weight(_text_:design in 5287) [ClassicSimilarity], result of:
            0.03175552 = score(doc=5287,freq=2.0), product of:
              0.15288728 = queryWeight, product of:
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.04066292 = queryNorm
              0.20770542 = fieldWeight in 5287, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7598698 = idf(docFreq=2798, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5287)
          0.027546322 = weight(_text_:22 in 5287) [ClassicSimilarity], result of:
            0.027546322 = score(doc=5287,freq=2.0), product of:
              0.14239462 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04066292 = queryNorm
              0.19345059 = fieldWeight in 5287, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5287)
      0.14285715 = coord(1/7)
    
    Abstract
    Purpose - Effort, in addition to relevance, is a major factor in the satisfaction and utility of a document to the actual user. The purpose of this paper is to propose a method for generating relevance judgments that incorporate effort without involving human judges. The study then determines the variation in system rankings due to low-effort relevance judgments when evaluating retrieval systems at different depths of evaluation. Design/methodology/approach - Effort-based relevance judgments are generated using a proposed boxplot approach for simple document features, HTML features and readability features. The boxplot approach is a simple yet repeatable approach to classifying documents' effort while ensuring outlier scores do not skew the grading of the entire set of documents. Findings - Evaluating retrieval systems using low-effort relevance judgments has a stronger influence at shallow depths of evaluation than at deeper depths. The difference in system rankings is shown to be due to low-effort documents and not to the number of relevant documents. Originality/value - Hence, it is crucial to evaluate retrieval systems at shallow depths using low-effort relevance judgments.
    Date
    20. 1.2015 18:30:22
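    One plausible reading of the boxplot approach described in the abstract above, as a Python sketch (the actual features and cut-offs used in the paper may differ; the three-level grading here is my illustration): documents are graded by where a feature score falls relative to the quartiles, which are robust to outliers.

      import numpy as np

      def effort_grades(scores):
          # grade by boxplot position: 0 = low (below Q1), 1 = medium
          # (inside the box), 2 = high (above Q3); quartiles are robust,
          # so one extreme score does not skew the grading of the rest
          q1, q3 = np.percentile(scores, [25, 75])
          return np.digitize(scores, bins=[q1, q3])

      # e.g. a readability feature for ten documents, one extreme outlier
      print(effort_grades([12, 15, 14, 18, 20, 22, 25, 24, 19, 400]))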
  10. Aitchison, T.M.: Comparative evaluation of index languages : Part I, Design. Part II, Results (1969) 0.01
    0.0063511035 = product of:
      0.044457722 = sum of:
        0.044457722 = product of:
          0.088915445 = sum of:
            0.088915445 = weight(_text_:design in 561) [ClassicSimilarity], result of:
              0.088915445 = score(doc=561,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.58157516 = fieldWeight in 561, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.109375 = fieldNorm(doc=561)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  11. Chen, H.; Dhar, V.: Cognitive process as a basis for intelligent retrieval system design (1991) 0.01
    0.0062859626 = product of:
      0.044001736 = sum of:
        0.044001736 = product of:
          0.08800347 = sum of:
            0.08800347 = weight(_text_:design in 3845) [ClassicSimilarity], result of:
              0.08800347 = score(doc=3845,freq=6.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.57561016 = fieldWeight in 3845, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3845)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    2 studies were conducted to investigate the cognitive processes involved in online document-based information retrieval. These studies led to the development of 5 computerised models of online document retrieval. These models were incorporated into the design of an 'intelligent' document-based retrieval system. Following a discussion of this system, the broader implications of the research for the design of information retrieval systems are discussed.
  12. Serrano Cobos, J.; Quintero Orta, A.: Design, development and management of an information recovery system for an Internet Website : from documentary theory to practice (2003) 0.01
    0.0060863574 = product of:
      0.0426045 = sum of:
        0.0426045 = product of:
          0.085209 = sum of:
            0.085209 = weight(_text_:design in 2726) [ClassicSimilarity], result of:
              0.085209 = score(doc=2726,freq=10.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.55733216 = fieldWeight in 2726, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2726)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    A real case study is shown, explaining in a timeline the whole process of design, development and evaluation of a search engine used as a navigational help tool for end users and clients on an e-commerce-driven content website. The site is a community website, which determines the core design of the information service. The study involves several steps: information recovery system analysis, comparative analysis of other commercial search engines, service design, functionalities and scope, software selection, project design, project management, future service administration, and conclusions.
  13. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    0.0055092643 = product of:
      0.03856485 = sum of:
        0.03856485 = product of:
          0.0771297 = sum of:
            0.0771297 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.0771297 = score(doc=262,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    20.10.2000 12:22:23
  14. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    0.0055092643 = product of:
      0.03856485 = sum of:
        0.03856485 = product of:
          0.0771297 = sum of:
            0.0771297 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.0771297 = score(doc=6418,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Online. 22(1998) no.6, S.57-58
  15. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    0.0055092643 = product of:
      0.03856485 = sum of:
        0.03856485 = product of:
          0.0771297 = sum of:
            0.0771297 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.0771297 = score(doc=6438,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    11. 8.2001 16:22:19
  16. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    0.0055092643 = product of:
      0.03856485 = sum of:
        0.03856485 = product of:
          0.0771297 = sum of:
            0.0771297 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.0771297 = score(doc=5089,freq=2.0), product of:
                0.14239462 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04066292 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2006 18:43:54
  17. Carterette, B.: Test collections (2009) 0.01
    0.0051324675 = product of:
      0.03592727 = sum of:
        0.03592727 = product of:
          0.07185454 = sum of:
            0.07185454 = weight(_text_:design in 3891) [ClassicSimilarity], result of:
              0.07185454 = score(doc=3891,freq=4.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.46998373 = fieldWeight in 3891, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3891)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Research and development of search engines and other information retrieval (IR) systems proceeds by a cycle of design, implementation, and experimentation, with the results of each experiment influencing design decisions in the next iteration of the cycle. Batch experiments on test collections help ensure that this process goes as smoothly and as quickly as possible. A test collection comprises a collection of documents, a set of information needs, and judgments of the relevance of documents to those needs.
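    A minimal sketch of the three components named above as plain Python data structures, plus a simple batch metric computed over them (the names and the choice of precision@k are illustrative, not from the chapter):

      from dataclasses import dataclass

      @dataclass
      class TestCollection:
          docs: dict     # doc_id -> document text
          topics: dict   # topic_id -> statement of an information need
          qrels: dict    # (topic_id, doc_id) -> relevance judgment (0/1/...)

      def precision_at_k(ranked_doc_ids, topic_id, tc, k=10):
          # fraction of the top-k retrieved documents judged relevant
          top = ranked_doc_ids[:k]
          hits = sum(1 for d in top if tc.qrels.get((topic_id, d), 0) > 0)
          return hits / k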
  18. Spink, A.: Term relevance feedback and mediated database searching : implications for information retrieval practice and systems design (1995) 0.00
    0.004714472 = product of:
      0.033001304 = sum of:
        0.033001304 = product of:
          0.06600261 = sum of:
            0.06600261 = weight(_text_:design in 1756) [ClassicSimilarity], result of:
              0.06600261 = score(doc=1756,freq=6.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.43170762 = fieldWeight in 1756, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1756)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Research into both the algorithmic and human approaches to information retrieval is required to improve information retrieval system design and database searching effectiveness. Uses the human approach to examine the sources and effectiveness of search terms selected during mediated interactive information retrieval. Focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. Results show that terms selected from particular database fields of retrieved items during term relevance feedback (TRF) were more effective than search terms from the intermediary, database thesauri or users' domain knowledge during the interaction, but not as effective as terms from the users' written question statements. Implications for the design and testing of automatic relevance feedback techniques that place greater emphasis on these sources, and for the practice of database searching, are also discussed.
  19. King, D.W.; Bryant, E.C.: ¬The evaluation of information services and products (1971) 0.00
    0.004536503 = product of:
      0.03175552 = sum of:
        0.03175552 = product of:
          0.06351104 = sum of:
            0.06351104 = weight(_text_:design in 4157) [ClassicSimilarity], result of:
              0.06351104 = score(doc=4157,freq=2.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.41541085 = fieldWeight in 4157, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4157)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Content
    Covers the evaluative and control aspects of: dclassification and indexing processes and languages; document screening processes; composition, reproduction, acquisition, storage, and presentation; usersystem interfaces. Also contains brief and lucid primers on user surveys, statistics, sampling methods, and experimental design.
  20. Spink, A.; Goodrum, A.: ¬A study of search intermediary working notes : implications for IR system design (1996) 0.00
    0.0044909082 = product of:
      0.031436358 = sum of:
        0.031436358 = product of:
          0.062872715 = sum of:
            0.062872715 = weight(_text_:design in 6981) [ClassicSimilarity], result of:
              0.062872715 = score(doc=6981,freq=4.0), product of:
                0.15288728 = queryWeight, product of:
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.04066292 = queryNorm
                0.41123575 = fieldWeight in 6981, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7598698 = idf(docFreq=2798, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6981)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Reports findings from an exploratory study investigating working notes created during encoding and external storage (EES) processes by human search intermediaries using a Boolean information retrieval system. Analysis of 221 sets of working notes created by human search intermediaries revealed extensive use of EES processes and the creation of working notes of textual, numerical and graphical entities. Nearly 70% of the recorded working notes were textual/numerical entities, nearly 30% were graphical entities and 0.73% were indiscernible. Segmentation devices were also used in 48% of the working notes. The creation of working notes during the EES processes was a fundamental element within the mediated, interactive information retrieval process. Discusses implications for the design of interfaces to support users' EES processes and further research.

Languages

  • e 84
  • d 3
  • f 1

Types

  • a 82
  • m 4
  • s 3
  • el 1
  • r 1