Search (81 results, page 1 of 5)

  • × theme_ss:"Retrievalstudien"
  1. Pao, M.L.: Retrieval differences between term and citation indexing (1989) 0.12
    0.12149863 = product of:
      0.18224794 = sum of:
        0.13742542 = weight(_text_:citation in 3566) [ClassicSimilarity], result of:
          0.13742542 = score(doc=3566,freq=4.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.58616084 = fieldWeight in 3566, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0625 = fieldNorm(doc=3566)
        0.044822514 = product of:
          0.08964503 = sum of:
            0.08964503 = weight(_text_:reports in 3566) [ClassicSimilarity], result of:
              0.08964503 = score(doc=3566,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.39809695 = fieldWeight in 3566, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3566)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
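The explain tree above can be checked by hand. In Lucene's ClassicSimilarity, each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm with tf = √termFreq; the coord factors then scale each sum by the fraction of query clauses matched. A minimal sketch reproducing the 0.12 score of result 1, with all constants copied from the tree above:

```python
import math

def term_weight(freq, idf, query_norm, field_norm):
    # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
    tf = math.sqrt(freq)  # tf(freq) = sqrt(termFreq), e.g. sqrt(4.0) = 2.0
    return (idf * query_norm) * (tf * idf * field_norm)

QUERY_NORM = 0.04999695  # shared queryNorm for all terms of this query

# weight(_text_:citation): freq=4.0, idf=4.6892867, fieldNorm=0.0625
w_citation = term_weight(4.0, 4.6892867, QUERY_NORM, 0.0625)   # ~0.13742542

# weight(_text_:reports): freq=2.0, idf=4.503953, fieldNorm=0.0625,
# scaled by the inner coord(1/2)
w_reports = 0.5 * term_weight(2.0, 4.503953, QUERY_NORM, 0.0625)  # ~0.04482251

score = (2.0 / 3.0) * (w_citation + w_reports)  # outer coord(2/3)
print(f"{score:.8f}")  # ~0.1214986, matching the 0.12149863 above up to float32 rounding
```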
    
    Abstract
    A retrieval experiment was conducted to compare online searching using terms as opposed to citations. This is the first study in which a single database was used to retrieve two equivalent sets for each query, one using terms found in the bibliographic record to achieve higher recall, and the other using cited documents. Reports on the use of a second citation searching strategy. Overall, by using both types of search keys, the total recall is increased.
  2. McCain, K.W.: Descriptor and citation retrieval in the medical behavioral sciences literature : retrieval overlaps and novelty distribution (1989) 0.06
    0.06427486 = product of:
      0.19282457 = sum of:
        0.19282457 = weight(_text_:citation in 2290) [ClassicSimilarity], result of:
          0.19282457 = score(doc=2290,freq=14.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.82245487 = fieldWeight in 2290, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
      0.33333334 = coord(1/3)
    
    Abstract
    Search results for nine topics in the medical behavioral sciences are reanalyzed to compare the overall performance of descriptor and citation search strategies in identifying relevant and novel documents. Overlap percentages between an aggregate "descriptor-based" database (MEDLINE, EXCERPTA MEDICA, PsycINFO) and an aggregate "citation-based" database (SCISEARCH, SOCIAL SCISEARCH) ranged from 1% to 26%, with a median overlap of 8% relevant retrievals found using both search strategies. For seven topics in which both descriptor and citation strategies produced reasonably substantial retrievals, two patterns of search performance and novelty distribution were observed: (1) where descriptor and citation retrieval showed little overlap, novelty retrieval percentages differed by 17-23% between the two strategies; (2) topics with a relatively high percentage retrieval overlap showed little difference (1-4%) in descriptor and citation novelty retrieval percentages. These results reflect the varying partial congruence of two literature networks and represent two different types of subject relevance.
    Theme
    Citation indexing
  3. Thornley, C.V.; Johnson, A.C.; Smeaton, A.F.; Lee, H.: ¬The scholarly impact of TRECVid (2003-2009) (2011) 0.06
    0.0591654 = product of:
      0.0887481 = sum of:
        0.06073403 = weight(_text_:citation in 4363) [ClassicSimilarity], result of:
          0.06073403 = score(doc=4363,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.25904894 = fieldWeight in 4363, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4363)
        0.02801407 = product of:
          0.05602814 = sum of:
            0.05602814 = weight(_text_:reports in 4363) [ClassicSimilarity], result of:
              0.05602814 = score(doc=4363,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.24881059 = fieldWeight in 4363, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4363)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper reports on an investigation into the scholarly impact of the TRECVid (TREC Video Retrieval Evaluation) benchmarking conferences between 2003 and 2009. The contribution of TRECVid to research in video retrieval is assessed by analyzing publication content to show the development of techniques and approaches over time and by analyzing publication impact through publication numbers and citation analysis. Popular conference and journal venues for TRECVid publications are identified in terms of number of citations received. For a selection of participants at different career stages, the relative importance of TRECVid publications in terms of citations vis-à-vis their other publications is investigated. TRECVid, as an evaluation conference, provides data on which research teams 'scored' highly against the evaluation criteria, and the relationship between 'top scoring' teams at TRECVid and the 'top scoring' papers in terms of citations is analyzed. A strong relationship was found between 'success' at TRECVid and 'success' in citations, for both high-scoring and low-scoring teams. The implications of the study in terms of the value of TRECVid as a research activity, and the value of bibliometric analysis as a research evaluation tool, are discussed.
  4. Crestani, F.; Rijsbergen, C.J. van: Information retrieval by imaging (1996) 0.04
    0.03595905 = product of:
      0.10787715 = sum of:
        0.10787715 = sum of:
          0.06723377 = weight(_text_:reports in 6967) [ClassicSimilarity], result of:
            0.06723377 = score(doc=6967,freq=2.0), product of:
              0.2251839 = queryWeight, product of:
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.04999695 = queryNorm
              0.29857272 = fieldWeight in 6967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.046875 = fieldNorm(doc=6967)
          0.04064338 = weight(_text_:22 in 6967) [ClassicSimilarity], result of:
            0.04064338 = score(doc=6967,freq=2.0), product of:
              0.1750808 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04999695 = queryNorm
              0.23214069 = fieldWeight in 6967, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=6967)
      0.33333334 = coord(1/3)
    
    Abstract
    Explains briefly what constitutes the imaging process and how imaging can be used in information retrieval. Proposes an approach based on the concept of 'a term is a possible world', which enables the exploitation of term-to-term relationships which are estimated using an information theoretic measure. Reports results of an evaluation exercise to compare the performance of imaging retrieval, using possible world semantics, with a benchmark and using the Cranfield 2 document collection to measure precision and recall. Initially, the performance of imaging retrieval was seen to be better, but statistical analysis proved that the difference was not significant. The problem with imaging retrieval lies in the amount of computation that needs to be performed at run time, and a later experiment investigated the possibility of reducing this amount. Notes lines of further investigation.
    Source
    Information retrieval: new systems and current research. Proceedings of the 16th Research Colloquium of the British Computer Society Information Retrieval Specialist Group, Drymen, Scotland, 22-23 Mar 94. Ed.: R. Leon
  5. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.04
    0.03595905 = product of:
      0.10787715 = sum of:
        0.10787715 = sum of:
          0.06723377 = weight(_text_:reports in 2552) [ClassicSimilarity], result of:
            0.06723377 = score(doc=2552,freq=2.0), product of:
              0.2251839 = queryWeight, product of:
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.04999695 = queryNorm
              0.29857272 = fieldWeight in 2552, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.503953 = idf(docFreq=1329, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
          0.04064338 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
            0.04064338 = score(doc=2552,freq=2.0), product of:
              0.1750808 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04999695 = queryNorm
              0.23214069 = fieldWeight in 2552, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rollin (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rollin's); major-to-all term consistency (49.31% and 62.59% respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rollin's. Although comparison with previous studies is difficult due to methodological variations in the overall study of indexing consistency and the specific characteristics of the database, results generally support previous findings when trends and similar studies are analysed.
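Of the two consistency measures named in the abstract, Hooper's (1965) is commonly stated as A/(A+M+N), where A is the number of terms both indexers assigned and M and N are the terms unique to each indexer; this is equivalent to the Jaccard coefficient of the two term sets. A minimal sketch with hypothetical term sets (Rollin's variant is not reproduced here):

```python
def hooper_consistency(terms_a, terms_b):
    """Hooper (1965): agreements A divided by A + M + N, i.e. the size of the
    intersection over the size of the union of the two indexers' term sets."""
    a, b = set(terms_a), set(terms_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical indexing of the same record by two indexers:
i1 = {"information retrieval", "indexing", "databases"}
i2 = {"information retrieval", "indexing", "thesauri"}
print(hooper_consistency(i1, i2))  # 2 agreements / 4 distinct terms = 0.5
```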
    Date
    9. 2.1997 18:44:22
  6. Pao, M.L.; Worthen, D.B.: Retrieval effectiveness by semantic and citation searching (1989) 0.03
    0.034356356 = product of:
      0.10306907 = sum of:
        0.10306907 = weight(_text_:citation in 2288) [ClassicSimilarity], result of:
          0.10306907 = score(doc=2288,freq=4.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.4396206 = fieldWeight in 2288, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
      0.33333334 = coord(1/3)
    
    Theme
    Citation indexing
  7. Evans, D.A.; Lefferts, R.G.: CLARIT-TREC experiments (1995) 0.02
    0.018676046 = product of:
      0.05602814 = sum of:
        0.05602814 = product of:
          0.11205628 = sum of:
            0.11205628 = weight(_text_:reports in 1912) [ClassicSimilarity], result of:
              0.11205628 = score(doc=1912,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.49762118 = fieldWeight in 1912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1912)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Describes the following elements of the CLARIT information management system: natural language processing, document indexing, vector space querying, and query augmentation. Reports on the processing carried out as part of TREC-2 and on experiments into system parameterization. Results demonstrate high precision and excellent recall, but the system is not yet optimized.
  8. Hull, D.; Grefenstette, G.; Schulze, B.M.; Gaussier, E.; Schütze, H.; Pedersen, J.: Xerox TREC-5 site reports : routing, filtering, NLP, and Spanish tracks (1997) 0.02
    0.018676046 = product of:
      0.05602814 = sum of:
        0.05602814 = product of:
          0.11205628 = sum of:
            0.11205628 = weight(_text_:reports in 3096) [ClassicSimilarity], result of:
              0.11205628 = score(doc=3096,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.49762118 = fieldWeight in 3096, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3096)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  9. Armstrong, C.J.; Medawar, K.: Investigation into the quality of databases in general use in the UK (1996) 0.02
    0.018488344 = product of:
      0.05546503 = sum of:
        0.05546503 = product of:
          0.11093006 = sum of:
            0.11093006 = weight(_text_:reports in 6768) [ClassicSimilarity], result of:
              0.11093006 = score(doc=6768,freq=4.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.49261987 = fieldWeight in 6768, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6768)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports on a Centre for Information Quality Management (CIQM) BLRRD-funded project which investigated the quality of databases in general use in the UK. Gives a literature review of quality in library and information services. Reports the results of a CIQM questionnaire survey on the quality problems of databases and their effect on users. Carries out database evaluations of: INSPEC on ESA-IRS, INSPEC on KR Data-Star, INSPEC on UMI CD-ROM, BNB on CD-ROM, and Information Science Abstracts Plus CD-ROM. Sets out a methodology for evaluation of bibliographic databases.
  10. Cooper, M.D.; Chen, H.-M.: Predicting the relevance of a library catalog search (2001) 0.02
    0.016195742 = product of:
      0.048587225 = sum of:
        0.048587225 = weight(_text_:citation in 6519) [ClassicSimilarity], result of:
          0.048587225 = score(doc=6519,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.20723915 = fieldWeight in 6519, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.03125 = fieldNorm(doc=6519)
      0.33333334 = coord(1/3)
    
    Abstract
    Relevance has been a difficult concept to define, let alone measure. In this paper, a simple operational definition of relevance is proposed for a Web-based library catalog: whether or not during a search session the user saves, prints, mails, or downloads a citation. If one of those actions is performed, the session is considered relevant to the user. An analysis is presented illustrating the advantages and disadvantages of this definition. With this definition and good transaction logging, it is possible to ascertain the relevance of a session. This was done for 905,970 sessions conducted with the University of California's Melvyl online catalog. Next, a methodology was developed to try to predict the relevance of a session. A number of variables were defined that characterize a session, none of which used any demographic information about the user. The values of the variables were computed for the sessions. Principal components analysis was used to extract a new set of variables out of the original set. A stratified random sampling technique was used to form ten strata such that each stratum of 90,570 sessions contained the same proportion of relevant to nonrelevant sessions. Logistic regression was used to ascertain the regression coefficients for nine of the ten strata. Then, the coefficients were used to predict the relevance of the sessions in the missing stratum. Overall, 17.85% of the sessions were determined to be relevant. The predicted number of relevant sessions for all ten strata was 11%, a 6.85% difference. The authors believe that the methodology can be further refined and the prediction improved. This methodology could also have significant application in improving user searching and also in predicting electronic commerce buying decisions without the use of personal demographic data.
  11. Borlund, P.: ¬A study of the use of simulated work task situations in interactive information retrieval evaluations : a meta-evaluation (2016) 0.02
    0.016195742 = product of:
      0.048587225 = sum of:
        0.048587225 = weight(_text_:citation in 2880) [ClassicSimilarity], result of:
          0.048587225 = score(doc=2880,freq=2.0), product of:
            0.23445003 = queryWeight, product of:
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.04999695 = queryNorm
            0.20723915 = fieldWeight in 2880, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6892867 = idf(docFreq=1104, maxDocs=44218)
              0.03125 = fieldNorm(doc=2880)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The purpose of this paper is to report a study of how the test instrument of a simulated work task situation is used in empirical evaluations of interactive information retrieval (IIR) and reported in the research literature. In particular, the author is interested to learn whether the requirements of how to employ simulated work task situations are followed, and whether these requirements call for further highlighting and refinement. Design/methodology/approach - In order to study how simulated work task situations are used, the research literature in question is identified. This is done partly via citation analysis by use of Web of Science®, and partly by systematic search of online repositories. On this basis, 67 individual publications were identified and they constitute the sample of analysis. Findings - The analysis reveals a need for clarifications of how to use simulated work task situations in IIR evaluations. In particular, with respect to the design and creation of realistic simulated work task situations. There is a lack of tailoring of the simulated work task situations to the test participants. Likewise, the requirement to include the test participants' personal information needs is neglected. Further, there is a need to add and emphasise a requirement to depict the used simulated work task situations when reporting the IIR studies. Research limitations/implications - Insight about the use of simulated work task situations has implications for test design of IIR studies and hence the knowledge base generated on the basis of such studies. Originality/value - Simulated work task situations are widely used in IIR studies, and the present study is the first comprehensive study of the intended and unintended use of this test instrument since its introduction in the late 1990s. The paper addresses the need to carefully design and tailor simulated work task situations to suit the test participants in order to obtain the intended authentic and realistic IIR under study.
  12. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.09483455 = score(doc=262,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    20.10.2000 12:22:23
  13. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.09483455 = score(doc=6418,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Online. 22(1998) no.6, S.57-58
  14. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.09483455 = score(doc=6438,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    11. 8.2001 16:22:19
  15. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.02
    0.015805759 = product of:
      0.047417276 = sum of:
        0.047417276 = product of:
          0.09483455 = sum of:
            0.09483455 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.09483455 = score(doc=5089,freq=2.0), product of:
                0.1750808 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04999695 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 18:43:54
  16. Taghva, K.: ¬The effects of noisy data on text retrieval (1994) 0.01
    0.014940838 = product of:
      0.044822514 = sum of:
        0.044822514 = product of:
          0.08964503 = sum of:
            0.08964503 = weight(_text_:reports in 7227) [ClassicSimilarity], result of:
              0.08964503 = score(doc=7227,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.39809695 = fieldWeight in 7227, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7227)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports the results of experiments on query evaluation in the presence of noisy data. In particular, an OCR-generated database and its corresponding 99.8% correct version are used to process a set of queries to determine the effect the degraded version will have on retrieval. With the set of scientific documents used in the testing, the effect is insignificant. Improves the result by applying an automatic postprocessing system designed to correct the kinds of errors generated by recognition devices.
  17. Clarke, S.J.; Willett, P.: Estimating the recall performance of Web search engines (1997) 0.01
    0.014940838 = product of:
      0.044822514 = sum of:
        0.044822514 = product of:
          0.08964503 = sum of:
            0.08964503 = weight(_text_:reports in 760) [ClassicSimilarity], result of:
              0.08964503 = score(doc=760,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.39809695 = fieldWeight in 760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0625 = fieldNorm(doc=760)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports a comparison of the retrieval effectiveness of the AltaVista, Excite and Lycos Web search engines. Describes a method for comparing the recall of the 3 sets of searches, despite the fact that they are carried out on non-identical sets of Web pages. It is thus possible, unlike in previous comparative studies of Web search engines, to consider both recall and precision when evaluating the effectiveness of search engines.
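Comparing recall across engines that index non-identical page sets is usually done with a pooled relative-recall calculation: the union of relevant documents found by any engine stands in for the unknowable full set of relevant Web pages. A sketch of that general idea, with hypothetical document IDs; the pooling rule here is illustrative, not necessarily the authors' exact procedure:

```python
def relative_recall(retrieved_relevant, pooled_relevant):
    """Fraction of the pooled relevant set that this engine retrieved."""
    return len(retrieved_relevant & pooled_relevant) / len(pooled_relevant)

# Hypothetical relevant hits for one query on three engines:
alta = {"d1", "d2", "d3"}
excite = {"d2", "d4"}
lycos = {"d1", "d5"}
pool = alta | excite | lycos  # 5 pooled relevant documents in total
for name, hits in [("AltaVista", alta), ("Excite", excite), ("Lycos", lycos)]:
    print(name, relative_recall(hits, pool))  # 0.6, 0.4, 0.4
```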
  18. Harman, D.K.: ¬The first text retrieval conference : TREC-1, 1992 (1993) 0.01
    0.014940838 = product of:
      0.044822514 = sum of:
        0.044822514 = product of:
          0.08964503 = sum of:
            0.08964503 = weight(_text_:reports in 1317) [ClassicSimilarity], result of:
              0.08964503 = score(doc=1317,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.39809695 = fieldWeight in 1317, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1317)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports on the 1st Text Retrieval Conference (TREC-1) held in Rockville, MD, 4-6 Nov. 1992. The TREC experiment is being run by the National Institute of Standards and Technology to allow information retrieval researchers to scale up from small collections of data to larger-sized experiments. Groups of researchers have been provided with text documents compressed on CD-ROM. They used experimental retrieval systems to search the text and evaluate the results.
  19. Ekmekcioglu, F.C.; Robertson, A.M.; Willett, P.: Effectiveness of query expansion in ranked-output document retrieval systems (1992) 0.01
    0.014940838 = product of:
      0.044822514 = sum of:
        0.044822514 = product of:
          0.08964503 = sum of:
            0.08964503 = weight(_text_:reports in 5689) [ClassicSimilarity], result of:
              0.08964503 = score(doc=5689,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.39809695 = fieldWeight in 5689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5689)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports an evaluation of 3 methods for the expansion of natural language queries in ranked output retrieval systems. The methods are based on term co-occurrence data, on Soundex codes, and on a string similarity measure. Searches for 110 queries in a database of 26,280 titles and abstracts suggest that there is no significant difference in retrieval effectiveness between any of these methods and unexpanded searches.
  20. Qiu, L.: Analytical searching vs. browsing in hypertext information retrieval systems (1993) 0.01
    0.013073232 = product of:
      0.039219696 = sum of:
        0.039219696 = product of:
          0.07843939 = sum of:
            0.07843939 = weight(_text_:reports in 7416) [ClassicSimilarity], result of:
              0.07843939 = score(doc=7416,freq=2.0), product of:
                0.2251839 = queryWeight, product of:
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.04999695 = queryNorm
                0.34833482 = fieldWeight in 7416, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.503953 = idf(docFreq=1329, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=7416)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Reports an experiment conducted to study the search behaviour of different user groups in a hypertext information retrieval system. A three-way analysis of variance test was conducted to study the effects of gender, search task, and search experience on search option (analytical searching versus browsing), as measured by the proportion of nodes reached through analytical searching. The search task factor influenced search option in that a general task caused more browsing and a specific task more analytical searching. Gender or search experience alone did not affect the search option. These findings are discussed in light of evaluation of existing systems and implications for future design.

Languages

  • e 75
  • d 3
  • f 1
  • fi 1

Types

  • a 73
  • s 5
  • m 3
  • el 2
  • r 1