Search (60 results, page 1 of 3)

  • theme_ss:"Retrievalstudien"
  1. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.04
    0.039468158 = product of:
      0.078936316 = sum of:
        0.05696889 = weight(_text_:social in 5001) [ClassicSimilarity], result of:
          0.05696889 = score(doc=5001,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.30839854 = fieldWeight in 5001, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5001)
        0.021967428 = product of:
          0.043934856 = sum of:
            0.043934856 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.043934856 = score(doc=5001,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword in title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
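The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch that reproduces its numbers, assuming the standard formula (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), per-term score = queryWeight × fieldWeight, scaled by the coord factors):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
    # score = queryWeight * fieldWeight
    #   queryWeight = idf * queryNorm
    #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * query_norm
    field_weight = math.sqrt(freq) * term_idf * field_norm
    return query_weight * field_weight

# "social" in doc 5001: freq=2, docFreq=2228, maxDocs=44218
social = term_score(2.0, 2228, 44218, 0.046325076, 0.0546875)
# "22" in doc 5001: freq=2, docFreq=3622, same norms
t22 = term_score(2.0, 3622, 44218, 0.046325076, 0.0546875)

# clause combination: the "22" clause carries coord(1/2) = 0.5,
# and the overall sum carries coord(2/4) = 0.5
final = (social + 0.5 * t22) * 0.5
print(social, t22, final)
```

Plugging in the parameters from the tree gives back 0.05696889, 0.043934856, and the document score 0.039468158 to within floating-point rounding.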
  2. Leininger, K.: Interindexer consistency in PsycINFO (2000) 0.03
    0.031595502 = product of:
      0.12638201 = sum of:
        0.12638201 = sum of:
          0.08872356 = weight(_text_:aspects in 2552) [ClassicSimilarity], result of:
            0.08872356 = score(doc=2552,freq=4.0), product of:
              0.20938325 = queryWeight, product of:
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046325076 = queryNorm
              0.42373765 = fieldWeight in 2552, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
          0.03765845 = weight(_text_:22 in 2552) [ClassicSimilarity], result of:
            0.03765845 = score(doc=2552,freq=2.0), product of:
              0.16222252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046325076 = queryNorm
              0.23214069 = fieldWeight in 2552, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2552)
      0.25 = coord(1/4)
    
    Abstract
    Reports results of a study to examine interindexer consistency (the degree to which indexers, when assigning terms to a chosen record, will choose the same terms to reflect that record) in the PsycINFO database using 60 records that were inadvertently processed twice between 1996 and 1998. Five aspects of interindexer consistency were analysed. Two methods were used to calculate interindexer consistency: one posited by Hooper (1965) and the other by Rollin (1981). Aspects analysed were: checktag consistency (66.24% using Hooper's calculation and 77.17% using Rollin's); major-to-all term consistency (49.31% and 62.59% respectively); overall indexing consistency (49.02% and 63.32%); classification code consistency (44.17% and 45.00%); and major-to-major term consistency (43.24% and 56.09%). The average consistency across all categories was 50.4% using Hooper's method and 60.83% using Rollin's. Although comparison with previous studies is difficult due to methodological variations in the overall study of indexing consistency and the specific characteristics of the database, results generally support previous findings when trends and similar studies are analysed.
    Date
    9. 2.1997 18:44:22
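The record does not define Hooper's and Rollin's measures; a common formulation (an assumption here, not taken from the abstract) compares the terms the two indexers agree on against the terms unique to each:

```python
def hooper(common: int, only_a: int, only_b: int) -> float:
    # Hooper (1965): agreed terms / (agreed + unique to either indexer)
    return common / (common + only_a + only_b)

def rollin(common: int, only_a: int, only_b: int) -> float:
    # Rollin (1981): 2 * agreed terms / (total terms assigned by both)
    return 2 * common / ((common + only_a) + (common + only_b))

# hypothetical record: indexers agree on 4 terms, each adds 2 unique ones
print(hooper(4, 2, 2))  # 4/8
print(rollin(4, 2, 2))  # 8/12
```

Under these definitions Rollin's value is always at least as high as Hooper's, which matches the consistently higher Rollin percentages reported in each category above.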
  3. Blair, D.C.: STAIRS Redux : thoughts on the STAIRS evaluation, ten years after (1996) 0.03
    0.029282015 = product of:
      0.11712806 = sum of:
        0.11712806 = sum of:
          0.07319321 = weight(_text_:aspects in 3002) [ClassicSimilarity], result of:
            0.07319321 = score(doc=3002,freq=2.0), product of:
              0.20938325 = queryWeight, product of:
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046325076 = queryNorm
              0.3495657 = fieldWeight in 3002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3002)
          0.043934856 = weight(_text_:22 in 3002) [ClassicSimilarity], result of:
            0.043934856 = score(doc=3002,freq=2.0), product of:
              0.16222252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046325076 = queryNorm
              0.2708308 = fieldWeight in 3002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3002)
      0.25 = coord(1/4)
    
    Abstract
    The test of retrieval effectiveness performed on IBM's STAIRS and reported in 'Communications of the ACM' ten years ago continues to be cited frequently in the information retrieval literature. The reasons for the study's continuing pertinence to today's research are discussed, and the political, legal, and commercial aspects of the study are presented. In addition, the method of calculating recall that was used in the STAIRS study is discussed in some detail, especially how it reduces the five major types of uncertainty in recall estimation. It is also suggested that this method of recall estimation may serve as the basis for recall estimates that are truly comparable between systems
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.4-22
  4. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.02
    0.020915726 = product of:
      0.083662905 = sum of:
        0.083662905 = sum of:
          0.052280862 = weight(_text_:aspects in 2339) [ClassicSimilarity], result of:
            0.052280862 = score(doc=2339,freq=2.0), product of:
              0.20938325 = queryWeight, product of:
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046325076 = queryNorm
              0.2496898 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
          0.031382043 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
            0.031382043 = score(doc=2339,freq=2.0), product of:
              0.16222252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046325076 = queryNorm
              0.19345059 = fieldWeight in 2339, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2339)
      0.25 = coord(1/4)
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems
    Date
    22. 9.1997 19:16:05
  5. King, D.W.: Blazing new trails : in celebration of an audacious career (2000) 0.02
    0.020915726 = product of:
      0.083662905 = sum of:
        0.083662905 = sum of:
          0.052280862 = weight(_text_:aspects in 1184) [ClassicSimilarity], result of:
            0.052280862 = score(doc=1184,freq=2.0), product of:
              0.20938325 = queryWeight, product of:
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.046325076 = queryNorm
              0.2496898 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.5198684 = idf(docFreq=1308, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
          0.031382043 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.031382043 = score(doc=1184,freq=2.0), product of:
              0.16222252 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046325076 = queryNorm
              0.19345059 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1184)
      0.25 = coord(1/4)
    
    Abstract
    I had the distinct pleasure of working with Pauline Atherton (Cochrane) during the 1960s, a period that can be considered the heyday of automated information system design and evaluation in the United States. I first met Pauline at the 1962 American Documentation Institute annual meeting in North Hollywood, Florida. My company, Westat Research Analysts, had recently been awarded a contract by the U.S. Patent Office to provide statistical support for the design of experiments with automated information retrieval systems. I was asked to attend the meeting to learn more about information retrieval systems and to begin informing others of U.S. Patent Office activities in this area. At one session, Pauline and I questioned a speaker about the research that he presented. Pauline's questions concerned the logic of their approach, and mine the statistical aspects. After the session, she came over to talk to me and we began a professional and personal friendship that continues to this day. During the 1960s, Pauline was involved in several important information-retrieval projects including a series of studies for the American Institute of Physics, a dissertation examining the relevance of retrieved documents, and development and evaluation of an online information-retrieval system. I had the opportunity to work with Pauline and her colleagues on four of those projects and will briefly describe her work in the 1960s.
    Date
    22. 9.1997 19:16:05
  6. Madelung, H.-O.: Subject searching in the social sciences : a comparison of PRECIS and KWIC indexes to newspaper articles (1982) 0.02
    0.020141546 = product of:
      0.08056618 = sum of:
        0.08056618 = weight(_text_:social in 5517) [ClassicSimilarity], result of:
          0.08056618 = score(doc=5517,freq=4.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.43614143 = fieldWeight in 5517, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5517)
      0.25 = coord(1/4)
    
    Abstract
    89 articles from a small Danish left-wing newspaper were indexed by PRECIS and KWIC. The articles cover a wide range of social science subjects. Controlled test searches in both indexes were carried out by 20 students of library science. The results obtained from this small-scale retrieval test were evaluated by a chi-square test. The PRECIS index led to more correct answers and fewer wrong answers than the KWIC index, i.e. it had both better recall and greater precision. Furthermore, the students were more confident in their judgement of the relevance of retrieved articles in the PRECIS index than in the KWIC index; and they generally favoured the PRECIS index in the subjective judgement they were asked to make
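A chi-square test of the kind mentioned above can be sketched for a 2x2 table of correct vs. wrong answers per index; the counts below are hypothetical, not taken from the study:

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    # 2x2 table [[a, b], [c, d]]; rows = indexes (e.g. PRECIS, KWIC),
    # columns = correct vs. wrong answers; no Yates' correction
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# hypothetical counts: PRECIS 70 correct / 30 wrong, KWIC 50 / 50
stat = chi_square_2x2(70, 30, 50, 50)
# reject independence at p = 0.05 if stat > 3.841 (chi-square, df = 1)
print(stat)
```

The shortcut formula n(ad-bc)^2 / ((a+b)(c+d)(a+c)(b+d)) is algebraically equivalent to summing (observed - expected)^2 / expected over the four cells.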
  7. Schabas, A.H.: Postcoordinate retrieval : a comparison of two retrieval languages (1982) 0.02
    0.017264182 = product of:
      0.06905673 = sum of:
        0.06905673 = weight(_text_:social in 1202) [ClassicSimilarity], result of:
          0.06905673 = score(doc=1202,freq=4.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.3738355 = fieldWeight in 1202, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046875 = fieldNorm(doc=1202)
      0.25 = coord(1/4)
    
    Abstract
    This article reports on a comparison of the postcoordinate retrieval effectiveness of two indexing languages: LCSH and PRECIS. The effect of augmenting each with title words was also studied. The database for the study was over 15,000 UK MARC records. Users returned 5,326 relevance judgements for citations retrieved for 61 SDI profiles, representing a wide variety of subjects. Results are reported in terms of precision and relative recall. Pure/applied sciences data and social science data were analyzed separately. Cochran's significance tests for ratios were used to interpret the findings. Recall emerged as the more important measure discriminating the behavior of the two languages. Addition of title words was found to improve recall of both indexing languages significantly. A direct relationship was observed between recall and exhaustivity. For the social sciences searches, recalls from PRECIS alone and from PRECIS with title words were significantly higher than those from LCSH alone and from LCSH with title words, respectively. Corresponding comparisons for the pure/applied sciences searches revealed no significant differences
  8. Keen, E.M.: Aspects of computer-based indexing languages (1991) 0.01
    0.014787261 = product of:
      0.059149045 = sum of:
        0.059149045 = product of:
          0.11829809 = sum of:
            0.11829809 = weight(_text_:aspects in 5072) [ClassicSimilarity], result of:
              0.11829809 = score(doc=5072,freq=4.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.56498355 = fieldWeight in 5072, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5072)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Comments on the relative rarity of research articles on theoretical aspects of subject indexing in computerised retrieval systems and the predominance of articles on software packages and hardware. Concludes that controlled indexing still has a future but points to major differences from the past
  9. Voorbij, H.: Title keywords and subject descriptors : a comparison of subject search entries of books in the humanities and social sciences (1998) 0.01
    0.014386819 = product of:
      0.057547275 = sum of:
        0.057547275 = weight(_text_:social in 4721) [ClassicSimilarity], result of:
          0.057547275 = score(doc=4721,freq=4.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.3115296 = fieldWeight in 4721, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4721)
      0.25 = coord(1/4)
    
    Abstract
    In order to compare the value of subject descriptors and title keywords as entries to subject searches, two studies were carried out. Both studies concentrated on monographs in the humanities and social sciences, held by the online public access catalogue of the National Library of the Netherlands. In the first study, a comparison was made by subject librarians between the subject descriptors and the title keywords of 475 records. They could express their opinion on a scale from 1 (descriptor is exactly or almost the same as word in title) to 7 (descriptor does not appear in title at all). It was concluded that 37 per cent of the records are considerably enhanced by a subject descriptor, and 49 per cent slightly or considerably enhanced. In the second study, subject librarians performed subject searches using title keywords and subject descriptors on the same topic. The relative recall amounted to 48 per cent and 86 per cent respectively. Failure analysis revealed the reasons why so many records that were found by subject descriptors were not found by title keywords. First, although completely meaningless titles hardly ever appear, the title of a publication does not always offer sufficient clues for title keyword searching. In those cases, descriptors may enhance the record of a publication. A second and even more important task of subject descriptors is controlling the vocabulary. Many relevant titles cannot be retrieved by title keyword searching because of the wide diversity of ways of expressing a topic. Descriptors take away the burden of vocabulary control from the user.
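Relative recall, as used above, is typically computed against the pool of relevant records found by any of the compared methods. A sketch with hypothetical record IDs chosen to reproduce the 48 and 86 per cent figures:

```python
def relative_recall(found: set, pool: set) -> float:
    # share of all known relevant records (the union retrieved
    # by any method) that this one method retrieved
    return len(found & pool) / len(pool)

# hypothetical pool of 50 relevant monographs, ids 0..49
pool = set(range(50))
by_title = set(range(24))          # title keywords retrieve 24 of them
by_descriptor = set(range(7, 50))  # descriptors retrieve 43 of them
print(relative_recall(by_title, pool))       # 24/50
print(relative_recall(by_descriptor, pool))  # 43/50
```

Because the denominator is the pooled set rather than all relevant records in the collection, relative recall can overstate absolute recall when all methods miss the same documents.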
  10. King, D.W.; Bryant, E.C.: ¬The evaluation of information services and products (1971) 0.01
    0.0130702155 = product of:
      0.052280862 = sum of:
        0.052280862 = product of:
          0.104561724 = sum of:
            0.104561724 = weight(_text_:aspects in 4157) [ClassicSimilarity], result of:
              0.104561724 = score(doc=4157,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.4993796 = fieldWeight in 4157, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4157)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Covers the evaluative and control aspects of: classification and indexing processes and languages; document screening processes; composition, reproduction, acquisition, storage, and presentation; user-system interfaces. Also contains brief and lucid primers on user surveys, statistics, sampling methods, and experimental design.
  11. McCain, K.W.: Descriptor and citation retrieval in the medical behavioral sciences literature : retrieval overlaps and novelty distribution (1989) 0.01
    0.01220762 = product of:
      0.04883048 = sum of:
        0.04883048 = weight(_text_:social in 2290) [ClassicSimilarity], result of:
          0.04883048 = score(doc=2290,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.26434162 = fieldWeight in 2290, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046875 = fieldNorm(doc=2290)
      0.25 = coord(1/4)
    
    Abstract
    Search results for nine topics in the medical behavioral sciences are reanalyzed to compare the overall performance of descriptor and citation search strategies in identifying relevant and novel documents. Overlap percentages between an aggregate "descriptor-based" database (MEDLINE, EXCERPTA MEDICA, PSYCINFO) and an aggregate "citation-based" database (SCISEARCH, SOCIAL SCISEARCH) ranged from 1% to 26%, with a median overlap of 8% relevant retrievals found using both search strategies. For seven topics in which both descriptor and citation strategies produced reasonably substantial retrievals, two patterns of search performance and novelty distribution were observed: (1) where descriptor and citation retrieval showed little overlap, novelty retrieval percentages differed by 17-23% between the two strategies; (2) topics with a relatively high percentage retrieval overlap showed little difference (1-4%) in descriptor and citation novelty retrieval percentages. These results reflect the varying partial congruence of two literature networks and represent two different types of subject relevance
  12. Mokros, H.B.; Mullins, L.S.; Saracevic, T.: Practice and personhood in professional interaction : social identities and information needs (1995) 0.01
    0.01220762 = product of:
      0.04883048 = sum of:
        0.04883048 = weight(_text_:social in 4080) [ClassicSimilarity], result of:
          0.04883048 = score(doc=4080,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.26434162 = fieldWeight in 4080, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046875 = fieldNorm(doc=4080)
      0.25 = coord(1/4)
    
  13. Naderi, H.; Rumpler, B.: PERCIRS: a system to combine personalized and collaborative information retrieval (2010) 0.01
    0.011509455 = product of:
      0.04603782 = sum of:
        0.04603782 = weight(_text_:social in 3960) [ClassicSimilarity], result of:
          0.04603782 = score(doc=3960,freq=4.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.24922368 = fieldWeight in 3960, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.03125 = fieldNorm(doc=3960)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper aims to discuss and test the claim that utilization of personalization techniques can be valuable to improve the efficiency of collaborative information retrieval (CIR) systems. Design/methodology/approach - A new personalized CIR system, called PERCIRS, is presented based on the user profile similarity calculation (UPSC) formulas. To this aim, the paper proposes several UPSC formulas as well as two techniques to evaluate them. As the proposed CIR system is personalized, it could not be evaluated by Cranfield-like evaluation techniques (e.g. TREC). Hence, this paper proposes a new user-centric mechanism, which enables PERCIRS to be evaluated. This mechanism is generic and can be used to evaluate any other personalized IR system. Findings - The results show that among the UPSC formulas proposed in this paper, the (query-document)-graph based formula is the most effective. After integrating this formula into PERCIRS and comparing it with nine other IR systems, it is concluded that the results of the system are better than those of the other IR systems. In addition, the paper shows that the complexity of the system is less than the complexity of the other CIR systems. Research limitations/implications - The system asks the users to explicitly rank the returned documents, while explicit ranking is still not widespread enough. However, the authors believe that users should actively participate in the IR process in order to aptly satisfy their information needs. Originality/value - The value of this paper lies in combining collaborative and personalized IR, as well as in introducing a mechanism which enables the personalized IR system to be evaluated. The proposed evaluation mechanism is very valuable for developers of personalized IR systems. The paper also introduces some significant user profile similarity calculation formulas, and two techniques to evaluate them. These formulas can also be used to find the user's community in social networks.
    Theme
    Social tagging
  14. Fuhr, N.; Niewelt, B.: ¬Ein Retrievaltest mit automatisch indexierten Dokumenten (1984) 0.01
    0.010983714 = product of:
      0.043934856 = sum of:
        0.043934856 = product of:
          0.08786971 = sum of:
            0.08786971 = weight(_text_:22 in 262) [ClassicSimilarity], result of:
              0.08786971 = score(doc=262,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.5416616 = fieldWeight in 262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=262)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20.10.2000 12:22:23
  15. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.01
    0.010983714 = product of:
      0.043934856 = sum of:
        0.043934856 = product of:
          0.08786971 = sum of:
            0.08786971 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
              0.08786971 = score(doc=6418,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.5416616 = fieldWeight in 6418, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6418)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Online. 22(1998) no.6, S.57-58
  16. Voorhees, E.M.; Harman, D.: Overview of the Sixth Text REtrieval Conference (TREC-6) (2000) 0.01
    0.010983714 = product of:
      0.043934856 = sum of:
        0.043934856 = product of:
          0.08786971 = sum of:
            0.08786971 = weight(_text_:22 in 6438) [ClassicSimilarity], result of:
              0.08786971 = score(doc=6438,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.5416616 = fieldWeight in 6438, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6438)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    11. 8.2001 16:22:19
  17. Dalrymple, P.W.: Retrieval by reformulation in two library catalogs : toward a cognitive model of searching behavior (1990) 0.01
    0.010983714 = product of:
      0.043934856 = sum of:
        0.043934856 = product of:
          0.08786971 = sum of:
            0.08786971 = weight(_text_:22 in 5089) [ClassicSimilarity], result of:
              0.08786971 = score(doc=5089,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.5416616 = fieldWeight in 5089, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5089)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:43:54
  18. Saracevic, T.: Individual differences in organizing, searching and retrieving information (1991) 0.01
    0.010456172 = product of:
      0.041824687 = sum of:
        0.041824687 = product of:
          0.083649375 = sum of:
            0.083649375 = weight(_text_:aspects in 3692) [ClassicSimilarity], result of:
              0.083649375 = score(doc=3692,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.39950368 = fieldWeight in 3692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3692)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Synthesises the major findings of several decades of research into the magnitude of individual deffirences in information retrieval related tasks and suggests implications for practice and design. The study is related to a series of studies of human aspects and cognitive decision making in information seeking, searching and retrieving
  19. Keen, E.M.: Some aspects of proximity searching in text retrieval systems (1992) 0.01
    0.010456172 = product of:
      0.041824687 = sum of:
        0.041824687 = product of:
          0.083649375 = sum of:
            0.083649375 = weight(_text_:aspects in 6190) [ClassicSimilarity], result of:
              0.083649375 = score(doc=6190,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.39950368 = fieldWeight in 6190, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6190)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
  20. Schirrmeister, N.-P.; Keil, S.: Aufbau einer Infrastruktur für Information Retrieval-Evaluationen (2012) 0.01
    0.010456172 = product of:
      0.041824687 = sum of:
        0.041824687 = product of:
          0.083649375 = sum of:
            0.083649375 = weight(_text_:aspects in 3097) [ClassicSimilarity], result of:
              0.083649375 = score(doc=3097,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.39950368 = fieldWeight in 3097, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3097)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The project "Aufbau einer Infrastruktur für Information Retrieval-Evaluationen" (AIIRE) provides a software infrastructure to support information retrieval evaluations (IR evaluations). The infrastructure is based on a tool kit developed at GESIS within the DFG project IRM. The goal is to offer a system that can be used for research and teaching on IR evaluations at the Fachbereich Media. This paper describes some aspects of a project called "Aufbau einer Infrastruktur für Information Retrieval-Evaluationen" (AIIRE). Its goal is to build a software infrastructure which supports the evaluation of information retrieval algorithms.

Languages

  • e 54
  • d 4
  • f 1

Types

  • a 54
  • m 4
  • s 4
  • el 1