Search (399 results, page 1 of 20)

  • Active filter: theme_ss:"Retrievalstudien"
  1. Schabas, A.H.: Postcoordinate retrieval : a comparison of two retrieval languages (1982) 0.06
    0.05678359 = product of:
      0.15615487 = sum of:
        0.06316024 = weight(_text_:higher in 1202) [ClassicSimilarity], result of:
          0.06316024 = score(doc=1202,freq=2.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.34821182 = fieldWeight in 1202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.046875 = fieldNorm(doc=1202)
        0.064219736 = weight(_text_:effect in 1202) [ClassicSimilarity], result of:
          0.064219736 = score(doc=1202,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.35112026 = fieldWeight in 1202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=1202)
        0.017701415 = weight(_text_:of in 1202) [ClassicSimilarity], result of:
          0.017701415 = score(doc=1202,freq=20.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.32781258 = fieldWeight in 1202, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1202)
        0.011073467 = weight(_text_:on in 1202) [ClassicSimilarity], result of:
          0.011073467 = score(doc=1202,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.14580199 = fieldWeight in 1202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=1202)
      0.36363637 = coord(4/11)
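
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching term contributes queryWeight · fieldWeight, where queryWeight = idf · queryNorm and fieldWeight = sqrt(termFreq) · idf · fieldNorm, and the sum over matching terms is scaled by the coordination factor coord (matching terms / query terms). Below is a minimal Python sketch that re-derives the figures for entry 1 from the numbers printed above; the variable and function names are mine, not part of the system's output.

```python
from math import sqrt, isclose

# Constants copied from the "explain" output above (entry 1, doc 1202).
QUERY_NORM = 0.034531306   # queryNorm
FIELD_NORM = 0.046875      # fieldNorm(doc=1202)

def term_score(idf: float, freq: float) -> float:
    """One term's contribution: queryWeight * fieldWeight (ClassicSimilarity)."""
    query_weight = idf * QUERY_NORM                  # idf(t) * queryNorm
    field_weight = sqrt(freq) * idf * FIELD_NORM     # tf * idf * fieldNorm, tf = sqrt(termFreq)
    return query_weight * field_weight

# idf and raw term frequency for each matching query term, as printed above.
terms = {
    "higher": (5.252756, 2.0),
    "effect": (5.29663, 2.0),
    "of":     (1.5637573, 20.0),
    "on":     (2.199415, 2.0),
}

partial = sum(term_score(idf, freq) for idf, freq in terms.values())
final = partial * 4 / 11   # coord(4/11): 4 of the 11 query terms matched

assert isclose(partial, 0.15615487, rel_tol=1e-5)
assert isclose(final, 0.05678359, rel_tol=1e-5)
print(f"sum of term scores = {partial:.8f}, final score = {final:.8f}")
```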
    
    Abstract
    This article reports on a comparison of the postcoordinate retrieval effectiveness of two indexing languages: LCSH and PRECIS. The effect of augmenting each with title words was also studied. The database for the study comprised over 15,000 UK MARC records. Users returned 5,326 relevance judgements for citations retrieved for 61 SDI profiles, representing a wide variety of subjects. Results are reported in terms of precision and relative recall. Pure/applied sciences data and social sciences data were analyzed separately. Cochran's significance tests for ratios were used to interpret the findings. Recall emerged as the more important measure for discriminating the behavior of the two languages. Addition of title words was found to improve the recall of both indexing languages significantly. A direct relationship was observed between recall and exhaustivity. For the social sciences searches, recall from PRECIS alone and from PRECIS with title words was significantly higher than that from LCSH alone and from LCSH with title words, respectively. Corresponding comparisons for the pure/applied sciences searches revealed no significant differences.
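    For reference, the two measures reported in this abstract are conventionally defined as below; since absolute recall cannot be computed without exhaustive relevance judgements, relative recall is normally taken against the pool of relevant items found by all of the compared searches (the exact variant used by Schabas is an assumption here).

```latex
\[
\text{precision} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{retrieved}\,|},
\qquad
\text{relative recall} = \frac{|\,\text{relevant retrieved by this search}\,|}{|\,\text{relevant retrieved by all compared searches}\,|}
\]
```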
    Source
    Journal of the American Society for Information Science. 33(1982), S.32-37
  2. VanOot, J.G.: Links and roles in coordinate indexing and searching : an economy study of their use and an evaluation of their effect on relevance and recall (1964) 0.05
    0.0540837 = product of:
      0.1983069 = sum of:
        0.14984606 = weight(_text_:effect in 1896) [ClassicSimilarity], result of:
          0.14984606 = score(doc=1896,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.8192806 = fieldWeight in 1896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.109375 = fieldNorm(doc=1896)
        0.022622751 = weight(_text_:of in 1896) [ClassicSimilarity], result of:
          0.022622751 = score(doc=1896,freq=6.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.41895083 = fieldWeight in 1896, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.109375 = fieldNorm(doc=1896)
        0.025838088 = weight(_text_:on in 1896) [ClassicSimilarity], result of:
          0.025838088 = score(doc=1896,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.34020463 = fieldWeight in 1896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.109375 = fieldNorm(doc=1896)
      0.27272728 = coord(3/11)
    
    Imprint
    Chicago, Ill. : American Chemical Society, Division of Chemical Literature
  3. Rijsbergen, C.J. van: ¬A test for the separation of relevant and non-relevant documents in experimental retrieval collections (1973) 0.05
    0.049379658 = product of:
      0.13579406 = sum of:
        0.08562632 = weight(_text_:effect in 5002) [ClassicSimilarity], result of:
          0.08562632 = score(doc=5002,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.46816036 = fieldWeight in 5002, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
        0.016689055 = weight(_text_:of in 5002) [ClassicSimilarity], result of:
          0.016689055 = score(doc=5002,freq=10.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.3090647 = fieldWeight in 5002, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
        0.014764623 = weight(_text_:on in 5002) [ClassicSimilarity], result of:
          0.014764623 = score(doc=5002,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.19440265 = fieldWeight in 5002, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=5002)
        0.018714061 = product of:
          0.037428122 = sum of:
            0.037428122 = weight(_text_:22 in 5002) [ClassicSimilarity], result of:
              0.037428122 = score(doc=5002,freq=2.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.30952093 = fieldWeight in 5002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5002)
          0.5 = coord(1/2)
      0.36363637 = coord(4/11)
    
    Abstract
    Many retrieval experiments are intended to discover ways of improving performance, taking the results obtained with some particular technique as a baseline. The fact that substantial alterations to a system often have little or no effect on particular collections is puzzling. This may be due to the initially poor separation of relevant and non-relevant documents. The paper presents a procedure for characterizing this separation for a collection, which can be used to show whether proposed modifications of the base system are likely to be useful.
    Date
    19. 3.1996 11:22:12
    Source
    Journal of documentation. 29(1973) no.3, S.251-257
  4. Taghva, K.: ¬The effects of noisy data on text retrieval (1994) 0.05
    0.04683636 = product of:
      0.17173332 = sum of:
        0.12109391 = weight(_text_:effect in 7227) [ClassicSimilarity], result of:
          0.12109391 = score(doc=7227,freq=4.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.66207874 = fieldWeight in 7227, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0625 = fieldNorm(doc=7227)
        0.02111017 = weight(_text_:of in 7227) [ClassicSimilarity], result of:
          0.02111017 = score(doc=7227,freq=16.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.39093933 = fieldWeight in 7227, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=7227)
        0.029529246 = weight(_text_:on in 7227) [ClassicSimilarity], result of:
          0.029529246 = score(doc=7227,freq=8.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.3888053 = fieldWeight in 7227, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=7227)
      0.27272728 = coord(3/11)
    
    Abstract
    Reports the results of experiments on query evaluation in the presence of noisy data: an OCR-generated database and its corresponding 99.8% correct version are used to process a set of queries to determine the effect the degraded version will have on retrieval. With the set of scientific documents used in the testing, the effect is insignificant. Improves the result by applying an automatic post-processing system designed to correct the kinds of errors generated by recognition devices.
    Source
    Journal of the American Society for Information Science. 45(1994) no.1, S.50-58
  5. Munkelt, J.; Schaer, P.; Lepsky, K.: Towards an IR test collection for the German National Library (2018) 0.04
    0.035918996 = product of:
      0.13170297 = sum of:
        0.014810067 = weight(_text_:of in 4311) [ClassicSimilarity], result of:
          0.014810067 = score(doc=4311,freq=14.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2742677 = fieldWeight in 4311, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4311)
        0.105819434 = weight(_text_:innovations in 4311) [ClassicSimilarity], result of:
          0.105819434 = score(doc=4311,freq=2.0), product of:
            0.23478 = queryWeight, product of:
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.034531306 = queryNorm
            0.45071742 = fieldWeight in 4311, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.7990475 = idf(docFreq=133, maxDocs=44218)
              0.046875 = fieldNorm(doc=4311)
        0.011073467 = weight(_text_:on in 4311) [ClassicSimilarity], result of:
          0.011073467 = score(doc=4311,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.14580199 = fieldWeight in 4311, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=4311)
      0.27272728 = coord(3/11)
    
    Abstract
    Automatic content indexing is one of the innovations that are increasingly changing the way libraries work. In theory, it promises a cataloguing service that would hardly be possible with humans in terms of speed, quantity and perhaps quality. The German National Library (DNB) has also recognised this potential and is increasingly relying on the automatic indexing of its catalogue content. The DNB took a major step in this direction in 2017, which was announced in two papers. The announcement was rather restrained, but the content of the papers is all the more explosive for the library community: since September 2017, the DNB has discontinued the intellectual indexing of series B and H and has switched to an automatic process for these series. The subject indexing of online publications (series O) has been purely automatic since 2010; from September 2017, monographs and periodicals published outside the publishing industry and university publications will no longer be indexed by people. This raises the question: what is the quality of the automatic indexing compared to the manual work, or, in other words, to what degree can automatic indexing replace people without a significant drop in quality?
  6. Keen, E.M.: Some aspects of proximity searching in text retrieval systems (1992) 0.03
    0.03480459 = product of:
      0.12761682 = sum of:
        0.08562632 = weight(_text_:effect in 6190) [ClassicSimilarity], result of:
          0.08562632 = score(doc=6190,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.46816036 = fieldWeight in 6190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0625 = fieldNorm(doc=6190)
        0.02111017 = weight(_text_:of in 6190) [ClassicSimilarity], result of:
          0.02111017 = score(doc=6190,freq=16.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.39093933 = fieldWeight in 6190, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=6190)
        0.02088033 = weight(_text_:on in 6190) [ClassicSimilarity], result of:
          0.02088033 = score(doc=6190,freq=4.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.27492687 = fieldWeight in 6190, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=6190)
      0.27272728 = coord(3/11)
    
    Abstract
    Describes and evaluates the proximity search facilities in external online systems and in-house retrieval software. Discusses and illustrates capabilities, syntax and circumstances of use. Presents measurements of the overheads required by proximity for storage, record input time and search time. The search-strategy narrowing effect of proximity is illustrated by recall and precision test results. Usage and problems lead to a number of design ideas for better implementation: some based on existing Boolean strategies, one on the use of weighted proximity to automatically produce ranked output. A comparison of Boolean, quorum and proximate term-pair distances is included.
    Source
    Journal of information science. 18(1992), S.89-98
  7. Hansen, P.; Karlgren, J.: Effects of foreign language and task scenario on relevance assessment (2005) 0.03
    0.034289315 = product of:
      0.12572749 = sum of:
        0.09269322 = weight(_text_:effect in 4393) [ClassicSimilarity], result of:
          0.09269322 = score(doc=4393,freq=6.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.5067985 = fieldWeight in 4393, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4393)
        0.010430659 = weight(_text_:of in 4393) [ClassicSimilarity], result of:
          0.010430659 = score(doc=4393,freq=10.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.19316542 = fieldWeight in 4393, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4393)
        0.02260362 = weight(_text_:on in 4393) [ClassicSimilarity], result of:
          0.02260362 = score(doc=4393,freq=12.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.29761705 = fieldWeight in 4393, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4393)
      0.27272728 = coord(3/11)
    
    Abstract
    Purpose - This paper aims to investigate how readers assess the relevance of retrieved documents in a foreign language they know well compared with their native language, and whether work-task scenario descriptions have an effect on the assessment process. Design/methodology/approach - Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first-language speakers, fluent in English, were given simulated information-seeking scenarios and presented with retrieval results in both languages. Twenty-eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two-level work-task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process. Findings - Relevance assessment takes longer in a foreign language than in the user's first language. The quality of assessments, by comparison with pre-assessed results, is inferior to that of assessments made in the users' first language. Work-task scenario descriptions had an effect on the assessment process, both by measured access time and by self-report by subjects. However, no effects were detectable in the results of traditional relevance ranking. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects. Originality/value - An extended two-level work-task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.
    Source
    Journal of documentation. 61(2005) no.5, S.623-639
  8. Belkin, N.J.: ¬An overview of results from Rutgers' investigations of interactive information retrieval (1998) 0.03
    0.033854384 = product of:
      0.09309955 = sum of:
        0.053516448 = weight(_text_:effect in 2339) [ClassicSimilarity], result of:
          0.053516448 = score(doc=2339,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.2926002 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.01865893 = weight(_text_:of in 2339) [ClassicSimilarity], result of:
          0.01865893 = score(doc=2339,freq=32.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.34554482 = fieldWeight in 2339, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.009227889 = weight(_text_:on in 2339) [ClassicSimilarity], result of:
          0.009227889 = score(doc=2339,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.121501654 = fieldWeight in 2339, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2339)
        0.011696288 = product of:
          0.023392577 = sum of:
            0.023392577 = weight(_text_:22 in 2339) [ClassicSimilarity], result of:
              0.023392577 = score(doc=2339,freq=2.0), product of:
                0.12092275 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034531306 = queryNorm
                0.19345059 = fieldWeight in 2339, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2339)
          0.5 = coord(1/2)
      0.36363637 = coord(4/11)
    
    Abstract
    Over the last 4 years, the Information Interaction Laboratory at Rutgers' School of Communication, Information and Library Studies has performed a series of investigations concerned with various aspects of people's interactions with advanced information retrieval (IR) systems. We have been especially concerned with understanding not just what people do, and why, and with what effect, but also with what they would like to do, and how they attempt to accomplish it, and with what difficulties. These investigations have led to some quite interesting conclusions about the nature and structure of people's interactions with information, about support for cooperative human-computer interaction in query reformulation, and about the value of visualization of search results for supporting various forms of interaction with information. In this discussion, I give an overview of the research program and its projects, present representative results from the projects, and discuss some implications of these results for support of subject searching in information retrieval systems.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  9. Sullivan, M.V.; Borgman, C.L.: Bibliographic searching by end-users and intermediaries : front-end software vs native DIALOG commands (1988) 0.03
    0.033005092 = product of:
      0.12101866 = sum of:
        0.08932207 = weight(_text_:higher in 3560) [ClassicSimilarity], result of:
          0.08932207 = score(doc=3560,freq=4.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.4924459 = fieldWeight in 3560, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.046875 = fieldNorm(doc=3560)
        0.012516791 = weight(_text_:of in 3560) [ClassicSimilarity], result of:
          0.012516791 = score(doc=3560,freq=10.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.23179851 = fieldWeight in 3560, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3560)
        0.01917981 = weight(_text_:on in 3560) [ClassicSimilarity], result of:
          0.01917981 = score(doc=3560,freq=6.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.25253648 = fieldWeight in 3560, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=3560)
      0.27272728 = coord(3/11)
    
    Abstract
    40 doctoral students were trained to search INSPEC or ERIC on DIALOG using either the Sci-Mate Menu or native commands. In comparison with 20 control subjects for whom a free search was performed by an intermediary, the experiment subjects were no less satisfied with their retrievals, which were fewer in number but higher in precision than the retrievals produced by the intermediaries. Use of the menu interface did not affect quality of retrieval or user satisfaction, although subjects instructed to use native commands required less training time and interacted more with the data bases than did subjects trained on the Sci-Mate Menu. INSPEC subjects placed a higher monetary value on their searches than did ERIC subjects, indicated that they would make more frequent use of data bases in the future, and interacted more with the data base.
    Source
    ASIS '88. Information Technology: planning for the next fifty years. Proceedings of the 51st annual meeting of the American Society for Information Science, Atlanta, Georgia, 23-27.10.1988. Vol.25. Ed. by C.L. Borgman and E.Y.H. Pai
  10. Voorhees, E.M.: On test collections for adaptive information retrieval (2008) 0.03
    0.032852538 = product of:
      0.120459296 = sum of:
        0.090820424 = weight(_text_:effect in 2444) [ClassicSimilarity], result of:
          0.090820424 = score(doc=2444,freq=4.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.49655905 = fieldWeight in 2444, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=2444)
        0.018565401 = weight(_text_:of in 2444) [ClassicSimilarity], result of:
          0.018565401 = score(doc=2444,freq=22.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.34381276 = fieldWeight in 2444, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2444)
        0.011073467 = weight(_text_:on in 2444) [ClassicSimilarity], result of:
          0.011073467 = score(doc=2444,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.14580199 = fieldWeight in 2444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2444)
      0.27272728 = coord(3/11)
    
    Abstract
    Traditional Cranfield test collections represent an abstraction of a retrieval task that Sparck Jones calls the "core competency" of retrieval: a task that is necessary, but not sufficient, for user retrieval tasks. The abstraction facilitates research by controlling for (some) sources of variability, thus increasing the power of experiments that compare system effectiveness while reducing their cost. However, even within the highly-abstracted case of the Cranfield paradigm, meta-analysis demonstrates that the user/topic effect is greater than the system effect, so experiments must include a relatively large number of topics to distinguish systems' effectiveness. The evidence further suggests that changing the abstraction slightly to include just a bit more characterization of the user will result in a dramatic loss of power or increase in cost of retrieval experiments. Defining a new, feasible abstraction for supporting adaptive IR research will require winnowing the list of all possible factors that can affect retrieval behavior to a minimum number of essential factors.
  11. Dimitroff, A.; Wolfram, D.; Volz, A.: Affective response and retrieval performance : analysis of contributing factors (1996) 0.03
    0.032369163 = product of:
      0.11868693 = sum of:
        0.090820424 = weight(_text_:effect in 164) [ClassicSimilarity], result of:
          0.090820424 = score(doc=164,freq=4.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.49655905 = fieldWeight in 164, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=164)
        0.016793035 = weight(_text_:of in 164) [ClassicSimilarity], result of:
          0.016793035 = score(doc=164,freq=18.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.3109903 = fieldWeight in 164, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=164)
        0.011073467 = weight(_text_:on in 164) [ClassicSimilarity], result of:
          0.011073467 = score(doc=164,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.14580199 = fieldWeight in 164, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=164)
      0.27272728 = coord(3/11)
    
    Abstract
    Describes a study which investigated the affective response of 83 subjects to 2 versions of a hypertext-based bibliographic retrieval system. The objective of the study was to determine if subjects preferred searching a hypertext information retrieval (IR) system via traditional bibliographic links or via an enhanced set of linkages between structured records. The study also examined the utility of using factor analysis to explore subjects' affective responses to searching the 2 hypertext-based IR systems; explored the effect of experience on search outcome; and compared the effect of different types of linkages within the hypertext system. Findings reveal a complex relationship between system and user that is sometimes contradictory. Searchers found the systems to be usable or unusable in different ways, indicating that further research is needed to isolate the specific features that searchers find frustrating, or not, in searching structured records via a hypertext-based IR system.
  12. Pao, M.L.: Retrieval differences between term and citation indexing (1989) 0.03
    0.03218762 = product of:
      0.118021265 = sum of:
        0.08421365 = weight(_text_:higher in 3566) [ClassicSimilarity], result of:
          0.08421365 = score(doc=3566,freq=2.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.46428242 = fieldWeight in 3566, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.0625 = fieldNorm(doc=3566)
        0.012927286 = weight(_text_:of in 3566) [ClassicSimilarity], result of:
          0.012927286 = score(doc=3566,freq=6.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.23940048 = fieldWeight in 3566, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3566)
        0.02088033 = weight(_text_:on in 3566) [ClassicSimilarity], result of:
          0.02088033 = score(doc=3566,freq=4.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.27492687 = fieldWeight in 3566, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0625 = fieldNorm(doc=3566)
      0.27272728 = coord(3/11)
    
    Abstract
    A retrieval experiment was conducted to compare on-line searching using terms as opposed to citations. This is the first study in which a single database was used to retrieve two equivalent sets for each query, one using terms found in the bibliographic record to achieve higher recall, and the other using cited documents. Reports on the use of a second citation searching strategy. Overall, by using both types of search keys, the total recall is increased.
    Source
    Information, knowledge, evolution. Proceedings of the 44th FID congress, Helsinki, 28.8.-1.9.1988. Ed. by S. Koskiala and R. Launo
  13. Hull, D.A.: Stemming algorithms : a case study for detailed evaluation (1996) 0.03
    0.031654097 = product of:
      0.116065025 = sum of:
        0.0184714 = weight(_text_:of in 2999) [ClassicSimilarity], result of:
          0.0184714 = score(doc=2999,freq=16.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.34207192 = fieldWeight in 2999, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2999)
        0.012919044 = weight(_text_:on in 2999) [ClassicSimilarity], result of:
          0.012919044 = score(doc=2999,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.17010231 = fieldWeight in 2999, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2999)
        0.08467458 = weight(_text_:great in 2999) [ClassicSimilarity], result of:
          0.08467458 = score(doc=2999,freq=2.0), product of:
            0.19443816 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.034531306 = queryNorm
            0.43548337 = fieldWeight in 2999, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2999)
      0.27272728 = coord(3/11)
    
    Abstract
    The majority of information retrieval experiments are evaluated by measures such as average precision and average recall. Fundamental decisions about the superiority of one retrieval technique over another are made solely on the basis of these measures. We claim that average performance figures need to be validated with a careful statistical analysis and that there is a great deal of additional information that can be uncovered by looking closely at the results of individual queries. This article is a case study of stemming algorithms which describes a number of novel approaches to evaluation and demonstrates their value.
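    For context, "average precision" for a single query is usually the mean of the precision values at the ranks of the relevant documents, and these per-query values are averaged again over the query set; a standard formulation follows (notation mine, not taken from Hull's paper).

```latex
\[
\mathrm{AP}(q) = \frac{1}{R_q} \sum_{k=1}^{n} P(k)\,\mathrm{rel}(k),
\qquad
\mathrm{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{AP}(q)
\]
```

    where \(R_q\) is the number of relevant documents for query \(q\), \(P(k)\) is the precision at rank \(k\), and \(\mathrm{rel}(k)\) equals 1 if the document at rank \(k\) is relevant and 0 otherwise.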
    Source
    Journal of the American Society for Information Science. 47(1996) no.1, S.70-84
  14. Wien, C.: Sample sizes and composition : their effect on recall and precision in IR experiments with OPACs (2000) 0.03
    0.031462923 = product of:
      0.115364045 = sum of:
        0.07492303 = weight(_text_:effect in 5368) [ClassicSimilarity], result of:
          0.07492303 = score(doc=5368,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.4096403 = fieldWeight in 5368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5368)
        0.014602924 = weight(_text_:of in 5368) [ClassicSimilarity], result of:
          0.014602924 = score(doc=5368,freq=10.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2704316 = fieldWeight in 5368, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5368)
        0.025838088 = weight(_text_:on in 5368) [ClassicSimilarity], result of:
          0.025838088 = score(doc=5368,freq=8.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.34020463 = fieldWeight in 5368, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5368)
      0.27272728 = coord(3/11)
    
    Abstract
    This article discusses how samples of records for laboratory IR experiments on OPACs can be constructed so that results obtained from different experiments can be compared. The literature on laboratory IR experiments seems to indicate that retrieval effectiveness (recall and precision) is affected by the way the samples of records for such experiments are generated; in particular, the number of records and their subject-area coverage seem to affect retrieval effectiveness. This article contains suggestions for the construction of samples for laboratory IR experiments on OPACs and demonstrates that retrieval effectiveness is affected by differences in sample size and composition.
  15. Pao, M.L.; Worthen, D.B.: Retrieval effectiveness by semantic and citation searching (1989) 0.03
    0.031120092 = product of:
      0.114107 = sum of:
        0.08932207 = weight(_text_:higher in 2288) [ClassicSimilarity], result of:
          0.08932207 = score(doc=2288,freq=4.0), product of:
            0.18138453 = queryWeight, product of:
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.034531306 = queryNorm
            0.4924459 = fieldWeight in 2288, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.252756 = idf(docFreq=628, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
        0.013711456 = weight(_text_:of in 2288) [ClassicSimilarity], result of:
          0.013711456 = score(doc=2288,freq=12.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.25392252 = fieldWeight in 2288, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
        0.011073467 = weight(_text_:on in 2288) [ClassicSimilarity], result of:
          0.011073467 = score(doc=2288,freq=2.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.14580199 = fieldWeight in 2288, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2288)
      0.27272728 = coord(3/11)
    
    Abstract
    A pilot study on the relative retrieval effectiveness of semantic relevance (by terms) and pragmatic relevance (by citations) is reported. A single database has been constructed to provide access by both descriptors and cited references. For each question from a set of queries, two equivalent sets were retrieved. All retrieved items were evaluated by subject experts for relevance to their originating queries. We conclude that there are essentially two types of relevance at work, resulting in two different sets of documents. Using both search methods to create a union set is likely to increase recall. Those few items retrieved by the intersection of the two methods tend to result in higher precision. Suggestions are made to develop a front-end system to display the overlapping items for higher precision and to manipulate and rank the union of the sets retrieved by the two search modes for improved output.
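    The union/intersection observation can be stated set-theoretically: with retrieved sets \(A\) (terms) and \(B\) (citations) and relevant set \(Rel\), recall of the union can never fall below the recall of either set alone, whereas the higher precision of the intersection is an empirical tendency reported by the authors, not a mathematical guarantee (notation mine).

```latex
\[
\mathrm{recall}(A \cup B) \;=\; \frac{|(A \cup B) \cap Rel|}{|Rel|}
\;\ge\; \max\bigl(\mathrm{recall}(A),\, \mathrm{recall}(B)\bigr)
\]
```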
    Source
    Journal of the American Society for Information Science. 40(1989), S.226-235
  16. Pirkola, A.; Jarvelin, K.: ¬The effect of anaphor and ellipsis resolution on proximity searching in a text database (1995) 0.03
    0.030601608 = product of:
      0.11220589 = sum of:
        0.07568369 = weight(_text_:effect in 4088) [ClassicSimilarity], result of:
          0.07568369 = score(doc=4088,freq=4.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.41379923 = fieldWeight in 4088, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4088)
        0.018066432 = weight(_text_:of in 4088) [ClassicSimilarity], result of:
          0.018066432 = score(doc=4088,freq=30.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.33457235 = fieldWeight in 4088, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4088)
        0.018455777 = weight(_text_:on in 4088) [ClassicSimilarity], result of:
          0.018455777 = score(doc=4088,freq=8.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.24300331 = fieldWeight in 4088, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4088)
      0.27272728 = coord(3/11)
    
    Abstract
    So far, methods for ellipsis and anaphor resolution have been developed and the effects of anaphor resolution have been analyzed in the context of statistical information retrieval of scientific abstracts. No significant improvements have been observed. Analyzes the effects of ellipsis and anaphor resolution on proximity searching in a full-text database. Anaphora and ellipsis are classified on the basis of the type of their correlates / antecedents rather than, as is traditional, on the basis of their own linguistic type. The classification differentiates proper names and common nouns as basic words, compound words, and phrases. The study was carried out in a newspaper article database containing 55,000 full-text articles. A set of 154 keyword pairs in different categories was created. Human resolution of keyword ellipsis and anaphora was performed to identify sentences and paragraphs which would match proximity searches after resolution. Findings indicate that ellipsis and anaphor resolution is most relevant for proper name phrases and only marginal in the other keyword categories. Therefore the recall effect of restricted resolution of proper name phrases only was analyzed for keyword pairs containing at least one proper name phrase. Findings indicate a recall increase of 38.2% in sentence searches and 28.8% in paragraph searches when proper name ellipses were resolved. The recall increase was 17.6% in sentence searches and 19.8% in paragraph searches when proper name anaphora were resolved. A simple and computationally justifiable resolution method might be developed for proper name phrases only, to support keyword-based full-text information retrieval. Discusses elements of such a method.
  17. Lazonder, A.W.; Biemans, H.J.A.; Wopereis, I.G.J.H.: Differences between novice and experienced users in searching information on the World Wide Web (2000) 0.03
    0.02924423 = product of:
      0.10722884 = sum of:
        0.064219736 = weight(_text_:effect in 4598) [ClassicSimilarity], result of:
          0.064219736 = score(doc=4598,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.35112026 = fieldWeight in 4598, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=4598)
        0.013711456 = weight(_text_:of in 4598) [ClassicSimilarity], result of:
          0.013711456 = score(doc=4598,freq=12.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.25392252 = fieldWeight in 4598, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4598)
        0.029297642 = weight(_text_:on in 4598) [ClassicSimilarity], result of:
          0.029297642 = score(doc=4598,freq=14.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.38575584 = fieldWeight in 4598, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=4598)
      0.27272728 = coord(3/11)
    
    Abstract
    Searching for information on the WWW basically comes down to locating an appropriate Web site and to retrieving relevant information from that site. This study examined the effect of a user's WWW experience on both phases of the search process. 35 students from 2 schools for Dutch pre-university education were observed while performing 3 search tasks. The results indicate that subjects with WWW-experience are more proficient in locating Web sites than are novice WWW-users. The observed differences were ascribed to the experts' superior skills in operating Web search engines. However, on tasks that required subjects to locate information on specific Web sites, the performance of experienced and novice users was equivalent - a result that is in line with hypertext research. Based on these findings, implications for training and supporting students in searching for information on the WWW are identified. Finally, the role of the subjects' level of domain expertise is discussed and directions for future research are proposed
    Source
    Journal of the American Society for Information Science. 51(2000) no.6, S.576-581
  18. Lesk, M.E.; Salton, G.: Relevance assessments and retrieval system evaluation (1969) 0.03
    0.028501282 = product of:
      0.1045047 = sum of:
        0.07492303 = weight(_text_:effect in 4151) [ClassicSimilarity], result of:
          0.07492303 = score(doc=4151,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.4096403 = fieldWeight in 4151, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4151)
        0.011311376 = weight(_text_:of in 4151) [ClassicSimilarity], result of:
          0.011311376 = score(doc=4151,freq=6.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.20947541 = fieldWeight in 4151, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4151)
        0.01827029 = weight(_text_:on in 4151) [ClassicSimilarity], result of:
          0.01827029 = score(doc=4151,freq=4.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.24056101 = fieldWeight in 4151, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4151)
      0.27272728 = coord(3/11)
    
    Abstract
    Two widely used criteria for evaluating the effectiveness of information retrieval systems are recall and precision. Since the determination of these measures depends on a distinction between documents which are relevant to a given query and documents which are not relevant to that query, it has sometimes been claimed that an accurate, generally valid evaluation cannot be based on recall and precision measures. A study was made to determine the effect of variations in relevance assessments on these measures; it was found that such variations do not produce significant variations in average recall and precision. It thus appears that properly computed recall and precision data may represent effectiveness indicators which are generally valid for many distinct user classes.
  19. Wood, F.; Ford, N.; Walsh, C.: ¬The effect of postings information on search behaviour (1994) 0.03
    0.027325248 = product of:
      0.10019258 = sum of:
        0.064219736 = weight(_text_:effect in 6890) [ClassicSimilarity], result of:
          0.064219736 = score(doc=6890,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.35112026 = fieldWeight in 6890, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=6890)
        0.016793035 = weight(_text_:of in 6890) [ClassicSimilarity], result of:
          0.016793035 = score(doc=6890,freq=18.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.3109903 = fieldWeight in 6890, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=6890)
        0.01917981 = weight(_text_:on in 6890) [ClassicSimilarity], result of:
          0.01917981 = score(doc=6890,freq=6.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.25253648 = fieldWeight in 6890, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=6890)
      0.27272728 = coord(3/11)
    
    Abstract
    How postings information is used for inverted-file searching was investigated by comparing searches of the LISA database on CD-ROM, made by postgraduate students at the Department of Information Studies, with and without postings information. Performance (the number of relevant references, precision and recall) was not significantly different, but searches with postings information took more time, and more sets were viewed, than in searches without postings. Postings information was used to make decisions to narrow or broaden the search and to view or print the references. The same techniques were used to amend searches whether or not postings information was available. Users decided that a search was satisfactory on the basis of the search results, and consequently many searches done without postings were still considered satisfactory. However, searchers thought that the lack of postings information had affected 90% of their searches. Differences in search performance and searching behaviour were found in participants who were shown to have different learning styles, as measured by Witkin's Embedded Figures Test and the Lancaster Short Inventory of Approaches to Learning Test. These differences were, in part, explained by the differences in behaviour indicated by their learning styles.
    Source
    Journal of information science. 20(1994) no.1, S.29-40
  20. Hallet, K.S.: Separate but equal? : A system comparison study of MEDLINE's controlled vocabulary MeSH (1998) 0.03
    0.02706332 = product of:
      0.09923217 = sum of:
        0.064219736 = weight(_text_:effect in 3553) [ClassicSimilarity], result of:
          0.064219736 = score(doc=3553,freq=2.0), product of:
            0.18289955 = queryWeight, product of:
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.034531306 = queryNorm
            0.35112026 = fieldWeight in 3553, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.29663 = idf(docFreq=601, maxDocs=44218)
              0.046875 = fieldNorm(doc=3553)
        0.015832627 = weight(_text_:of in 3553) [ClassicSimilarity], result of:
          0.015832627 = score(doc=3553,freq=16.0), product of:
            0.053998582 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.034531306 = queryNorm
            0.2932045 = fieldWeight in 3553, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3553)
        0.01917981 = weight(_text_:on in 3553) [ClassicSimilarity], result of:
          0.01917981 = score(doc=3553,freq=6.0), product of:
            0.07594867 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.034531306 = queryNorm
            0.25253648 = fieldWeight in 3553, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=3553)
      0.27272728 = coord(3/11)
    
    Abstract
    Reports results of a study to test the effect of controlled vocabulary search feature implementation on 2 online systems. Specifically, the study examined retrieval rates using 4 unique controlled vocabulary search features (Explode, major descriptor, descriptor, subheadings). 2 questions were addressed: what, if any, are the general differences between the controlled vocabulary implementations in DIALOG and Ovid; and what, if any, are the impacts of each of the differing controlled vocabulary search features upon retrieval rates? Each search feature was applied to 9 search queries obtained from a medical reference librarian. The same queries were searched in the complete MEDLINE file on the DIALOG and Ovid online host systems. The unique records (those records retrieved in only 1 of the 2 systems) were identified and analyzed. DIALOG produced as many or more records than Ovid in nearly 20% of the queries. Concludes that users need to be aware of system-specific designs that may require differing input strategies across different systems for the same unique controlled vocabulary search features. Makes recommendations and suggestions for future research.
    Source
    Bulletin of the Medical Library Association. 86(1998) no.4, S.491-495

Types

  • a 371
  • s 14
  • m 8
  • el 6
  • r 4
  • x 2
  • d 1
  • p 1