Search (88 results, page 1 of 5)

  • theme_ss:"Retrievalstudien"
  1. Tomaiuolo, N.G.; Parker, J.: Maximizing relevant retrieval : keyword and natural language searching (1998) 0.11
    0.11072004 = product of:
      0.22144008 = sum of:
        0.22144008 = sum of:
          0.123249 = weight(_text_:language in 6418) [ClassicSimilarity], result of:
            0.123249 = score(doc=6418,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.60685337 = fieldWeight in 6418, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.109375 = fieldNorm(doc=6418)
          0.098191075 = weight(_text_:22 in 6418) [ClassicSimilarity], result of:
            0.098191075 = score(doc=6418,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.5416616 = fieldWeight in 6418, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=6418)
      0.5 = coord(1/2)
    
    Source
    Online. 22(1998) no.6, S.57-58
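    The score breakdowns attached to each hit are Lucene ClassicSimilarity (TF-IDF) explain trees. As a minimal sketch, the 0.11 shown for result 1 can be recomputed in Python from the constants in the tree above; the helper names are ours, while the formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), clause weight = queryWeight * fieldWeight, damped by coord) are Lucene's documented ClassicSimilarity defaults.

      import math

      # Recompute the explain tree for result 1 (doc 6418) from its constants.
      def idf(doc_freq, max_docs):
          return 1.0 + math.log(max_docs / (doc_freq + 1))  # ClassicSimilarity idf

      def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                              # tf(freq) = sqrt(freq)
          query_weight = idf(doc_freq, max_docs) * query_norm
          field_weight = tf * idf(doc_freq, max_docs) * field_norm
          return query_weight * field_weight

      QUERY_NORM, MAX_DOCS, FIELD_NORM = 0.051766515, 44218, 0.109375
      w_language = term_weight(2.0, 2376, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.123249
      w_22 = term_weight(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)        # ~0.098191
      print((w_language + w_22) * 0.5)  # coord(1/2) -> ~0.11072004, shown as 0.11

    The same arithmetic, with smaller fieldNorm values for longer fields, reproduces every breakdown below.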
  2. The Fifth Text Retrieval Conference (TREC-5) (1997) 0.06
    0.063268594 = product of:
      0.12653719 = sum of:
        0.12653719 = sum of:
          0.07042801 = weight(_text_:language in 3087) [ClassicSimilarity], result of:
            0.07042801 = score(doc=3087,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.34677336 = fieldWeight in 3087, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0625 = fieldNorm(doc=3087)
          0.056109186 = weight(_text_:22 in 3087) [ClassicSimilarity], result of:
            0.056109186 = score(doc=3087,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.30952093 = fieldWeight in 3087, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3087)
      0.5 = coord(1/2)
    
    Abstract
    Proceedings of the 5th TREC conference, held in Gaithersburg, Maryland, Nov 20-22, 1996. The aim of the conference was to discuss retrieval techniques for large test collections. Different research groups applied different techniques, such as automated thesauri, term weighting, natural language techniques, relevance feedback and advanced pattern matching, to information retrieval from the same large database. This procedure makes it possible to compare the results. The proceedings include papers, tables of the system results, and brief system descriptions including timing and storage information.
  3. The Eleventh Text Retrieval Conference, TREC 2002 (2003) 0.06
    0.063268594 = product of:
      0.12653719 = sum of:
        0.12653719 = sum of:
          0.07042801 = weight(_text_:language in 4049) [ClassicSimilarity], result of:
            0.07042801 = score(doc=4049,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.34677336 = fieldWeight in 4049, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0625 = fieldNorm(doc=4049)
          0.056109186 = weight(_text_:22 in 4049) [ClassicSimilarity], result of:
            0.056109186 = score(doc=4049,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.30952093 = fieldWeight in 4049, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4049)
      0.5 = coord(1/2)
    
    Abstract
    Proceedings of the 11th TREC conference, held in Gaithersburg, Maryland (USA), November 19-22, 2002. The aim of the conference was to discuss retrieval and related information-seeking tasks for large test collections. 93 research groups used different techniques for information retrieval from the same large database. This procedure makes it possible to compare the results. The tasks were: cross-language searching, filtering, interactive searching, searching for novelty, question answering, searching for video shots, and Web searching.
  4. Petrelli, D.: On the role of user-centred evaluation in the advancement of interactive information retrieval (2008) 0.06
    0.0556544 = product of:
      0.1113088 = sum of:
        0.1113088 = sum of:
          0.07624056 = weight(_text_:language in 2026) [ClassicSimilarity], result of:
            0.07624056 = score(doc=2026,freq=6.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.3753932 = fieldWeight in 2026, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
          0.03506824 = weight(_text_:22 in 2026) [ClassicSimilarity], result of:
            0.03506824 = score(doc=2026,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.19345059 = fieldWeight in 2026, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2026)
      0.5 = coord(1/2)
    
    Abstract
    This paper discusses the role of user-centred evaluations as an essential method for researching interactive information retrieval. It draws mainly on the work carried out during the Clarity Project, where different user-centred evaluations were run during the lifecycle of a cross-language information retrieval system. The iterative testing was not only instrumental to the development of a usable system; it also enhanced our knowledge of the potential, impact, and actual use of cross-language information retrieval technology. Indeed, the role of the user evaluation was dual: by testing a specific prototype it was possible to gain a micro-view and assess the effectiveness of each component of the complex system; by cumulating the results of all the evaluations (in total 43 people were involved) it was possible to build a macro-view of how cross-language retrieval would impact on users and their tasks. By showing the richness of the results that can be acquired, this paper aims to stimulate researchers to consider user-centred evaluations as a flexible, adaptable and comprehensive technique for investigating non-traditional information access systems.
    Source
    Information processing and management. 44(2008) no.1, S.22-38
  5. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.06
    0.05536002 = product of:
      0.11072004 = sum of:
        0.11072004 = sum of:
          0.0616245 = weight(_text_:language in 5001) [ClassicSimilarity], result of:
            0.0616245 = score(doc=5001,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.30342668 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
          0.049095538 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
            0.049095538 = score(doc=5001,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.2708308 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
      0.5 = coord(1/2)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields, the social sciences had the best retrieval rate, science the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  6. Sievert, M.E.; McKinin, E.J.: Why full-text misses some relevant documents : an analysis of documents not retrieved by CCML or MEDIS (1989) 0.05
    0.047451444 = product of:
      0.09490289 = sum of:
        0.09490289 = sum of:
          0.052821 = weight(_text_:language in 3564) [ClassicSimilarity], result of:
            0.052821 = score(doc=3564,freq=2.0), product of:
              0.2030952 = queryWeight, product of:
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.051766515 = queryNorm
              0.26008 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9232929 = idf(docFreq=2376, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
          0.04208189 = weight(_text_:22 in 3564) [ClassicSimilarity], result of:
            0.04208189 = score(doc=3564,freq=2.0), product of:
              0.18127751 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051766515 = queryNorm
              0.23214069 = fieldWeight in 3564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3564)
      0.5 = coord(1/2)
    
    Abstract
    Searches conducted as part of the MEDLINE/Full-Text Research Project revealed that the full-text databases of clinical medical journal articles (CCML (Comprehensive Core Medical Library) from BRS Information Technologies, and MEDIS from Mead Data Central) did not retrieve all the relevant citations. An analysis of the data indicated that 204 relevant citations were retrieved only by MEDLINE. A comparison of the strategies used on the full-text databases with the text of the articles behind these 204 citations revealed that two reasons contributed to these failures: the searchers often constructed a restrictive strategy, which resulted in the loss of relevant documents; and, as in other kinds of retrieval, the problems of natural language caused the loss of relevant documents.
    Date
    9. 1.1996 10:22:31
  7. Airio, E.: Who benefits from CLIR in web retrieval? (2008) 0.04
    0.04175867 = product of:
      0.08351734 = sum of:
        0.08351734 = product of:
          0.16703469 = sum of:
            0.16703469 = weight(_text_:language in 2342) [ClassicSimilarity], result of:
              0.16703469 = score(doc=2342,freq=20.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.8224453 = fieldWeight in 2342, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2342)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The aim of the current paper is to test whether query translation is beneficial in web retrieval. Design/methodology/approach - The language pairs were Finnish-Swedish, English-German and Finnish-French. A total of 12-18 participants were recruited for each language pair. Each participant performed four retrieval tasks. The author's aim was to compare the performance of the translated queries with that of the target-language queries. Thus, the author asked participants to formulate a source-language query and a target-language query for each task. The source-language queries were translated into the target language using a dictionary-based system. For English-German, machine translation was also used. The author used Google as the search engine. Findings - The results differed depending on the language pair. The author concluded that dictionary coverage had an effect on the results. On average, the results of query translation were better than in traditional laboratory tests. Originality/value - This research shows that query translation on the web is beneficial, especially for users with moderate and non-active language skills. This is valuable information for developers of cross-language information retrieval systems.
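    The dictionary-based translation step described above can be pictured with a small sketch; the helper and the toy Finnish-Swedish entries below are hypothetical illustrations, not the system used in the study. Note how a term missing from the dictionary falls through untranslated, which is one way dictionary coverage shapes the results.

      # Hypothetical dictionary-based query translation: keep every listed
      # equivalent of each source term, pass unknown terms through unchanged.
      def translate_query(terms, dictionary):
          translated = []
          for term in terms:
              translated.extend(dictionary.get(term, [term]))
          return translated

      FI_SV = {  # toy Finnish->Swedish entries, for illustration only
          "tiedonhaku": ["informationssökning"],
          "kirjasto": ["bibliotek"],
      }
      print(translate_query(["tiedonhaku", "kirjasto", "www"], FI_SV))
      # ['informationssökning', 'bibliotek', 'www']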
  8. Davis, M.W.: On the effective use of large parallel corpora in cross-language text retrieval (1998) 0.04
    0.037350092 = product of:
      0.074700184 = sum of:
        0.074700184 = product of:
          0.14940037 = sum of:
            0.14940037 = weight(_text_:language in 6302) [ClassicSimilarity], result of:
              0.14940037 = score(doc=6302,freq=4.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.7356174 = fieldWeight in 6302, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6302)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cross-language information retrieval. Ed.: G. Grefenstette
  9. Hansen, P.; Karlgren, J.: Effects of foreign language and task scenario on relevance assessment (2005) 0.03
    0.031125076 = product of:
      0.062250152 = sum of:
        0.062250152 = product of:
          0.124500304 = sum of:
            0.124500304 = weight(_text_:language in 4393) [ClassicSimilarity], result of:
              0.124500304 = score(doc=4393,freq=16.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.6130145 = fieldWeight in 4393, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4393)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - This paper aims to investigate how readers assess the relevance of retrieved documents in a foreign language they know well compared with their native language, and whether work-task scenario descriptions have an effect on the assessment process. Design/methodology/approach - Queries, test collections, and relevance assessments were used from the 2002 Interactive CLEF. Swedish first-language speakers, fluent in English, were given simulated information-seeking scenarios and presented with retrieval results in both languages. Twenty-eight subjects in four groups were asked to rate the retrieved text documents by relevance. A two-level work-task scenario description framework was developed and applied to facilitate the study of context effects on the assessment process. Findings - Relevance assessment takes longer in a foreign language than in the user's first language. The quality of assessments, by comparison with pre-assessed results, is inferior to those made in the users' first language. Work-task scenario descriptions had an effect on the assessment process, both by measured access time and by self-report by subjects. However, no effects on results by traditional relevance ranking were detectable. This may be an argument for extending the traditional IR experimental topical relevance measures to cater for context effects. Originality/value - An extended two-level work-task scenario description framework was developed and applied. Contextual aspects had an effect on the relevance assessment process. English texts took longer to assess than Swedish ones and were assessed less well, especially for the most difficult queries. The IR research field needs to close this gap and to design information access systems with users' language competence in mind.
  10. Bernstein, L.M.; Williamson, R.E.: Testing of a natural language retrieval system for a full text knowledge base (1984) 0.03
    0.03081225 = product of:
      0.0616245 = sum of:
        0.0616245 = product of:
          0.123249 = sum of:
            0.123249 = weight(_text_:language in 1803) [ClassicSimilarity], result of:
              0.123249 = score(doc=1803,freq=2.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.60685337 = fieldWeight in 1803, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1803)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Hersh, W.R.; Hickam, D.H.: An evaluation of interactive Boolean and natural language searching with an online medical textbook (1995) 0.03
    0.03081225 = product of:
      0.0616245 = sum of:
        0.0616245 = product of:
          0.123249 = sum of:
            0.123249 = weight(_text_:language in 2651) [ClassicSimilarity], result of:
              0.123249 = score(doc=2651,freq=8.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.60685337 = fieldWeight in 2651, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2651)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Few studies have compared the interactive use of Boolean and natural language search systems. Studies the use of 3 retrieval systems by senior medical students searching on queries generated by actual physicians in a clinical setting. The searchers were randomized to search on 2 or 3 different retrieval systems: a Boolean system, a word-based natural language system, and a concept-based natural language system. Results showed no statistically significant differences in recall or precision among the 3 systems. Likewise, there was no user preference for any system over the others. The study revealed problems with traditional measures of retrieval evaluation when applied to the interactive search setting.
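    Recall and precision, the measures compared in this study, reduce to simple set ratios over the retrieved and relevant documents; the document IDs below are invented purely for illustration.

      # Set-based recall and precision; the sample data is illustrative only.
      def recall(retrieved, relevant):
          return len(retrieved & relevant) / len(relevant)

      def precision(retrieved, relevant):
          return len(retrieved & relevant) / len(retrieved)

      retrieved = {"d1", "d2", "d3", "d4"}
      relevant = {"d2", "d4", "d7"}
      print(recall(retrieved, relevant))     # 2/3, about 0.67
      print(precision(retrieved, relevant))  # 2/4 = 0.5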
  12. Feldman, S.: Testing natural language : comparing DIALOG, TARGET, and DR-LINK (1996) 0.03
    0.030496225 = product of:
      0.06099245 = sum of:
        0.06099245 = product of:
          0.1219849 = sum of:
            0.1219849 = weight(_text_:language in 7463) [ClassicSimilarity], result of:
              0.1219849 = score(doc=7463,freq=6.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.60062915 = fieldWeight in 7463, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7463)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Compares online searching on DIALOG (a traditional Boolean system), TARGET (a relevance-ranking system) and DR-LINK (an advanced intelligent text processing system), in order to establish the differing strengths of traditional and natural language processing search systems. Details the example search queries used in the comparison and how each of the systems performed. Considers the implications of the findings for professional information searchers and end users. Natural language processing systems are useful because they can develop a wider understanding of queries than traditional systems do.
  13. Bhattacharyya, K.: The effectiveness of natural language in science indexing and retrieval (1974) 0.03
    0.029527843 = product of:
      0.059055686 = sum of:
        0.059055686 = product of:
          0.11811137 = sum of:
            0.11811137 = weight(_text_:language in 2628) [ClassicSimilarity], result of:
              0.11811137 = score(doc=2628,freq=10.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.5815567 = fieldWeight in 2628, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2628)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper examines the implications of the findings of evaluative tests regarding the retrieval performance of natural language in various subject fields. It suggests parallel investigations into the structure of natural language, with particular reference to terminology, as used in the different branches of basic science. The criteria for defining the terminological consistency of a subject are formulated, and a measure is suggested for determining the degree of terminological consistency. The terminological and information structures of specific disciplines such as chemistry, physics, botany, zoology, and geology, the circumstances in which terms originate, and the efforts made by the international scientific community to standardize the terminology in their respective disciplines are examined in detail. This investigation shows why and how an artificially created scientific language finds it impossible to keep pace with current developments, and thus points to the source of strength of natural language.
  14. Strzalkowski, T.; Perez-Carballo, J.: Natural language information retrieval : TREC-4 report (1996) 0.03
    0.0264105 = product of:
      0.052821 = sum of:
        0.052821 = product of:
          0.105642 = sum of:
            0.105642 = weight(_text_:language in 3211) [ClassicSimilarity], result of:
              0.105642 = score(doc=3211,freq=2.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.52016 = fieldWeight in 3211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3211)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Cleverdon, C.W.; Mills, J.: The testing of index language devices (1963) 0.03
    0.0264105 = product of:
      0.052821 = sum of:
        0.052821 = product of:
          0.105642 = sum of:
            0.105642 = weight(_text_:language in 577) [ClassicSimilarity], result of:
              0.105642 = score(doc=577,freq=2.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.52016 = fieldWeight in 577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.09375 = fieldNorm(doc=577)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Davis, M.: New experiments in cross-language text retrieval at NMSU's computing research lab (1997) 0.03
    0.0264105 = product of:
      0.052821 = sum of:
        0.052821 = product of:
          0.105642 = sum of:
            0.105642 = weight(_text_:language in 3111) [ClassicSimilarity], result of:
              0.105642 = score(doc=3111,freq=2.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.52016 = fieldWeight in 3111, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Sheridan, P.; Ballerini, J.P.; Schäuble, P.: Building a large multilingual test collection from comparable news documents (1998) 0.03
    0.0264105 = product of:
      0.052821 = sum of:
        0.052821 = product of:
          0.105642 = sum of:
            0.105642 = weight(_text_:language in 6298) [ClassicSimilarity], result of:
              0.105642 = score(doc=6298,freq=2.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.52016 = fieldWeight in 6298, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6298)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cross-language information retrieval. Ed.: G. Grefenstette
  18. Hiemstra, D.; Kraaij, W.: A language-modeling approach to TREC (2005) 0.03
    0.0264105 = product of:
      0.052821 = sum of:
        0.052821 = product of:
          0.105642 = sum of:
            0.105642 = weight(_text_:language in 5091) [ClassicSimilarity], result of:
              0.105642 = score(doc=5091,freq=2.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.52016 = fieldWeight in 5091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. López-Ostenero, F.; Peinado, V.; Gonzalo, J.; Verdejo, F.: Interactive question answering : Is Cross-Language harder than monolingual searching? (2008) 0.03
    0.0264105 = product of:
      0.052821 = sum of:
        0.052821 = product of:
          0.105642 = sum of:
            0.105642 = weight(_text_:language in 2023) [ClassicSimilarity], result of:
              0.105642 = score(doc=2023,freq=8.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.52016 = fieldWeight in 2023, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2023)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Is cross-language answer finding harder than monolingual answer finding for users? In this paper we provide initial quantitative and qualitative evidence to answer this question. In our study, which involves 16 users searching questions under four different system conditions, we find that interactive cross-language answer finding is not substantially harder (in terms of accuracy) than its monolingual counterpart, using general-purpose machine translation systems and standard information retrieval machinery, although it takes more time. We have also seen that users need more context to provide accurate answers (full documents) than systems usually consider (paragraphs or passages). Finally, we also discuss the limitations of standard evaluation methodologies for interactive information retrieval experiments in the case of cross-language question answering.
  20. Feng, S.: A comparative study of indexing languages in single and multidatabase searching (1989) 0.02
    0.02490006 = product of:
      0.04980012 = sum of:
        0.04980012 = product of:
          0.09960024 = sum of:
            0.09960024 = weight(_text_:language in 2494) [ClassicSimilarity], result of:
              0.09960024 = score(doc=2494,freq=4.0), product of:
                0.2030952 = queryWeight, product of:
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.051766515 = queryNorm
                0.4904116 = fieldWeight in 2494, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.9232929 = idf(docFreq=2376, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2494)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    An experiment was conducted using 3 databases in library and information science - Library and Information Science Abstracts (LISA), Information Science Abstracts and ERIC - to investigate some of the main factors affecting online searching: the effectiveness of search vocabularies, the combinations of fields searched, and the overlap among databases. Natural language, controlled vocabulary, and a mixture of natural language and controlled terms were tested using different fields of the bibliographic records. Also discusses a comparative evaluation of single- and multi-database searching, measuring the overlap among the databases and their influence upon online searching.
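    Measuring overlap in such a design reduces to comparing the sets of records each database returns for the same search profile; the database labels and record IDs below are invented, and Jaccard is just one reasonable overlap measure.

      # Pairwise overlap among databases for one query; data is illustrative.
      from itertools import combinations

      results = {
          "LISA": {"r1", "r2", "r3"},
          "ISA": {"r2", "r3", "r4"},
          "ERIC": {"r3", "r5"},
      }
      for a, b in combinations(results, 2):
          shared = results[a] & results[b]
          union = results[a] | results[b]
          print(f"{a}/{b}: {len(shared)} shared, Jaccard = {len(shared)/len(union):.2f}")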

Languages

  • e 80
  • d 5
  • f 1
  • m 1

Types

  • a 79
  • s 8
  • m 5