Search (82 results, page 1 of 5)

  • theme_ss:"Retrievalalgorithmen"
  1. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.05
    0.051986672 = product of:
      0.103973344 = sum of:
        0.103973344 = sum of:
          0.0651857 = weight(_text_:c in 1319) [ClassicSimilarity], result of:
            0.0651857 = score(doc=1319,freq=6.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.46207014 = fieldWeight in 1319, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
          0.038787637 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
            0.038787637 = score(doc=1319,freq=2.0), product of:
              0.14321722 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.040897828 = queryNorm
              0.2708308 = fieldWeight in 1319, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
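  Each entry above is followed by the search engine's "explain" tree for its relevance score. The numbers compose as a TF-IDF product; the sketch below reproduces the tree for entry 1, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm), with queryNorm taken as given from the output:

  ```python
  import math

  # Assumed ClassicSimilarity components (values cross-checked against the
  # explain tree for doc 1319 above).
  def idf(doc_freq: int, max_docs: int) -> float:
      # e.g. idf(3817, 44218) ~ 3.4494052, as printed in the tree
      return 1.0 + math.log(max_docs / (doc_freq + 1))

  def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
      i = idf(doc_freq, max_docs)
      query_weight = i * query_norm            # "queryWeight" line
      field_weight = math.sqrt(freq) * i * field_norm  # "fieldWeight" line
      return query_weight * field_weight       # "score(doc=...)" line

  # Term "c" (freq=6) and term "22" (freq=2) in doc 1319:
  s_c  = term_score(6.0, 3817, 44218, 0.040897828, 0.0546875)   # ~0.0651857
  s_22 = term_score(2.0, 3622, 44218, 0.040897828, 0.0546875)   # ~0.038787637

  # Final document score: coord(1/2) applied to the sum of term scores.
  final = 0.5 * (s_c + s_22)                                     # ~0.051986672
  print(round(final, 9))
  ```

  The same decomposition applies to every explain tree in this result list; only freq, docFreq, and fieldNorm change per entry.
  
  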
  2. Faloutsos, C.: Signature files (1992) 0.04
    0.043670066 = product of:
      0.08734013 = sum of:
        0.08734013 = sum of:
          0.04301141 = weight(_text_:c in 3499) [ClassicSimilarity], result of:
            0.04301141 = score(doc=3499,freq=2.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.3048872 = fieldWeight in 3499, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0625 = fieldNorm(doc=3499)
          0.044328727 = weight(_text_:22 in 3499) [ClassicSimilarity], result of:
            0.044328727 = score(doc=3499,freq=2.0), product of:
              0.14321722 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.040897828 = queryNorm
              0.30952093 = fieldWeight in 3499, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3499)
      0.5 = coord(1/2)
    
    Date
    7. 5.1999 15:22:48
  3. Information retrieval : data structures and algorithms (1992) 0.03
    0.03470791 = sum of:
      0.0078257825 = product of:
        0.046954695 = sum of:
          0.046954695 = weight(_text_:authors in 3495) [ClassicSimilarity], result of:
            0.046954695 = score(doc=3495,freq=2.0), product of:
              0.1864456 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.040897828 = queryNorm
              0.25184128 = fieldWeight in 3495, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3495)
        0.16666667 = coord(1/6)
      0.026882129 = product of:
        0.053764258 = sum of:
          0.053764258 = weight(_text_:c in 3495) [ClassicSimilarity], result of:
            0.053764258 = score(doc=3495,freq=8.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.381109 = fieldWeight in 3495, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3495)
        0.5 = coord(1/2)
    
    Abstract
    The book consists of separate chapters by some 20 different authors. It covers many of the information retrieval algorithms, including methods of file organization, file search and access, and query processing
    Content
    An edited volume containing data structures and algorithms for information retrieval, including a disk with examples written in C. For programmers and students interested in parsing text and automated indexing, it is the first collection in book form of the basic data structures and algorithms that are critical to the storage and retrieval of documents. - Contains the chapters: FRAKES, W.B.: Introduction to information storage and retrieval systems; BAEZA-YATES, R.A.: Introduction to data structures and algorithms related to information retrieval; HARMAN, D. et al.: Inverted files; FALOUTSOS, C.: Signature files; GONNET, G.H. et al.: New indices for text: PAT trees and PAT arrays; FORD, D.A. and S. CHRISTODOULAKIS: File organizations for optical disks; FOX, C.: Lexical analysis and stoplists; FRAKES, W.B.: Stemming algorithms; SRINIVASAN, P.: Thesaurus construction; BAEZA-YATES, R.A.: String searching algorithms; HARMAN, D.: Relevance feedback and other query modification techniques; WARTIK, S.: Boolean operators; WARTIK, S. et al.: Hashing algorithms; HARMAN, D.: Ranking algorithms; FOX, E. et al.: Extended Boolean models; RASMUSSEN, E.: Clustering algorithms; HOLLAAR, L.: Special-purpose hardware for information retrieval; STANFILL, C.: Parallel information retrieval algorithms
  4. Shiri, A.A.; Revie, C.: Query expansion behavior within a thesaurus-enhanced search environment : a user-centered evaluation (2006) 0.03
    0.032861266 = product of:
      0.06572253 = sum of:
        0.06572253 = sum of:
          0.038017076 = weight(_text_:c in 56) [ClassicSimilarity], result of:
            0.038017076 = score(doc=56,freq=4.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.2694848 = fieldWeight in 56, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
          0.027705455 = weight(_text_:22 in 56) [ClassicSimilarity], result of:
            0.027705455 = score(doc=56,freq=2.0), product of:
              0.14321722 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.040897828 = queryNorm
              0.19345059 = fieldWeight in 56, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
      0.5 = coord(1/2)
    
    Abstract
    The study reported here investigated the query expansion behavior of end-users interacting with a thesaurus-enhanced search system on the Web. Two groups, namely academic staff and postgraduate students, were recruited into this study. Data were collected from 90 searches performed by 30 users using the OVID interface to the CAB abstracts database. Data-gathering techniques included questionnaires, screen capturing software, and interviews. The results presented here relate to issues of search-topic and search-term characteristics, number and types of expanded queries, usefulness of thesaurus terms, and behavioral differences between academic staff and postgraduate students in their interaction. The key conclusions drawn were that (a) academic staff chose more narrow and synonymous terms than did postgraduate students, who generally selected broader and related terms; (b) topic complexity affected users' interaction with the thesaurus in that complex topics required more query expansion and search term selection; (c) users' prior topic-search experience appeared to have a significant effect on their selection and evaluation of thesaurus terms; (d) in 50% of the searches where additional terms were suggested from the thesaurus, users stated that they had not been aware of the terms at the beginning of the search; this observation was particularly noticeable in the case of postgraduate students.
    Date
    22. 7.2006 16:32:43
  5. Klas, C.-P.; Fuhr, N.; Schaefer, A.: Evaluating strategic support for information access in the DAFFODIL system (2004) 0.03
    0.03275255 = product of:
      0.0655051 = sum of:
        0.0655051 = sum of:
          0.03225856 = weight(_text_:c in 2419) [ClassicSimilarity], result of:
            0.03225856 = score(doc=2419,freq=2.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.22866541 = fieldWeight in 2419, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.046875 = fieldNorm(doc=2419)
          0.033246543 = weight(_text_:22 in 2419) [ClassicSimilarity], result of:
            0.033246543 = score(doc=2419,freq=2.0), product of:
              0.14321722 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.040897828 = queryNorm
              0.23214069 = fieldWeight in 2419, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2419)
      0.5 = coord(1/2)
    
    Date
    16.11.2008 16:22:48
  6. Biskri, I.; Rompré, L.: Using association rules for query reformulation (2012) 0.03
    0.029410074 = sum of:
      0.013280794 = product of:
        0.079684764 = sum of:
          0.079684764 = weight(_text_:authors in 92) [ClassicSimilarity], result of:
            0.079684764 = score(doc=92,freq=4.0), product of:
              0.1864456 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.040897828 = queryNorm
              0.42738882 = fieldWeight in 92, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=92)
        0.16666667 = coord(1/6)
      0.01612928 = product of:
        0.03225856 = sum of:
          0.03225856 = weight(_text_:c in 92) [ClassicSimilarity], result of:
            0.03225856 = score(doc=92,freq=2.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.22866541 = fieldWeight in 92, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.046875 = fieldNorm(doc=92)
        0.5 = coord(1/2)
    
    Abstract
    In this paper the authors will present research on the combination of two methods of data mining: text classification and maximal association rules. Text classification has been the focus of interest of many researchers for a long time. However, the results take the form of lists of words (classes) that people often do not know what to do with. The use of maximal association rules induced a number of advantages: (1) the detection of dependencies and correlations between the relevant units of information (words) of different classes, (2) the extraction of hidden knowledge, often relevant, from a large volume of data. The authors will show how this combination can improve the process of information retrieval.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, et al.
  7. Habernal, I.; Konopík, M.; Rohlík, O.: Question answering (2012) 0.03
    0.025520219 = sum of:
      0.009390939 = product of:
        0.056345634 = sum of:
          0.056345634 = weight(_text_:authors in 101) [ClassicSimilarity], result of:
            0.056345634 = score(doc=101,freq=2.0), product of:
              0.1864456 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.040897828 = queryNorm
              0.30220953 = fieldWeight in 101, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.16666667 = coord(1/6)
      0.01612928 = product of:
        0.03225856 = sum of:
          0.03225856 = weight(_text_:c in 101) [ClassicSimilarity], result of:
            0.03225856 = score(doc=101,freq=2.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.22866541 = fieldWeight in 101, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.5 = coord(1/2)
    
    Abstract
    Question Answering is an area of information retrieval with the added challenge of applying sophisticated techniques to identify the complex syntactic and semantic relationships present in text in order to provide a more sophisticated and satisfactory response to the user's information needs. For this reason, the authors see question answering as the next step beyond standard information retrieval. In this chapter state of the art question answering is covered focusing on providing an overview of systems, techniques and approaches that are likely to be employed in the next generations of search engines. Special attention is paid to question answering using the World Wide Web as the data source and to question answering exploiting the possibilities of Semantic Web. Considerations about the current issues and prospects for promising future research are also provided.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis, et al.
  8. Soulier, L.; Jabeur, L.B.; Tamine, L.; Bahsoun, W.: On ranking relevant entities in heterogeneous networks using a language-based model (2013) 0.02
    0.024920058 = sum of:
      0.011067329 = product of:
        0.06640397 = sum of:
          0.06640397 = weight(_text_:authors in 664) [ClassicSimilarity], result of:
            0.06640397 = score(doc=664,freq=4.0), product of:
              0.1864456 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.040897828 = queryNorm
              0.35615736 = fieldWeight in 664, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=664)
        0.16666667 = coord(1/6)
      0.013852728 = product of:
        0.027705455 = sum of:
          0.027705455 = weight(_text_:22 in 664) [ClassicSimilarity], result of:
            0.027705455 = score(doc=664,freq=2.0), product of:
              0.14321722 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.040897828 = queryNorm
              0.19345059 = fieldWeight in 664, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=664)
        0.5 = coord(1/2)
    
    Abstract
    A new challenge, accessing multiple relevant entities, arises from the availability of linked heterogeneous data. In this article, we address more specifically the problem of accessing relevant entities, such as publications and authors within a bibliographic network, given an information need. We propose a novel algorithm, called BibRank, that estimates a joint relevance of documents and authors within a bibliographic network. This model ranks each type of entity using a score propagation algorithm with respect to the query topic and the structure of the underlying bi-type information entity network. Evidence sources, namely content-based and network-based scores, are both used to estimate the topical similarity between connected entities. For this purpose, authorship relationships are analyzed through a language model-based score on the one hand and on the other hand, non topically related entities of the same type are detected through marginal citations. The article reports the results of experiments using the Bibrank algorithm for an information retrieval task. The CiteSeerX bibliographic data set forms the basis for the topical query automatic generation and evaluation. We show that a statistically significant improvement over closely related ranking models is achieved.
    Date
    22. 3.2013 19:34:49
  9. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.02
    0.022164363 = product of:
      0.044328727 = sum of:
        0.044328727 = product of:
          0.08865745 = sum of:
            0.08865745 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.08865745 = score(doc=402,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  10. Salton, G.; Buckley, C.: Parallel text search methods (1988) 0.02
    0.021505704 = product of:
      0.04301141 = sum of:
        0.04301141 = product of:
          0.08602282 = sum of:
            0.08602282 = weight(_text_:c in 404) [ClassicSimilarity], result of:
              0.08602282 = score(doc=404,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.6097744 = fieldWeight in 404, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.125 = fieldNorm(doc=404)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Dannenberg, R.B.; Birmingham, W.P.; Pardo, B.; Hu, N.; Meek, C.; Tzanetakis, G.: A comparative evaluation of search techniques for query-by-humming using the MUSART testbed (2007) 0.02
    0.021266848 = sum of:
      0.0078257825 = product of:
        0.046954695 = sum of:
          0.046954695 = weight(_text_:authors in 269) [ClassicSimilarity], result of:
            0.046954695 = score(doc=269,freq=2.0), product of:
              0.1864456 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.040897828 = queryNorm
              0.25184128 = fieldWeight in 269, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=269)
        0.16666667 = coord(1/6)
      0.013441064 = product of:
        0.026882129 = sum of:
          0.026882129 = weight(_text_:c in 269) [ClassicSimilarity], result of:
            0.026882129 = score(doc=269,freq=2.0), product of:
              0.14107318 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.040897828 = queryNorm
              0.1905545 = fieldWeight in 269, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0390625 = fieldNorm(doc=269)
        0.5 = coord(1/2)
    
    Abstract
    Query-by-humming systems offer content-based searching for melodies and require no special musical training or knowledge. Many such systems have been built, but there has not been much useful evaluation and comparison in the literature due to the lack of shared databases and queries. The MUSART project testbed allows various search algorithms to be compared using a shared framework that automatically runs experiments and summarizes results. Using this testbed, the authors compared algorithms based on string alignment, melodic contour matching, a hidden Markov model, n-grams, and CubyHum. Retrieval performance is very sensitive to distance functions and the representation of pitch and rhythm, which raises questions about some previously published conclusions. Some algorithms are particularly sensitive to the quality of queries. Our queries, which are taken from human subjects in a realistic setting, are quite difficult, especially for n-gram models. Finally, simulations on query-by-humming performance as a function of database size indicate that retrieval performance falls only slowly as the database size increases.
  12. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.02
    0.019393818 = product of:
      0.038787637 = sum of:
        0.038787637 = product of:
          0.077575274 = sum of:
            0.077575274 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.077575274 = score(doc=2134,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 3.2001 13:32:22
  13. Back, J.: An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.02
    0.019393818 = product of:
      0.038787637 = sum of:
        0.038787637 = product of:
          0.077575274 = sum of:
            0.077575274 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.077575274 = score(doc=3445,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    25. 8.2005 17:42:22
  14. Daniłowicz, C.; Baliński, J.: Document ranking based upon Markov chains (2001) 0.02
    0.01881749 = product of:
      0.03763498 = sum of:
        0.03763498 = product of:
          0.07526996 = sum of:
            0.07526996 = weight(_text_:c in 5388) [ClassicSimilarity], result of:
              0.07526996 = score(doc=5388,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.5335526 = fieldWeight in 5388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5388)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.02
    0.016623272 = product of:
      0.033246543 = sum of:
        0.033246543 = product of:
          0.06649309 = sum of:
            0.06649309 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.06649309 = score(doc=58,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:44
  16. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.02
    0.016623272 = product of:
      0.033246543 = sum of:
        0.033246543 = product of:
          0.06649309 = sum of:
            0.06649309 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.06649309 = score(doc=2051,freq=2.0), product of:
                0.14321722 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040897828 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:56
  17. Belkin, N.J.; Cool, C.; Koenemann, J.; Ng, K.B.; Park, S.: Using relevance feedback and ranking in interactive searching (1996) 0.02
    0.01612928 = product of:
      0.03225856 = sum of:
        0.03225856 = product of:
          0.06451712 = sum of:
            0.06451712 = weight(_text_:c in 7588) [ClassicSimilarity], result of:
              0.06451712 = score(doc=7588,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.45733082 = fieldWeight in 7588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7588)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Salton, G.; Buckley, C.: Term-weighting approaches in automatic text retrieval (1988) 0.02
    0.01612928 = product of:
      0.03225856 = sum of:
        0.03225856 = product of:
          0.06451712 = sum of:
            0.06451712 = weight(_text_:c in 1938) [ClassicSimilarity], result of:
              0.06451712 = score(doc=1938,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.45733082 = fieldWeight in 1938, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1938)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Cole, C.: Intelligent information retrieval: diagnosing information need : Part I: the theoretical framework for developing an intelligent IR tool (1998) 0.02
    0.01612928 = product of:
      0.03225856 = sum of:
        0.03225856 = product of:
          0.06451712 = sum of:
            0.06451712 = weight(_text_:c in 6431) [ClassicSimilarity], result of:
              0.06451712 = score(doc=6431,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.45733082 = fieldWeight in 6431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6431)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Cole, C.: Intelligent information retrieval: diagnosing information need : Part II: uncertainty expansion in a prototype of a diagnostic IR tool (1998) 0.02
    0.01612928 = product of:
      0.03225856 = sum of:
        0.03225856 = product of:
          0.06451712 = sum of:
            0.06451712 = weight(_text_:c in 6432) [ClassicSimilarity], result of:
              0.06451712 = score(doc=6432,freq=2.0), product of:
                0.14107318 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.040897828 = queryNorm
                0.45733082 = fieldWeight in 6432, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6432)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

Languages

  • e 74
  • d 7
  • m 1

Types

  • a 74
  • m 6
  • s 3
  • el 2
  • r 1