Search (356 results, page 1 of 18)

  • theme_ss:"Retrievalalgorithmen"
  1. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.06
    0.063420855 = product of:
      0.12684171 = sum of:
        0.12684171 = sum of:
          0.054061607 = weight(_text_:j in 3445) [ClassicSimilarity], result of:
            0.054061607 = score(doc=3445,freq=2.0), product of:
              0.109994456 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.034616705 = queryNorm
              0.4914939 = fieldWeight in 3445, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.109375 = fieldNorm(doc=3445)
          0.007118898 = weight(_text_:a in 3445) [ClassicSimilarity], result of:
            0.007118898 = score(doc=3445,freq=2.0), product of:
              0.039914686 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.034616705 = queryNorm
              0.17835285 = fieldWeight in 3445, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.109375 = fieldNorm(doc=3445)
          0.0656612 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
            0.0656612 = score(doc=3445,freq=2.0), product of:
              0.1212218 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.034616705 = queryNorm
              0.5416616 = fieldWeight in 3445, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=3445)
      0.5 = coord(1/2)
    
    Date
    25. 8.2005 17:42:22
    Type
    a
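The indented score breakdowns in this listing are Lucene ClassicSimilarity explain trees: for each matching term, fieldWeight = tf × idf × fieldNorm and queryWeight = idf × queryNorm, their product is summed over the matching terms, and the sum is scaled by a coord factor. As a sketch (queryNorm is taken as given, since it depends on the full query), the `_text_:j` leg of the entry above can be reproduced:

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf, matching the explain output:
    # idf = ln(maxDocs / (docFreq + 1)) + 1
    return math.log(max_docs / (doc_freq + 1)) + 1.0

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)  # tf(freq) = sqrt(freq)
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight  # score = queryWeight * fieldWeight

# The _text_:j leg of entry 1 (doc 3445): freq=2.0, docFreq=5010,
# maxDocs=44218, queryNorm=0.034616705, fieldNorm=0.109375
j_score = term_score(2.0, 5010, 44218, 0.034616705, 0.109375)
```

Plugging in the values shown recovers the 0.054061607 leg; the three term legs sum to 0.12684171, and the final 0.063420855 is that sum times coord(1/2), the fraction of query clauses matched.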
  2. Furner, J.: ¬A unifying model of document relatedness for hybrid search engines (2003) 0.03
    0.028705843 = product of:
      0.057411686 = sum of:
        0.057411686 = sum of:
          0.02316926 = weight(_text_:j in 2717) [ClassicSimilarity], result of:
            0.02316926 = score(doc=2717,freq=2.0), product of:
              0.109994456 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.034616705 = queryNorm
              0.21064025 = fieldWeight in 2717, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.046875 = fieldNorm(doc=2717)
          0.006101913 = weight(_text_:a in 2717) [ClassicSimilarity], result of:
            0.006101913 = score(doc=2717,freq=8.0), product of:
              0.039914686 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.034616705 = queryNorm
              0.15287387 = fieldWeight in 2717, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=2717)
          0.028140513 = weight(_text_:22 in 2717) [ClassicSimilarity], result of:
            0.028140513 = score(doc=2717,freq=2.0), product of:
              0.1212218 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.034616705 = queryNorm
              0.23214069 = fieldWeight in 2717, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2717)
      0.5 = coord(1/2)
    
    Abstract
    Previous work on search-engine design has indicated that information-seekers may benefit from being given the opportunity to exploit multiple sources of evidence of document relatedness. Few existing systems, however, give users more than minimal control over the selections that may be made among methods of exploitation. By applying the methods of "document network analysis" (DNA), a unifying, graph-theoretic model of content-, collaboration-, and context-based systems (CCC) may be developed in which the nature of the similarities between types of document relatedness and document ranking is clarified. The usefulness of the approach to system design suggested by this model may be tested by constructing and evaluating a prototype system (UCXtra) that allows searchers to maintain control over the multiple ways in which document collections may be ranked and re-ranked.
    Date
    11. 9.2004 17:32:22
    Type
    a
  3. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.03
    0.027725752 = product of:
      0.055451505 = sum of:
        0.055451505 = product of:
          0.08317725 = sum of:
            0.008135883 = weight(_text_:a in 402) [ClassicSimilarity], result of:
              0.008135883 = score(doc=402,freq=2.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.20383182 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
            0.07504137 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.07504137 = score(doc=402,freq=2.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
    Type
    a
  4. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.03
    0.025242947 = product of:
      0.050485894 = sum of:
        0.050485894 = product of:
          0.07572884 = sum of:
            0.010067643 = weight(_text_:a in 2134) [ClassicSimilarity], result of:
              0.010067643 = score(doc=2134,freq=4.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.25222903 = fieldWeight in 2134, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
            0.0656612 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.0656612 = score(doc=2134,freq=2.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    30. 3.2001 13:32:22
    Type
    a
  5. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.02
    0.020794313 = product of:
      0.041588627 = sum of:
        0.041588627 = product of:
          0.06238294 = sum of:
            0.006101913 = weight(_text_:a in 58) [ClassicSimilarity], result of:
              0.006101913 = score(doc=58,freq=2.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.15287387 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
            0.056281026 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.056281026 = score(doc=58,freq=2.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:44
    Type
    a
  6. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.02
    0.020794313 = product of:
      0.041588627 = sum of:
        0.041588627 = product of:
          0.06238294 = sum of:
            0.006101913 = weight(_text_:a in 2051) [ClassicSimilarity], result of:
              0.006101913 = score(doc=2051,freq=2.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.15287387 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
            0.056281026 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.056281026 = score(doc=2051,freq=2.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:56
    Type
    a
  7. Daniłowicz, C.; Baliński, J.: Document ranking based upon Markov chains (2001) 0.02
    0.020393502 = product of:
      0.040787004 = sum of:
        0.040787004 = product of:
          0.061180506 = sum of:
            0.054061607 = weight(_text_:j in 5388) [ClassicSimilarity], result of:
              0.054061607 = score(doc=5388,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.4914939 = fieldWeight in 5388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5388)
            0.007118898 = weight(_text_:a in 5388) [ClassicSimilarity], result of:
              0.007118898 = score(doc=5388,freq=2.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.17835285 = fieldWeight in 5388, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5388)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Type
    a
  8. Savoy, J.; Ndarugendamwo, M.; Vrajitoru, D.: Report on the TREC-4 experiment : combining probabilistic and vector-space schemes (1996) 0.02
    0.017480146 = product of:
      0.034960292 = sum of:
        0.034960292 = product of:
          0.052440435 = sum of:
            0.04633852 = weight(_text_:j in 7574) [ClassicSimilarity], result of:
              0.04633852 = score(doc=7574,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.4212805 = fieldWeight in 7574, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7574)
            0.006101913 = weight(_text_:a in 7574) [ClassicSimilarity], result of:
              0.006101913 = score(doc=7574,freq=2.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.15287387 = fieldWeight in 7574, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7574)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Type
    a
  9. Belkin, N.J.; Cool, C.; Koenemann, J.; Ng, K.B.; Park, S.: Using relevance feedback and ranking in interactive searching (1996) 0.02
    0.017480146 = product of:
      0.034960292 = sum of:
        0.034960292 = product of:
          0.052440435 = sum of:
            0.04633852 = weight(_text_:j in 7588) [ClassicSimilarity], result of:
              0.04633852 = score(doc=7588,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.4212805 = fieldWeight in 7588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7588)
            0.006101913 = weight(_text_:a in 7588) [ClassicSimilarity], result of:
              0.006101913 = score(doc=7588,freq=2.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.15287387 = fieldWeight in 7588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7588)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Type
    a
  10. Losada, D.E.; Barreiro, A.: Embedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.02
    0.01553896 = product of:
      0.03107792 = sum of:
        0.03107792 = product of:
          0.04661688 = sum of:
            0.009096195 = weight(_text_:a in 1422) [ClassicSimilarity], result of:
              0.009096195 = score(doc=1422,freq=10.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.22789092 = fieldWeight in 1422, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1422)
            0.037520684 = weight(_text_:22 in 1422) [ClassicSimilarity], result of:
              0.037520684 = score(doc=1422,freq=2.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.30952093 = fieldWeight in 1422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1422)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    We propose a novel approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. The ability of the logic to handle expressive representations, along with the use of such classical notions, is a promising characteristic for IR systems. The approach proposed here has been efficiently implemented, and experiments against test collections are presented.
    Date
    22. 3.2003 19:27:23
    Type
    a
  11. Karlsson, A.; Hammarfelt, B.; Steinhauer, H.J.; Falkman, G.; Olson, N.; Nelhans, G.; Nolin, J.: Modeling uncertainty in bibliometrics and information retrieval : an information fusion approach (2015) 0.02
    0.015268869 = product of:
      0.030537738 = sum of:
        0.030537738 = product of:
          0.045806605 = sum of:
            0.03861543 = weight(_text_:j in 1696) [ClassicSimilarity], result of:
              0.03861543 = score(doc=1696,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.35106707 = fieldWeight in 1696, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1696)
            0.0071911733 = weight(_text_:a in 1696) [ClassicSimilarity], result of:
              0.0071911733 = score(doc=1696,freq=4.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.18016359 = fieldWeight in 1696, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1696)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Type
    a
  12. Faloutsos, C.: Signature files (1992) 0.02
    0.015218857 = product of:
      0.030437713 = sum of:
        0.030437713 = product of:
          0.04565657 = sum of:
            0.008135883 = weight(_text_:a in 3499) [ClassicSimilarity], result of:
              0.008135883 = score(doc=3499,freq=8.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.20383182 = fieldWeight in 3499, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3499)
            0.037520684 = weight(_text_:22 in 3499) [ClassicSimilarity], result of:
              0.037520684 = score(doc=3499,freq=2.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.30952093 = fieldWeight in 3499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3499)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Presents a survey and discussion of signature-based text retrieval methods. It describes the main idea behind the signature approach and its advantages over other text retrieval methods; provides a classification of the signature methods that have appeared in the literature; describes the main representatives of each class, together with their relative advantages and drawbacks; and gives a list of applications as well as commercial and university prototypes that use the signature approach.
    Date
    7. 5.1999 15:22:48
    Type
    a
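The signature approach summarized in entry 12 can be sketched with superimposed coding: each word sets a few pseudo-random bits in a fixed-width bit string, and a document's signature is the OR of its word signatures. This is a minimal illustration; the bit width, hash count, and helper names are arbitrary assumptions, not taken from the entry.

```python
import hashlib

def signature(words, bits=64, k=3):
    # Superimposed coding: each word sets k pseudo-random bits;
    # the document signature ORs all word signatures together.
    sig = 0
    for word in words:
        for i in range(k):
            digest = hashlib.md5(f"{word}:{i}".encode()).hexdigest()
            sig |= 1 << (int(digest, 16) % bits)
    return sig

def may_contain(doc_sig, query_sig):
    # Signature test: the document qualifies only if its signature
    # covers every query bit. False positives are possible
    # ("false drops"); false negatives are not.
    return doc_sig & query_sig == query_sig

doc_sig = signature(["signature", "text", "retrieval", "methods"])
```

A query over terms actually in the document always passes the filter; non-matching documents are usually, but not always, screened out, so surviving candidates still need a full-text check to remove false drops.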
  13. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.02
    0.015218857 = product of:
      0.030437713 = sum of:
        0.030437713 = product of:
          0.04565657 = sum of:
            0.008135883 = weight(_text_:a in 1431) [ClassicSimilarity], result of:
              0.008135883 = score(doc=1431,freq=8.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.20383182 = fieldWeight in 1431, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
            0.037520684 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.037520684 = score(doc=1431,freq=2.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Properties of a percentile-based rating scale needed in bibliometrics are formulated. Based on these properties, P100 was recently introduced as a new citation-rank approach (Bornmann, Leydesdorff, & Wang, 2013). In this paper, we conceptualize P100 and propose an improvement which we call P100'. Advantages and disadvantages of citation-rank indicators are noted.
    Date
    22. 8.2014 17:05:18
    Type
    a
  14. Bar-Ilan, J.; Levene, M.: ¬The hw-rank : an h-index variant for ranking web pages (2015) 0.01
    0.014566787 = product of:
      0.029133573 = sum of:
        0.029133573 = product of:
          0.04370036 = sum of:
            0.03861543 = weight(_text_:j in 1694) [ClassicSimilarity], result of:
              0.03861543 = score(doc=1694,freq=2.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.35106707 = fieldWeight in 1694, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1694)
            0.0050849267 = weight(_text_:a in 1694) [ClassicSimilarity], result of:
              0.0050849267 = score(doc=1694,freq=2.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.12739488 = fieldWeight in 1694, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1694)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Type
    a
  15. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.01
    0.014424542 = product of:
      0.028849084 = sum of:
        0.028849084 = product of:
          0.043273624 = sum of:
            0.0057529383 = weight(_text_:a in 5108) [ClassicSimilarity], result of:
              0.0057529383 = score(doc=5108,freq=4.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.14413087 = fieldWeight in 5108, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
            0.037520684 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.037520684 = score(doc=5108,freq=2.0), product of:
                0.1212218 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034616705 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    20. 1.2007 18:30:22
    Type
    a
  16. Na, S.-H.; Kang, I.-S.; Roh, J.-E.; Lee, J.-H.: ¬An empirical study of query expansion and cluster-based retrieval in language modeling approach (2007) 0.01
    0.014420383 = product of:
      0.028840765 = sum of:
        0.028840765 = product of:
          0.043261148 = sum of:
            0.038227327 = weight(_text_:j in 906) [ClassicSimilarity], result of:
              0.038227327 = score(doc=906,freq=4.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.34753868 = fieldWeight in 906, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=906)
            0.0050338213 = weight(_text_:a in 906) [ClassicSimilarity], result of:
              0.0050338213 = score(doc=906,freq=4.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.12611452 = fieldWeight in 906, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=906)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    The term mismatch problem in information retrieval is a critical problem, and several techniques have been developed, such as query expansion, cluster-based retrieval and dimensionality reduction, to resolve this issue. Of these techniques, this paper performs an empirical study on query expansion and cluster-based retrieval. We examine the effect of using parsimony in query expansion and the effect of clustering algorithms in cluster-based retrieval. In addition, query expansion and cluster-based retrieval are compared, and their combinations are evaluated in terms of retrieval performance by performing experiments on seven test collections of NTCIR and TREC.
    Type
    a
  17. Lee, J.-T.; Seo, J.; Jeon, J.; Rim, H.-C.: Sentence-based relevance flow analysis for high accuracy retrieval (2011) 0.01
    0.0140831005 = product of:
      0.028166201 = sum of:
        0.028166201 = product of:
          0.0422493 = sum of:
            0.033441946 = weight(_text_:j in 4746) [ClassicSimilarity], result of:
              0.033441946 = score(doc=4746,freq=6.0), product of:
                0.109994456 = queryWeight, product of:
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.034616705 = queryNorm
                0.304033 = fieldWeight in 4746, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1774964 = idf(docFreq=5010, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4746)
            0.008807353 = weight(_text_:a in 4746) [ClassicSimilarity], result of:
              0.008807353 = score(doc=4746,freq=24.0), product of:
                0.039914686 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.034616705 = queryNorm
                0.22065444 = fieldWeight in 4746, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4746)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
    Traditional ranking models for information retrieval lack the ability to make a clear distinction between relevant and nonrelevant documents at top ranks if both have similar bag-of-words representations with regard to a user query. We aim to go beyond the bag-of-words approach to document ranking in a new perspective, by representing each document as a sequence of sentences. We begin with an assumption that relevant documents are distinguishable from nonrelevant ones by sequential patterns of relevance degrees of sentences to a query. We introduce the notion of relevance flow, which refers to a stream of sentence-query relevance within a document. We then present a framework to learn a function for ranking documents effectively based on various features extracted from their relevance flows and leverage the output to enhance existing retrieval models. We validate the effectiveness of our approach by performing a number of retrieval experiments on three standard test collections, each comprising a different type of document: news articles, medical references, and blog posts. Experimental results demonstrate that the proposed approach can improve the retrieval performance at the top ranks significantly as compared with the state-of-the-art retrieval models regardless of document type.
    Type
    a
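The "relevance flow" of entry 17 treats a document as a sequence of sentences and reads off a stream of per-sentence relevance degrees to the query. A toy stand-in (simple term overlap instead of the paper's learned relevance function, which is an assumption of this sketch):

```python
def relevance_flow(sentences, query_terms):
    # Per-sentence overlap with the query, in document order: a crude
    # proxy for the stream of sentence-query relevance degrees.
    q = set(query_terms)
    flow = []
    for sentence in sentences:
        words = set(sentence.lower().split())
        flow.append(len(words & q) / max(len(q), 1))
    return flow

flow = relevance_flow(
    ["Ranking models for retrieval", "The weather is nice"],
    ["ranking", "retrieval"],
)
```

Features extracted from such a flow (e.g. where in the document the relevant sentences cluster) can then feed a learned re-ranking function, as the abstract describes.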
  18. Rada, R.; Barlow, J.; Potharst, J.; Zanstra, P.; Bijstra, D.: Document ranking using an enriched thesaurus (1991) 0.01
    Abstract
    A thesaurus may be viewed as a graph, and document retrieval algorithms can exploit this graph when both the documents and the query are represented by thesaurus terms. These retrieval algorithms measure the distance between the query and documents by using the path lengths in the graph. Previous work with such strategies has shown that the hierarchical relations in the thesaurus are useful but the non-hierarchical ones are not. This paper shows that when the query explicitly mentions a particular non-hierarchical relation, the retrieval algorithm benefits from the presence of such relations in the thesaurus. Our algorithms were applied to the Excerpta Medica bibliographic citation database, whose citations are indexed with terms from the EMTREE thesaurus. We also created an enriched EMTREE by systematically adding non-hierarchical relations from a medical knowledge base. Our algorithms used EMTREE at one time and the enriched EMTREE at another in the course of ranking documents from Excerpta Medica against queries. When, and only when, the query specifically mentioned a particular non-hierarchical relation type did EMTREE enriched with that relation type lead to a ranking that better corresponded to an expert's ranking.
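    The path-length distance this abstract describes can be sketched as a breadth-first search over a thesaurus graph. This is a minimal sketch assuming an undirected adjacency-dict representation; EMTREE itself, relation typing, and the paper's exact aggregation are not modeled, and `query_doc_distance` is one plausible (hypothetical) way to combine term distances.

    ```python
    from collections import deque

    def path_length(graph, start, goal):
        # Shortest number of thesaurus edges between two terms (BFS).
        if start == goal:
            return 0
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            term, d = frontier.popleft()
            for nxt in graph.get(term, ()):
                if nxt == goal:
                    return d + 1
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
        return float("inf")  # terms lie in disconnected components

    def query_doc_distance(graph, query_terms, doc_terms):
        # For each query term, take the closest document term, then average.
        return sum(
            min(path_length(graph, q, t) for t in doc_terms) for q in query_terms
        ) / len(query_terms)
    ```

    Documents would then be ranked by increasing distance to the query; the paper's contribution concerns which relation types (edges) should be present in the graph.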
    Type
    a
  19. Jiang, J.-D.; Jiang, J.-Y.; Cheng, P.-J.: Cocluster hypothesis and ranking consistency for relevance ranking in web search (2019) 0.01
    Abstract
    Conventional approaches to relevance ranking typically optimize ranking models for each query separately. The traditional cluster hypothesis also does not consider the dependency between related queries. The goal of this paper is to leverage similar search intents to perform ranking consistency so that the search performance can be improved accordingly. Different from the previous supervised approach, which learns relevance from click-through data, we propose a novel cocluster hypothesis to bridge the gap between relevance ranking and ranking consistency. A nearest-neighbors test is also designed to measure the extent to which the cocluster hypothesis holds. Based on the hypothesis, we further propose a two-stage unsupervised approach, in which two ranking heuristics and a cost function are developed to optimize the combination of consistency and uniqueness (or inconsistency). Extensive experiments have been conducted on a real and large-scale search engine log. The experimental results not only verify the applicability of the proposed cocluster hypothesis but also show that our approach is effective in boosting the retrieval performance of the commercial search engine and reaches performance comparable to the supervised approach.
    Type
    a
  20. Hubert, G.; Mothe, J.: ¬An adaptable search engine for multimodal information retrieval (2009) 0.01
    Abstract
    This article describes an information retrieval approach that accommodates the two different search modes that exist: browsing an ontology (via categories) or defining a query in free language (via keywords). Various proposals offer approaches adapted to one of these two modes. We present a proposal leading to a system that integrates both modes using the same search engine, which is adapted according to the search mode in use.
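    The dual-mode dispatch described here can be sketched as a single engine that resolves either a category or free keywords to index terms before matching. This is a toy illustration under assumed data structures (an inverted index dict and a category-to-terms ontology dict); all names are hypothetical and the actual system's ranking is not reproduced.

    ```python
    def search(index, ontology, request):
        # index: term -> set of doc ids; ontology: category -> list of terms.
        if request.get("category"):
            # Browsing mode: expand the ontology category into its terms.
            terms = ontology.get(request["category"], [])
        else:
            # Keyword mode: tokenize the free-language query.
            terms = request.get("keywords", "").lower().split()
        hits = set()
        for term in terms:
            hits |= index.get(term, set())
        return sorted(hits)
    ```

    Both modes thus reach the same matching core, which is the integration point the article argues for.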
    Type
    a

Types

  • a 337
  • el 8
  • m 7
  • s 3
  • p 2
  • r 2
  • x 2