Search (7 results, page 1 of 1)

  • theme_ss:"Retrievalalgorithmen"
  • year_i:[1980 TO 1990}
  1. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.012742912 = product of:
      0.05097165 = sum of:
        0.05097165 = product of:
          0.1019433 = sum of:
            0.1019433 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.1019433 = score(doc=402,freq=2.0), product of:
                0.16467917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04702661 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
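The score breakdown above follows Lucene's ClassicSimilarity (TF-IDF) formula. A minimal sketch reproducing the numbers for result 1 (term "22" in doc 402); the tf and idf formulas are the standard ClassicSimilarity ones, while queryNorm and fieldNorm are copied from the explanation rather than recomputed:

```python
import math

# ClassicSimilarity building blocks (standard Lucene formulas)
def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    return math.sqrt(freq)

# Constants copied from the explanation for result 1 (doc 402)
query_norm = 0.04702661   # normalization over the whole query
field_norm = 0.125        # length norm stored for this field/doc

term_idf = idf(3622, 44218)                     # -> 3.5018296
query_weight = term_idf * query_norm            # -> 0.16467917
field_weight = tf(2.0) * term_idf * field_norm  # -> 0.61904186
raw_score = query_weight * field_weight         # -> 0.1019433
# coord(1/2) and coord(1/4): fraction of query clauses that matched
final_score = raw_score * 0.5 * 0.25            # -> 0.012742912
```

The same arithmetic with a different fieldNorm (0.109375 and 0.09375) yields the scores of results 2 to 4.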
  2. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    0.011150048 = product of:
      0.044600192 = sum of:
        0.044600192 = product of:
          0.089200385 = sum of:
            0.089200385 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.089200385 = score(doc=2134,freq=2.0), product of:
                0.16467917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04702661 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 3.2001 13:32:22
  3. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.01
    0.009557184 = product of:
      0.038228735 = sum of:
        0.038228735 = product of:
          0.07645747 = sum of:
            0.07645747 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.07645747 = score(doc=58,freq=2.0), product of:
                0.16467917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04702661 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:44
  4. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.01
    0.009557184 = product of:
      0.038228735 = sum of:
        0.038228735 = product of:
          0.07645747 = sum of:
            0.07645747 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.07645747 = score(doc=2051,freq=2.0), product of:
                0.16467917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04702661 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:56
  5. Ro, J.S.: An evaluation of the applicability of ranking algorithms to improve the effectiveness of full-text retrieval : 1. On the effectiveness of full-text retrieval (1988) 0.01
    0.005152073 = product of:
      0.020608293 = sum of:
        0.020608293 = weight(_text_:to in 4030) [ClassicSimilarity], result of:
          0.020608293 = score(doc=4030,freq=2.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.24104178 = fieldWeight in 4030, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.09375 = fieldNorm(doc=4030)
      0.25 = coord(1/4)
    
  6. Deerwester, S.; Dumais, S.; Landauer, T.; Furnas, G.; Beck, L.: Improving information retrieval with latent semantic indexing (1988) 0.00
    0.0044618268 = product of:
      0.017847307 = sum of:
        0.017847307 = weight(_text_:to in 2396) [ClassicSimilarity], result of:
          0.017847307 = score(doc=2396,freq=6.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.20874833 = fieldWeight in 2396, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.046875 = fieldNorm(doc=2396)
      0.25 = coord(1/4)
    
    Abstract
    Describes a latent semantic indexing (LSI) approach for improving information retrieval. Most document retrieval systems depend on matching keywords in queries against those in documents. The LSI approach tries to overcome the incompleteness and imprecision of keyword matching by modelling latent relations among terms and documents. Tested performance of the LSI method ranged from considerably better than to roughly comparable to performance based on weighted keyword matching, apparently depending on the quality of the queries. Best LSI performance was found using a global entropy weighting for terms and about 100 dimensions for representing terms, documents and queries.
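A hypothetical toy illustration of the LSI idea the abstract describes: build a term-document matrix, truncate its SVD, and compare a query to documents in the reduced space. The three-document corpus, raw term counts, and k=2 are invented for illustration; the paper used entropy-weighted terms and about 100 dimensions.

```python
import numpy as np

# Toy corpus (assumption, not from the paper)
docs = [
    "human machine interface",
    "user interface system",
    "graph tree minors",
]
terms = sorted({t for d in docs for t in d.split()})
# Term-document matrix of raw counts (rows = terms, cols = docs)
A = np.array([[d.split().count(t) for d in docs] for t in terms], float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2  # keep only the top-k latent dimensions
# Document vectors in the k-dimensional latent space
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T

# Fold the query into the same space: q_hat = q U_k S_k^-1
query = "machine interface"
q = np.array([query.split().count(t) for t in terms], float)
q_hat = (q @ U[:, :k]) / s[:k]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = [cos(q_hat, dv) for dv in doc_vecs]
```

Note that the second document, which shares only "interface" with the query, scores as high as the first in the reduced space, while the unrelated third document scores near zero; that is the latent-relation effect the abstract describes.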
  7. Srinivasan, P.: Intelligent information retrieval using rough set approximations (1989) 0.00
    0.0030053763 = product of:
      0.012021505 = sum of:
        0.012021505 = weight(_text_:to in 2526) [ClassicSimilarity], result of:
          0.012021505 = score(doc=2526,freq=2.0), product of:
            0.08549677 = queryWeight, product of:
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.04702661 = queryNorm
            0.14060771 = fieldWeight in 2526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.818051 = idf(docFreq=19512, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2526)
      0.25 = coord(1/4)
    
    Abstract
    The theory of rough sets was introduced in 1982. It allows the classification of objects into sets of equivalent members based on their attributes. Any combination of the same objects (or even their attributes) may be examined using the resultant classification. The theory has direct applications in the design and evaluation of classification schemes and the selection of discriminating attributes. Introductory papers discuss its application in the domain of medical diagnostic systems and the design of information retrieval systems accessing collections of documents. Advantages offered by the theory are: the implicit inclusion of Boolean logic; term weighting; and the ability to rank retrieved documents.
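The equivalence-class machinery the abstract mentions can be sketched on assumed toy data (the documents, attributes, and relevance set below are invented for illustration, not taken from the paper):

```python
# Objects (documents) described by attribute tuples; objects with
# identical attributes are indiscernible and fall into one class.
objects = {
    "d1": ("retrieval", "ranked"),
    "d2": ("retrieval", "ranked"),
    "d3": ("retrieval", "boolean"),
    "d4": ("retrieval", "boolean"),
}

# Equivalence classes of indiscernible objects
classes = {}
for obj, attrs in objects.items():
    classes.setdefault(attrs, set()).add(obj)

target = {"d1", "d2", "d3"}  # e.g. documents judged relevant

# Lower approximation: classes certainly inside the target set
lower = set().union(*(c for c in classes.values() if c <= target))
# Upper approximation: classes that possibly belong to the target set
upper = set().union(*(c for c in classes.values() if c & target))
# The boundary (upper - lower) holds the objects the attributes cannot
# discriminate: d3 and d4 look identical, yet only d3 is in the target.
```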