Search (54 results, page 2 of 3)

  • Filter: language_ss:"e"
  • Filter: theme_ss:"Retrievalalgorithmen"
  1. Burgin, R.: ¬The retrieval effectiveness of 5 clustering algorithms as a function of indexing exhaustivity (1995) 0.01
    Date
    22. 2.1996 11:20:06
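
The relevance figure shown with each hit (0.01, 0.00) is a Lucene ClassicSimilarity TF-IDF score. As a minimal sketch, assuming Lucene's documented formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))), the constants from the first result's score explanation reproduce its listed value:

```python
import math

# Constants from the score explanation of result 1 (term "22", doc 3365).
freq, doc_freq, max_docs = 2.0, 3622, 44218
query_norm, field_norm = 0.045945734, 0.0390625

tf = math.sqrt(freq)                           # tf(freq=2.0) = 1.4142135
idf = 1 + math.log(max_docs / (doc_freq + 1))  # idf = 3.5018296

query_weight = idf * query_norm                # 0.16089413
field_weight = tf * idf * field_norm           # 0.19345059
score = query_weight * field_weight            # 0.031125063

# Two coord(1/2) factors halve the score twice, giving the listed total.
final = score * 0.5 * 0.5
print(final)                                   # ≈ 0.007781266
```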
  2. Efthimiadis, E.N.: User choices : a new yardstick for the evaluation of ranking algorithms for interactive query expansion (1995) 0.01
    Date
    22. 2.1996 13:14:10
  3. Dominich, S.: Mathematical foundations of information retrieval (2001) 0.01
    Date
    22. 3.2008 12:26:32
  4. Baloh, P.; Desouza, K.C.; Hackney, R.: Contextualizing organizational interventions of knowledge management systems : a design science perspective (2012) 0.01
    Date
    11. 6.2012 14:22:34
  5. Soulier, L.; Jabeur, L.B.; Tamine, L.; Bahsoun, W.: On ranking relevant entities in heterogeneous networks using a language-based model (2013) 0.01
    Date
    22. 3.2013 19:34:49
  6. Chen, Z.; Fu, B.: On the complexity of Rocchio's similarity-based relevance feedback algorithm (2007) 0.01
    Abstract
    Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformulation methods in information retrieval, is essentially an adaptive algorithm for learning from examples to search for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d**2(log d + log n)) over the discretized vector space {0, ..., n-1}**d when the inner product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier (q, 0) over {0, ..., n-1}**d can be improved to, at most, 1 + 2k(n-1)(log d + log(n-1)), where k is the number of nonzero components in q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound Omega((d choose 2) log n) on its learning complexity over the Boolean vector space {0,1}**d.
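
The update rule this abstract analyzes can be sketched as the classic Rocchio formula: the query vector is moved toward the centroid of relevant documents and away from the centroid of non-relevant ones. This toy version uses conventional alpha/beta/gamma weights, which are illustrative defaults, not values from the article:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio relevance-feedback update on plain list vectors."""
    def centroid(docs):
        n = len(docs)
        return [sum(col) / n for col in zip(*docs)] if n else [0.0] * len(query)
    rel_c = centroid(relevant)
    non_c = centroid(nonrelevant)
    # Move toward relevant centroid, away from non-relevant centroid.
    return [alpha * q + beta * r - gamma * s
            for q, r, s in zip(query, rel_c, non_c)]

q0 = [1.0, 0.0, 0.0]
rel = [[0.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
nonrel = [[1.0, 0.0, 1.0]]
print(rocchio(q0, rel, nonrel))  # → approximately [0.85, 0.75, 0.225]
```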
  7. Zhang, D.; Dong, Y.: ¬An effective algorithm to rank Web resources (2000) 0.01
  8. Savoy, J.; Ndarugendamwo, M.; Vrajitoru, D.: Report on the TREC-4 experiment : combining probabilistic and vector-space schemes (1996) 0.01
  9. White, H. D.: Co-cited author retrieval and relevance theory : examples from the humanities (2015) 0.01
  10. Khoo, C.S.G.; Wan, K.-W.: ¬A simple relevancy-ranking strategy for an interface to Boolean OPACs (2004) 0.01
    Source
    Electronic library. 22(2004) no.2, S.112-120
  11. Information retrieval : data structures and algorithms (1992) 0.00
    Content
    An edited volume of data structures and algorithms for information retrieval, including a disk with examples written in C. Aimed at programmers and students interested in parsing text and automated indexing, it is the first collection in book form of the basic data structures and algorithms that are critical to the storage and retrieval of documents. Contains the chapters: FRAKES, W.B.: Introduction to information storage and retrieval systems; BAEZA-YATES, R.S.: Introduction to data structures and algorithms related to information retrieval; HARMAN, D. et al.: Inverted files; FALOUTSOS, C.: Signature files; GONNET, G.H. et al.: New indices for text: PAT trees and PAT arrays; FORD, D.A. and S. CHRISTODOULAKIS: File organizations for optical disks; FOX, C.: Lexical analysis and stoplists; FRAKES, W.B.: Stemming algorithms; SRINIVASAN, P.: Thesaurus construction; BAEZA-YATES, R.A.: String searching algorithms; HARMAN, D.: Relevance feedback and other query modification techniques; WARTIK, S.: Boolean operators; WARTIK, S. et al.: Hashing algorithms; HARMAN, D.: Ranking algorithms; FOX, E. et al.: Extended Boolean models; RASMUSSEN, E.: Clustering algorithms; HOLLAAR, L.: Special-purpose hardware for information retrieval; STANFILL, C.: Parallel information retrieval algorithms
  12. Dang, E.K.F.; Luk, R.W.P.; Allan, J.; Ho, K.S.; Chung, K.F.L.; Lee, D.L.: ¬A new context-dependent term weight computed by boost and discount using relevance information (2010) 0.00
    Abstract
    We studied the effectiveness of a new class of context-dependent term weights for information retrieval. Unlike the traditional term frequency-inverse document frequency (TF-IDF), the new weighting of a term t in a document d depends not only on the occurrence statistics of t alone but also on the terms found within a text window (or "document-context") centered on t. We introduce a Boost and Discount (B&D) procedure which utilizes partial relevance information to compute the context-dependent term weights of query terms according to a logistic regression model. We investigate the effectiveness of the new term weights compared with the context-independent BM25 weights in the setting of relevance feedback. We performed experiments with title queries of the TREC-6, -7, -8, and 2005 collections, comparing the residual Mean Average Precision (MAP) measures obtained using B&D term weights and those obtained by a baseline using BM25 weights. Given either 10 or 20 relevance judgments of the top retrieved documents, using the new term weights yields improvement over the baseline for all collections tested. The MAP obtained with the new weights has relative improvement over the baseline by 3.3 to 15.2%, with statistical significance at the 95% confidence level across all four collections.
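
As a toy illustration only (not the authors' B&D procedure or their logistic-regression model), the core idea of a context-dependent term weight, namely that a term's weight rises when document-context terms appear near it, can be sketched as:

```python
def window_weight(doc_tokens, term, context_terms, window=5, bonus=0.5):
    """Count occurrences of `term`, boosting each occurrence by `bonus`
    for every context term found within +/- `window` positions."""
    weight = 0.0
    for i, tok in enumerate(doc_tokens):
        if tok != term:
            continue
        lo, hi = max(0, i - window), i + window + 1
        nearby = sum(1 for t in doc_tokens[lo:hi] if t in context_terms)
        weight += 1.0 + bonus * nearby
    return weight

doc = "the quick brown fox jumps over the lazy dog".split()
# "quick" falls inside the window around "fox"; "dog" does not.
print(window_weight(doc, "fox", {"quick", "dog"}, window=3))  # 1.5
```

A context-independent count of "fox" would give 1.0; the nearby context term lifts the weight, which is the contrast with TF-IDF that the abstract describes.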
  13. Bodoff, D.; Enache, D.; Kambil, A.; Simon, G.; Yukhimets, A.: ¬A unified maximum likelihood approach to document retrieval (2001) 0.00
  14. Harman, D.; Fox, E.; Baeza-Yates, R.; Lee, W.: Inverted files (1992) 0.00
  15. Harman, D.: Relevance feedback and other query modification techniques (1992) 0.00
  16. Harman, D.: Ranking algorithms (1992) 0.00
  17. Wilbur, W.J.: ¬A retrieval system based on automatic relevance weighting of search terms (1992) 0.00
    Source
    Proceedings of the 55th Annual Meeting of the American Society for Information Science, Pittsburgh, 26.-29.10.92. Ed.: D. Shaw
  18. Rada, R.; Barlow, J.; Potharst, J.; Zanstra, P.; Bijstra, D.: Document ranking using an enriched thesaurus (1991) 0.00
  19. Bodoff, D.; Robertson, S.: ¬A new unified probabilistic model (2004) 0.00
  20. Beitzel, S.M.; Jensen, E.C.; Chowdhury, A.; Grossman, D.; Frieder, O.; Goharian, N.: Fusion of effective retrieval strategies in the same information retrieval system (2004) 0.00

Types

  • a 48
  • el 2
  • m 2
  • s 2
  • d 1