Search (110 results, page 1 of 6)

  • Filter: theme_ss:"Retrievalalgorithmen"
  1. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.06
    0.055635765 = product of:
      0.11127153 = sum of:
        0.11127153 = sum of:
          0.025304178 = weight(_text_:d in 58) [ClassicSimilarity], result of:
            0.025304178 = score(doc=58,freq=2.0), product of:
              0.10045733 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.052875843 = queryNorm
              0.2518898 = fieldWeight in 58, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.09375 = fieldNorm(doc=58)
          0.085967354 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
            0.085967354 = score(doc=58,freq=2.0), product of:
              0.18516219 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052875843 = queryNorm
              0.46428138 = fieldWeight in 58, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=58)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:44
    Language
    d
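The explain tree in entry 1 follows Lucene's ClassicSimilarity (TF-IDF) scoring: tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and each leaf score is queryWeight × fieldWeight. A minimal sketch that reproduces the leaf and document scores above from those standard formulas (the function names are ours; queryNorm and fieldNorm are taken as given from the explain output):

```python
import math

def classic_idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                    # tf(freq) = sqrt(termFreq)
    idf = classic_idf(doc_freq, max_docs)
    query_weight = idf * query_norm         # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm    # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight      # leaf score = queryWeight * fieldWeight

# leaf for _text_:22 in doc 58, numbers from the explain tree above
s22 = term_score(freq=2.0, doc_freq=3622, max_docs=44218,
                 query_norm=0.052875843, field_norm=0.09375)
# leaf for _text_:d in the same document
sd = term_score(freq=2.0, doc_freq=17979, max_docs=44218,
                query_norm=0.052875843, field_norm=0.09375)
# document score: coord(1/2) times the sum of the matching clauses
doc_score = 0.5 * (sd + s22)
```

Both leaves and the final 0.0556 document score come out to float precision.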
  2. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.06
    0.055635765 = product of:
      0.11127153 = sum of:
        0.11127153 = sum of:
          0.025304178 = weight(_text_:d in 2051) [ClassicSimilarity], result of:
            0.025304178 = score(doc=2051,freq=2.0), product of:
              0.10045733 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.052875843 = queryNorm
              0.2518898 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.09375 = fieldNorm(doc=2051)
          0.085967354 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
            0.085967354 = score(doc=2051,freq=2.0), product of:
              0.18516219 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052875843 = queryNorm
              0.46428138 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=2051)
      0.5 = coord(1/2)
    
    Date
    14. 6.2015 22:12:56
    Language
    d
  3. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.04
    0.04058429 = product of:
      0.08116858 = sum of:
        0.08116858 = sum of:
          0.023857009 = weight(_text_:d in 1484) [ClassicSimilarity], result of:
            0.023857009 = score(doc=1484,freq=4.0), product of:
              0.10045733 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.052875843 = queryNorm
              0.237484 = fieldWeight in 1484, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.0625 = fieldNorm(doc=1484)
          0.057311572 = weight(_text_:22 in 1484) [ClassicSimilarity], result of:
            0.057311572 = score(doc=1484,freq=2.0), product of:
              0.18516219 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052875843 = queryNorm
              0.30952093 = fieldWeight in 1484, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1484)
      0.5 = coord(1/2)
    
    Date
    13. 9.2014 14:45:22
    Language
    d
  4. Soulier, L.; Jabeur, L.B.; Tamine, L.; Bahsoun, W.: On ranking relevant entities in heterogeneous networks using a language-based model (2013) 0.04
    0.0393729 = sum of:
      0.021463035 = product of:
        0.08585214 = sum of:
          0.08585214 = weight(_text_:authors in 664) [ClassicSimilarity], result of:
            0.08585214 = score(doc=664,freq=4.0), product of:
              0.24105114 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052875843 = queryNorm
              0.35615736 = fieldWeight in 664, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=664)
        0.25 = coord(1/4)
      0.017909866 = product of:
        0.03581973 = sum of:
          0.03581973 = weight(_text_:22 in 664) [ClassicSimilarity], result of:
            0.03581973 = score(doc=664,freq=2.0), product of:
              0.18516219 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052875843 = queryNorm
              0.19345059 = fieldWeight in 664, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=664)
        0.5 = coord(1/2)
    
    Abstract
    A new challenge, accessing multiple relevant entities, arises from the availability of linked heterogeneous data. In this article, we address more specifically the problem of accessing relevant entities, such as publications and authors within a bibliographic network, given an information need. We propose a novel algorithm, called BibRank, that estimates a joint relevance of documents and authors within a bibliographic network. This model ranks each type of entity using a score propagation algorithm with respect to the query topic and the structure of the underlying bi-type information entity network. Evidence sources, namely content-based and network-based scores, are both used to estimate the topical similarity between connected entities. For this purpose, authorship relationships are analyzed through a language model-based score on the one hand; on the other hand, non-topically related entities of the same type are detected through marginal citations. The article reports the results of experiments using the BibRank algorithm for an information retrieval task. The CiteSeerX bibliographic data set forms the basis for the automatic generation and evaluation of topical queries. We show that a statistically significant improvement over closely related ranking models is achieved.
    Date
    22. 3.2013 19:34:49
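The score-propagation idea in the BibRank abstract can be illustrated with a much-simplified sketch: content-based priors for documents and authors are repeatedly mixed with scores propagated across authorship edges of the bi-type network. This is a hypothetical simplification for illustration only, not the paper's exact model (it omits citation edges and the language-model scores):

```python
def bibrank_sketch(doc_prior, auth_prior, wrote, rounds=20, lam=0.5):
    # doc_prior / auth_prior: content-based relevance scores per entity;
    # wrote: author -> list of authored document ids;
    # lam mixes a node's own prior with scores propagated from its neighbors.
    by_doc = {}
    for a, ds in wrote.items():
        for doc in ds:
            by_doc.setdefault(doc, []).append(a)
    docs, auths = dict(doc_prior), dict(auth_prior)
    for _ in range(rounds):
        # author score <- own prior mixed with mean score of authored docs
        auths = {a: lam * auth_prior[a]
                    + (1 - lam) * sum(docs[doc] for doc in wrote[a]) / len(wrote[a])
                 for a in auths}
        # document score <- own prior mixed with mean score of its authors
        docs = {doc: lam * doc_prior[doc]
                     + (1 - lam) * sum(auths[a] for a in by_doc[doc]) / len(by_doc[doc])
                for doc in docs}
    return docs, auths
```

With a shared author, a document with a stronger content prior keeps a higher propagated score, while some of its relevance leaks to its co-authored neighbor.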
  5. Chen, Z.; Fu, B.: On the complexity of Rocchio's similarity-based relevance feedback algorithm (2007) 0.04
    0.036373667 = sum of:
      0.021463035 = product of:
        0.08585214 = sum of:
          0.08585214 = weight(_text_:authors in 578) [ClassicSimilarity], result of:
            0.08585214 = score(doc=578,freq=4.0), product of:
              0.24105114 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052875843 = queryNorm
              0.35615736 = fieldWeight in 578, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=578)
        0.25 = coord(1/4)
      0.014910631 = product of:
        0.029821262 = sum of:
          0.029821262 = weight(_text_:d in 578) [ClassicSimilarity], result of:
            0.029821262 = score(doc=578,freq=16.0), product of:
              0.10045733 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.052875843 = queryNorm
              0.296855 = fieldWeight in 578, product of:
                4.0 = tf(freq=16.0), with freq of:
                  16.0 = termFreq=16.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.0390625 = fieldNorm(doc=578)
        0.5 = coord(1/2)
    
    Abstract
    Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformation methods in information retrieval, is essentially an adaptive learning algorithm from examples in searching for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d**2(log d + log n)) over the discretized vector space {0, ..., n-1}**d when the inner product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier (q, 0) over {0, ..., n-1}**d can be improved to, at most, 1 + 2k(n - 1)(log d + log(n - 1)), where k is the number of nonzero components in q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound Omega((d choose 2) log n) on its learning complexity over the Boolean vector space {0,1}**d.
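For context, the algorithm whose complexity the paper analyzes reformulates a query as a weighted combination of the original query vector and the centroids of the relevant and nonrelevant feedback documents. A standard textbook sketch with conventional alpha/beta/gamma defaults (the weights are not from the paper):

```python
def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    # q' = alpha*q + beta*centroid(relevant) - gamma*centroid(nonrelevant),
    # with negative components clipped to zero (the usual convention)
    dim = len(query)
    def centroid(docs):
        if not docs:
            return [0.0] * dim
        return [sum(doc[i] for doc in docs) / len(docs) for i in range(dim)]
    rel, nonrel = centroid(relevant), centroid(nonrelevant)
    return [max(0.0, alpha * q + beta * r - gamma * s)
            for q, r, s in zip(query, rel, nonrel)]
```

A single relevant document pulls the reformulated query toward its terms while the original query terms are retained.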
  6. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.03
    0.0324542 = product of:
      0.0649084 = sum of:
        0.0649084 = sum of:
          0.01476077 = weight(_text_:d in 3276) [ClassicSimilarity], result of:
            0.01476077 = score(doc=3276,freq=2.0), product of:
              0.10045733 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.052875843 = queryNorm
              0.14693572 = fieldWeight in 3276, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
          0.050147627 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
            0.050147627 = score(doc=3276,freq=2.0), product of:
              0.18516219 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052875843 = queryNorm
              0.2708308 = fieldWeight in 3276, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
      0.5 = coord(1/2)
    
    Date
    20. 3.2005 16:23:22
    Language
    d
  7. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.03
    0.028655786 = product of:
      0.057311572 = sum of:
        0.057311572 = product of:
          0.114623144 = sum of:
            0.114623144 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.114623144 = score(doc=402,freq=2.0), product of:
                0.18516219 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052875843 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  8. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.03
    0.025073813 = product of:
      0.050147627 = sum of:
        0.050147627 = product of:
          0.10029525 = sum of:
            0.10029525 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.10029525 = score(doc=2134,freq=2.0), product of:
                0.18516219 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052875843 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    30. 3.2001 13:32:22
  9. Back, J.: An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.03
    0.025073813 = product of:
      0.050147627 = sum of:
        0.050147627 = product of:
          0.10029525 = sum of:
            0.10029525 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.10029525 = score(doc=3445,freq=2.0), product of:
                0.18516219 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052875843 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    25. 8.2005 17:42:22
  10. Information retrieval : data structures and algorithms (1992) 0.02
    0.024307515 = sum of:
      0.015176657 = product of:
        0.060706627 = sum of:
          0.060706627 = weight(_text_:authors in 3495) [ClassicSimilarity], result of:
            0.060706627 = score(doc=3495,freq=2.0), product of:
              0.24105114 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052875843 = queryNorm
              0.25184128 = fieldWeight in 3495, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3495)
        0.25 = coord(1/4)
      0.00913086 = product of:
        0.01826172 = sum of:
          0.01826172 = weight(_text_:d in 3495) [ClassicSimilarity], result of:
            0.01826172 = score(doc=3495,freq=6.0), product of:
              0.10045733 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.052875843 = queryNorm
              0.18178582 = fieldWeight in 3495, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3495)
        0.5 = coord(1/2)
    
    Abstract
    The book consists of separate chapters by some 20 different authors. It covers many of the information retrieval algorithms, including methods of file organization, file search and access, and query processing
    Content
    An edited volume covering data structures and algorithms for information retrieval, including a disk with examples written in C. Aimed at programmers and students interested in parsing text and automated indexing, it is the first collection in book form of the basic data structures and algorithms that are critical to the storage and retrieval of documents. Contains the chapters: FRAKES, W.B.: Introduction to information storage and retrieval systems; BAEZA-YATES, R.S.: Introduction to data structures and algorithms related to information retrieval; HARMAN, D. et al.: Inverted files; FALOUTSOS, C.: Signature files; GONNET, G.H. et al.: New indices for text: PAT trees and PAT arrays; FORD, D.A. and S. CHRISTODOULAKIS: File organizations for optical disks; FOX, C.: Lexical analysis and stoplists; FRAKES, W.B.: Stemming algorithms; SRINIVASAN, P.: Thesaurus construction; BAEZA-YATES, R.A.: String searching algorithms; HARMAN, D.: Relevance feedback and other query modification techniques; WARTIK, S.: Boolean operators; WARTIK, S. et al.: Hashing algorithms; HARMAN, D.: Ranking algorithms; FOX, E. et al.: Extended Boolean models; RASMUSSEN, E.: Clustering algorithms; HOLLAAR, L.: Special-purpose hardware for information retrieval; STANFILL, C.: Parallel information retrieval algorithms
  11. Song, D.; Bruza, P.D.: Towards context sensitive information inference (2003) 0.02
    0.023181569 = product of:
      0.046363138 = sum of:
        0.046363138 = sum of:
          0.010543408 = weight(_text_:d in 1428) [ClassicSimilarity], result of:
            0.010543408 = score(doc=1428,freq=2.0), product of:
              0.10045733 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.052875843 = queryNorm
              0.104954086 = fieldWeight in 1428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1428)
          0.03581973 = weight(_text_:22 in 1428) [ClassicSimilarity], result of:
            0.03581973 = score(doc=1428,freq=2.0), product of:
              0.18516219 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052875843 = queryNorm
              0.19345059 = fieldWeight in 1428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1428)
      0.5 = coord(1/2)
    
    Date
    22. 3.2003 19:35:46
  12. Shiri, A.A.; Revie, C.: Query expansion behavior within a thesaurus-enhanced search environment : a user-centered evaluation (2006) 0.02
    0.023181569 = product of:
      0.046363138 = sum of:
        0.046363138 = sum of:
          0.010543408 = weight(_text_:d in 56) [ClassicSimilarity], result of:
            0.010543408 = score(doc=56,freq=2.0), product of:
              0.10045733 = queryWeight, product of:
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.052875843 = queryNorm
              0.104954086 = fieldWeight in 56, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.899872 = idf(docFreq=17979, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
          0.03581973 = weight(_text_:22 in 56) [ClassicSimilarity], result of:
            0.03581973 = score(doc=56,freq=2.0), product of:
              0.18516219 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052875843 = queryNorm
              0.19345059 = fieldWeight in 56, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=56)
      0.5 = coord(1/2)
    
    Abstract
    The study reported here investigated the query expansion behavior of end-users interacting with a thesaurus-enhanced search system on the Web. Two groups, namely academic staff and postgraduate students, were recruited into this study. Data were collected from 90 searches performed by 30 users using the OVID interface to the CAB abstracts database. Data-gathering techniques included questionnaires, screen capturing software, and interviews. The results presented here relate to issues of search-topic and search-term characteristics, number and types of expanded queries, usefulness of thesaurus terms, and behavioral differences between academic staff and postgraduate students in their interaction. The key conclusions drawn were that (a) academic staff chose more narrow and synonymous terms than did postgraduate students, who generally selected broader and related terms; (b) topic complexity affected users' interaction with the thesaurus in that complex topics required more query expansion and search term selection; (c) users' prior topic-search experience appeared to have a significant effect on their selection and evaluation of thesaurus terms; (d) in 50% of the searches where additional terms were suggested from the thesaurus, users stated that they had not been aware of the terms at the beginning of the search; this observation was particularly noticeable in the case of postgraduate students.
    Date
    22. 7.2006 16:32:43
  13. Khoo, C.S.G.; Wan, K.-W.: ¬A simple relevancy-ranking strategy for an interface to Boolean OPACs (2004) 0.02
    0.023160566 = sum of:
      0.010623659 = product of:
        0.042494636 = sum of:
          0.042494636 = weight(_text_:authors in 2509) [ClassicSimilarity], result of:
            0.042494636 = score(doc=2509,freq=2.0), product of:
              0.24105114 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052875843 = queryNorm
              0.17628889 = fieldWeight in 2509, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2509)
        0.25 = coord(1/4)
      0.012536907 = product of:
        0.025073813 = sum of:
          0.025073813 = weight(_text_:22 in 2509) [ClassicSimilarity], result of:
            0.025073813 = score(doc=2509,freq=2.0), product of:
              0.18516219 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052875843 = queryNorm
              0.1354154 = fieldWeight in 2509, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2509)
        0.5 = coord(1/2)
    
    Abstract
    A relevancy-ranking algorithm for a natural language interface to Boolean online public access catalogs (OPACs) was formulated and compared with that currently used in a knowledge-based search interface called the E-Referencer, being developed by the authors. The algorithm makes use of seven well-known ranking criteria: breadth of match, section weighting, proximity of query words, variant word forms (stemming), document frequency, term frequency and document length. The algorithm converts a natural language query into a series of increasingly broader Boolean search statements. In a small experiment with ten subjects in which the algorithm was simulated by hand, the algorithm obtained good results with a mean overall precision of 0.42 and mean average precision of 0.62, representing a 27 percent improvement in precision and 41 percent improvement in average precision compared to the E-Referencer. The usefulness of each step in the algorithm was analyzed and suggestions are made for improving the algorithm.
    Source
    Electronic library. 22(2004) no.2, S.112-120
  14. Ding, Y.; Yan, E.; Frazho, A.; Caverlee, J.: PageRank for ranking authors in co-citation networks (2009) 0.02
    0.022305038 = product of:
      0.044610076 = sum of:
        0.044610076 = product of:
          0.1784403 = sum of:
            0.1784403 = weight(_text_:authors in 3161) [ClassicSimilarity], result of:
              0.1784403 = score(doc=3161,freq=12.0), product of:
                0.24105114 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052875843 = queryNorm
                0.7402591 = fieldWeight in 3161, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3161)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    This paper studies how varied damping factors in the PageRank algorithm influence the ranking of authors and proposes weighted PageRank algorithms. We selected the 108 most highly cited authors in the information retrieval (IR) area from the 1970s to 2008 to form the author co-citation network. We calculated the ranks of these 108 authors based on PageRank with the damping factor ranging from 0.05 to 0.95. In order to test the relationship between different measures, we compared PageRank and weighted PageRank results with the citation ranking, h-index, and centrality measures. We found that in our author co-citation network, citation rank is highly correlated with PageRank with different damping factors and also with different weighted PageRank algorithms; citation rank and PageRank are not significantly correlated with centrality measures; and h-index rank does not significantly correlate with centrality measures but does significantly correlate with other measures. The key factors that have impact on the PageRank of authors in the author co-citation network are being co-cited with important authors.
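The PageRank-with-damping computation whose damping factor the abstract varies can be sketched as a plain power iteration. This is the generic algorithm, not the authors' weighted variants:

```python
def pagerank(adj, d=0.85, iters=100):
    # Power iteration with damping factor d; adj maps node -> list of outlinks.
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        new = {v: (1.0 - d) / n for v in adj}   # teleportation term
        for v, outs in adj.items():
            if outs:
                share = d * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:
                # dangling node: redistribute its mass uniformly
                for w in new:
                    new[w] += d * rank[v] / n
        rank = new
    return rank
```

On a symmetric graph such as a triangle of mutual links, every node ends up with rank 1/n regardless of d; varying d (the paper sweeps 0.05 to 0.95) changes the balance between link structure and uniform teleportation.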
  15. Ding, Y.: Topic-based PageRank on author cocitation networks (2011) 0.02
    0.015772045 = product of:
      0.03154409 = sum of:
        0.03154409 = product of:
          0.12617636 = sum of:
            0.12617636 = weight(_text_:authors in 4348) [ClassicSimilarity], result of:
              0.12617636 = score(doc=4348,freq=6.0), product of:
                0.24105114 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052875843 = queryNorm
                0.52344227 = fieldWeight in 4348, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4348)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Ranking authors is vital for identifying a researcher's impact and standing within a scientific field. There are many different ranking methods (e.g., citations, publications, h-index, PageRank, and weighted PageRank), but most of them are topic-independent. This paper proposes topic-dependent ranks based on the combination of a topic model and a weighted PageRank algorithm. The author-conference-topic (ACT) model was used to extract topic distribution of individual authors. Two ways for combining the ACT model with the PageRank algorithm are proposed: simple combination (I_PR) or using a topic distribution as a weighted vector for PageRank (PR_t). Information retrieval was chosen as the test field and representative authors for different topics at different time phases were identified. Principal component analysis (PCA) was applied to analyze the ranking difference between I_PR and PR_t.
  16. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.01
    0.014327893 = product of:
      0.028655786 = sum of:
        0.028655786 = product of:
          0.057311572 = sum of:
            0.057311572 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.057311572 = score(doc=5108,freq=2.0), product of:
                0.18516219 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052875843 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2007 18:30:22
  17. Faloutsos, C.: Signature files (1992) 0.01
    0.014327893 = product of:
      0.028655786 = sum of:
        0.028655786 = product of:
          0.057311572 = sum of:
            0.057311572 = weight(_text_:22 in 3499) [ClassicSimilarity], result of:
              0.057311572 = score(doc=3499,freq=2.0), product of:
                0.18516219 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052875843 = queryNorm
                0.30952093 = fieldWeight in 3499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3499)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    7. 5.1999 15:22:48
  18. Losada, D.E.; Barreiro, A.: Emebedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.01
    0.014327893 = product of:
      0.028655786 = sum of:
        0.028655786 = product of:
          0.057311572 = sum of:
            0.057311572 = weight(_text_:22 in 1422) [ClassicSimilarity], result of:
              0.057311572 = score(doc=1422,freq=2.0), product of:
                0.18516219 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052875843 = queryNorm
                0.30952093 = fieldWeight in 1422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1422)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2003 19:27:23
  19. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.01
    0.014327893 = product of:
      0.028655786 = sum of:
        0.028655786 = product of:
          0.057311572 = sum of:
            0.057311572 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.057311572 = score(doc=1431,freq=2.0), product of:
                0.18516219 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052875843 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2014 17:05:18
  20. Wei, F.; Li, W.; Liu, S.: iRANK: a rank-learn-combine framework for unsupervised ensemble ranking (2010) 0.01
    0.013143372 = product of:
      0.026286744 = sum of:
        0.026286744 = product of:
          0.105146974 = sum of:
            0.105146974 = weight(_text_:authors in 3472) [ClassicSimilarity], result of:
              0.105146974 = score(doc=3472,freq=6.0), product of:
                0.24105114 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.052875843 = queryNorm
                0.43620193 = fieldWeight in 3472, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3472)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    The authors address the problem of unsupervised ensemble ranking. Traditional approaches either combine multiple ranking criteria into a unified representation to obtain an overall ranking score or utilize certain rank fusion or aggregation techniques to combine the ranking results. Beyond the aforementioned combine-then-rank and rank-then-combine approaches, the authors propose a novel rank-learn-combine ranking framework, called Interactive Ranking (iRANK), which allows two base rankers to teach each other before combination during the ranking process by providing their own ranking results as feedback to the others to boost the ranking performance. This mutual ranking refinement process continues until the two base rankers cannot learn from each other any more. The overall performance is improved by the enhancement of the base rankers through the mutual learning mechanism. The authors further design two ranking refinement strategies to efficiently and effectively use the feedback based on reasonable assumptions and rational analysis. Although iRANK is applicable to many applications, as a case study the authors apply this framework to the sentence ranking problem in query-focused summarization and evaluate its effectiveness on the DUC 2005 and 2006 data sets. The results are encouraging with consistent and promising improvements.
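The rank-then-combine baseline that iRANK is contrasted against can be as simple as Borda-count rank fusion. A minimal, hypothetical sketch of that baseline (not the paper's rank-learn-combine method):

```python
def borda_fuse(rankings):
    # rank-then-combine: each base ranker awards (n - pos) points to the
    # item at position pos; items are re-ranked by total points
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)
```

iRANK goes beyond this by letting the base rankers refine each other before such a combination step.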

Languages

  • e 69
  • d 40
  • pt 1

Types

  • a 91
  • x 7
  • m 6
  • el 3
  • r 2
  • s 2
  • d 1