Search (65 results, page 1 of 4)

  • theme_ss:"Retrievalalgorithmen"
  1. Agosti, M.; Pretto, L.: ¬A theoretical study of a generalized version of kleinberg's HITS algorithm (2005) 0.03
    0.027318506 = product of:
      0.06829626 = sum of:
        0.059353806 = weight(_text_:relation in 4) [ClassicSimilarity], result of:
          0.059353806 = score(doc=4,freq=2.0), product of:
            0.20534351 = queryWeight, product of:
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.03924537 = queryNorm
            0.2890464 = fieldWeight in 4, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4)
        0.008942453 = product of:
          0.02682736 = sum of:
            0.02682736 = weight(_text_:29 in 4) [ClassicSimilarity], result of:
              0.02682736 = score(doc=4,freq=2.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.19432661 = fieldWeight in 4, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
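    The score breakdown above is Lucene's ClassicSimilarity explain output: each matching term contributes tf * idf * queryNorm * fieldNorm, and coord factors scale the result by the fraction of query clauses that matched. The short Python sketch below (constants copied from the tree above; illustrative only, not Lucene itself) reproduces the arithmetic for this first result.

      # Minimal sketch reproducing the ClassicSimilarity arithmetic shown above
      # (plain Python; constants copied from the explain tree, so small rounding
      # differences against the listed 0.027318506 are expected).
      from math import sqrt

      def term_weight(freq, idf, query_norm, field_norm):
          """queryWeight * fieldWeight for one term; tf(freq) = sqrt(freq)."""
          query_weight = idf * query_norm                # e.g. 5.232299 * 0.03924537
          field_weight = sqrt(freq) * idf * field_norm   # e.g. 1.4142135 * 5.232299 * 0.0390625
          return query_weight * field_weight

      w_relation = term_weight(2.0, 5.232299, 0.03924537, 0.0390625)   # ~0.0593538
      w_29       = term_weight(2.0, 3.5176873, 0.03924537, 0.0390625)  # ~0.0268274

      inner = w_29 * (1 / 3)                  # coord(1/3) on the nested clause
      score = (w_relation + inner) * (2 / 5)  # coord(2/5) on the top-level query
      print(score)                            # ~0.0273185, matching the listed score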
    
    Abstract
    Kleinberg's HITS (Hyperlink-Induced Topic Search) algorithm (Kleinberg 1999), which was originally developed in a Web context, tries to infer the authoritativeness of a Web page in relation to a specific query using the structure of a subgraph of the Web graph, which is obtained by considering this specific query. Recent applications of this algorithm in contexts far removed from that of Web searching (Bacchin, Ferro and Melucci 2002, Ng et al. 2001) inspired us to study the algorithm in the abstract, independently of its particular applications, trying to mathematically illuminate its behaviour. In the present paper we detail this theoretical analysis. The original work starts from the definition of a revised and more general version of the algorithm, which includes the classic one as a particular case. We perform an analysis of the structure of two particular matrices, essential to studying the behaviour of the algorithm, and we prove the convergence of the algorithm in the most general case, finding the analytic expression of the vectors to which it converges. Then we study the symmetry of the algorithm and prove the equivalence between the existence of symmetry and the independence from the order of execution of some basic operations on initial vectors. Finally, we expound some interesting consequences of our theoretical results.
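    For orientation, the classic algorithm that the paper generalises can be sketched very compactly. The following Python sketch (hypothetical toy adjacency list; not the authors' generalised version) shows the standard HITS iteration the abstract refers to: authority and hub vectors are repeatedly updated from the query subgraph's link structure and normalised until they converge.

      # Minimal sketch of the classic HITS iteration (illustrative only; the paper
      # studies a generalised variant). links: page -> list of pages it points to.
      def hits(links, iters=50):
          pages = set(links) | {q for outs in links.values() for q in outs}
          auth = {p: 1.0 for p in pages}
          hub = {p: 1.0 for p in pages}
          for _ in range(iters):
              # authority of p: sum of hub scores of pages linking to p
              auth = {p: sum(hub[q] for q in links if p in links[q]) for p in pages}
              # hub of p: sum of authority scores of pages p links to
              hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
              for vec in (auth, hub):          # L2-normalise so the iteration converges
                  norm = sum(v * v for v in vec.values()) ** 0.5 or 1.0
                  for p in vec:
                      vec[p] /= norm
          return auth, hub

      auth, hub = hits({"a": ["b", "c"], "b": ["c"], "c": ["a"]})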
    Date
    31.12.1996 19:29:41
  2. Silva, R.M.; Gonçalves, M.A.; Veloso, A.: ¬A Two-stage active learning method for learning to rank (2014) 0.03
    0.027318506 = product of:
      0.06829626 = sum of:
        0.059353806 = weight(_text_:relation in 1184) [ClassicSimilarity], result of:
          0.059353806 = score(doc=1184,freq=2.0), product of:
            0.20534351 = queryWeight, product of:
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.03924537 = queryNorm
            0.2890464 = fieldWeight in 1184, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1184)
        0.008942453 = product of:
          0.02682736 = sum of:
            0.02682736 = weight(_text_:29 in 1184) [ClassicSimilarity], result of:
              0.02682736 = score(doc=1184,freq=2.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.19432661 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Learning to rank (L2R) algorithms use a labeled training set to generate a ranking model that can later be used to rank new query results. These training sets are costly and laborious to produce, requiring human annotators to assess the relevance or order of the documents in relation to a query. Active learning algorithms are able to reduce the labeling effort by selectively sampling an unlabeled set and choosing data instances that maximize a learning function's effectiveness. In this article, we propose a novel two-stage active learning method for L2R that combines and exploits interesting properties of its constituent parts, thus being effective and practical. In the first stage, an association rule active sampling algorithm is used to select a very small but effective initial training set. In the second stage, a query-by-committee strategy trained with the first-stage set is used to iteratively select more examples until a preset labeling budget is met or a target effectiveness is achieved. We test our method with various LETOR benchmarking data sets and compare it with several baselines to show that it achieves good results using only a small portion of the original training sets.
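    As an illustration of the second stage, query-by-committee can be sketched in a few lines (hypothetical committee and disagreement measure; the paper's association-rule first stage, committee composition and LETOR setup are not reproduced here): the unlabeled instances on which the committee's predictions diverge most are sent to the annotator.

      import numpy as np

      # Minimal query-by-committee sketch (illustrative; disagreement is taken here
      # as the variance of the committee members' predicted scores).
      def qbc_select(committee, X_unlabeled, batch_size=5):
          """committee: fitted models exposing predict(); returns indices to label next."""
          preds = np.stack([m.predict(X_unlabeled) for m in committee])  # (n_models, n_docs)
          disagreement = preds.var(axis=0)       # high variance = committee disagrees
          return np.argsort(-disagreement)[:batch_size]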
    Date
    26. 1.2014 20:29:57
  3. Baloh, P.; Desouza, K.C.; Hackney, R.: Contextualizing organizational interventions of knowledge management systems : a design science perspective (2012) 0.03
    0.027286327 = product of:
      0.06821582 = sum of:
        0.059353806 = weight(_text_:relation in 241) [ClassicSimilarity], result of:
          0.059353806 = score(doc=241,freq=2.0), product of:
            0.20534351 = queryWeight, product of:
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.03924537 = queryNorm
            0.2890464 = fieldWeight in 241, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.0390625 = fieldNorm(doc=241)
        0.008862011 = product of:
          0.026586032 = sum of:
            0.026586032 = weight(_text_:22 in 241) [ClassicSimilarity], result of:
              0.026586032 = score(doc=241,freq=2.0), product of:
                0.13743061 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03924537 = queryNorm
                0.19345059 = fieldWeight in 241, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=241)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    We address how individuals' (workers) knowledge needs influence the design of knowledge management systems (KMS), enabling knowledge creation and utilization. It is evident that KMS technologies and activities are indiscriminately deployed in most organizations with little regard to the actual context of their adoption. Moreover, it is apparent that the extant literature pertaining to knowledge management projects is frequently deficient in identifying the variety of factors indicative for successful KMS. This presents an obvious business practice and research gap that requires a critical analysis of the necessary intervention that will actually improve how workers can leverage and form organization-wide knowledge. This research involved an extensive review of the literature, a grounded theory methodological approach and rigorous data collection and synthesis through an empirical case analysis (Parsons Brinckerhoff and Samsung). The contribution of this study is the formulation of a model for designing KMS based upon the design science paradigm, which aspires to create artifacts that are interdependent of people and organizations. The essential proposition is that KMS design and implementation must be contextualized in relation to knowledge needs and that these will differ for various organizational settings. The findings present valuable insights and further understanding of the way in which KMS design efforts should be focused.
    Date
    11. 6.2012 14:22:34
  4. Rada, R.; Barlow, J.; Potharst, J.; Zanstra, P.; Bijstra, D.: Document ranking using an enriched thesaurus (1991) 0.02
    0.024672914 = product of:
      0.12336457 = sum of:
        0.12336457 = weight(_text_:relation in 6626) [ClassicSimilarity], result of:
          0.12336457 = score(doc=6626,freq=6.0), product of:
            0.20534351 = queryWeight, product of:
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.03924537 = queryNorm
            0.60077167 = fieldWeight in 6626, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.046875 = fieldNorm(doc=6626)
      0.2 = coord(1/5)
    
    Abstract
    A thesaurus may be viewed as a graph, and document retrieval algorithms can exploit this graph when both the documents and the query are represented by thesaurus terms. These retrieval algorithms measure the distance between the query and documents by using the path lengths in the graph. Previous work with such strategies has shown that the hierarchical relations in the thesaurus are useful but the non-hierarchical ones are not. This paper shows that when the query explicitly mentions a particular non-hierarchical relation, the retrieval algorithm benefits from the presence of such relations in the thesaurus. Our algorithms were applied to the Excerpta Medica bibliographic citation database whose citations are indexed with terms from the EMTREE thesaurus. We also created an enriched EMTREE by systematically adding non-hierarchical relations from a medical knowledge base. Our algorithms used EMTREE at one time and the enriched EMTREE at another in the course of ranking documents from Excerpta Medica against queries. When, and only when, the query specifically mentioned a particular non-hierarchical relation type, did EMTREE enriched with that relation type lead to a ranking that better corresponded to an expert's ranking.
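    A minimal sketch of the path-length idea (plain Python over a hypothetical toy thesaurus; not the EMTREE experiments): documents whose index terms lie closer to the query terms in the thesaurus graph are ranked higher.

      from collections import deque

      # Distance-based ranking over a thesaurus graph (toy data, illustrative only).
      def shortest_path(graph, src, dst):
          """Unweighted BFS distance between two thesaurus terms; None if unreachable."""
          seen, queue = {src}, deque([(src, 0)])
          while queue:
              term, dist = queue.popleft()
              if term == dst:
                  return dist
              for nxt in graph.get(term, []):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append((nxt, dist + 1))
          return None

      def rank(graph, query_terms, docs):
          """docs: doc_id -> list of index terms; score = negated average term distance."""
          def score(terms):
              dists = [shortest_path(graph, q, t) for q in query_terms for t in terms]
              dists = [d for d in dists if d is not None]
              return -sum(dists) / len(dists) if dists else float("-inf")
          return sorted(docs, key=lambda d: score(docs[d]), reverse=True)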
  5. Liu, X.; Zheng, W.; Fang, H.: ¬An exploration of ranking models and feedback method for related entity finding (2013) 0.02
    0.023741523 = product of:
      0.11870761 = sum of:
        0.11870761 = weight(_text_:relation in 2714) [ClassicSimilarity], result of:
          0.11870761 = score(doc=2714,freq=8.0), product of:
            0.20534351 = queryWeight, product of:
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.03924537 = queryNorm
            0.5780928 = fieldWeight in 2714, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2714)
      0.2 = coord(1/5)
    
    Abstract
    Most existing search engines focus on document retrieval. However, information needs are certainly not limited to finding relevant documents. Instead, a user may want to find relevant entities such as persons and organizations. In this paper, we study the problem of related entity finding. Our goal is to rank entities based on their relevance to a structured query, which specifies an input entity, the type of related entities and the relation between the input and related entities. We first discuss a general probabilistic framework, derive six possible retrieval models to rank the related entities, and then compare these models both analytically and empirically. To further improve performance, we study the problem of feedback in the context of related entity finding. Specifically, we propose a mixture model based feedback method that can utilize the pseudo feedback entities to estimate an enriched model for the relation between the input and related entities. Experimental results over two standard TREC collections show that the derived relation generation model combined with a relation feedback method performs better than other models.
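    The feedback step can be illustrated with a simple linear interpolation of two unigram relation models (hypothetical term distributions; the paper's actual estimation procedure is more involved): the original relation model is mixed with one estimated from the pseudo-feedback entities.

      # Minimal mixture-model feedback sketch (illustrative distributions only).
      def mix_models(p_relation, p_feedback, lam=0.3):
          """Both inputs map term -> probability; returns the interpolated model."""
          vocab = set(p_relation) | set(p_feedback)
          return {t: (1 - lam) * p_relation.get(t, 0.0) + lam * p_feedback.get(t, 0.0)
                  for t in vocab}

      enriched = mix_models({"founded": 0.4, "by": 0.6},
                            {"founder": 0.5, "established": 0.5})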
  6. Bhansali, D.; Desai, H.; Deulkar, K.: ¬A study of different ranking approaches for semantic search (2015) 0.02
    0.016787792 = product of:
      0.083938956 = sum of:
        0.083938956 = weight(_text_:relation in 2696) [ClassicSimilarity], result of:
          0.083938956 = score(doc=2696,freq=4.0), product of:
            0.20534351 = queryWeight, product of:
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.03924537 = queryNorm
            0.40877336 = fieldWeight in 2696, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2696)
      0.2 = coord(1/5)
    
    Abstract
    Search engines have become an integral part of our day-to-day life, and our reliance on them increases with every passing day. With the amount of data available on the Internet increasing exponentially, it becomes important to develop new methods and tools that help return results relevant to the queries and reduce the time spent on searching. The results should be diverse but at the same time remain focused on the queries asked. Relation Based Page Rank [4] algorithms are considered to be the next frontier in the improvement of Semantic Web search. The probability of finding relevance in the search results, as posited by the user while entering the query, is used to measure relevance. However, their application is limited by the complexity of determining the relation between terms and assigning an explicit meaning to each term. Trust Rank is one of the most widely used ranking algorithms for semantic web search. A few other ranking algorithms, such as the HITS and PageRank algorithms, are also used for Semantic Web searching. In this paper, we provide a comparison of a few ranking approaches.
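    Of the algorithms named above, plain PageRank is the simplest to show compactly; the sketch below (plain Python, hypothetical link graph) is the standard power iteration, not the relation-based or trust-seeded variants the abstract discusses, which bias the teleportation step.

      # Standard PageRank power iteration (illustrative toy graph).
      def pagerank(links, damping=0.85, iters=50):
          pages = set(links) | {q for outs in links.values() for q in outs}
          n = len(pages)
          rank = {p: 1.0 / n for p in pages}
          for _ in range(iters):
              new = {p: (1 - damping) / n for p in pages}
              for p, outs in links.items():
                  share = rank[p] / len(outs) if outs else 0.0
                  for q in outs:
                      new[q] += damping * share
              # spread the rank held by dangling pages uniformly
              dangling = sum(rank[p] for p in pages if not links.get(p))
              for p in pages:
                  new[p] += damping * dangling / n
              rank = new
          return rank

      print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))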
  7. Bhogal, J.; Macfarlane, A.; Smith, P.: ¬A review of ontology based query expansion (2007) 0.02
    0.016619066 = product of:
      0.08309533 = sum of:
        0.08309533 = weight(_text_:relation in 919) [ClassicSimilarity], result of:
          0.08309533 = score(doc=919,freq=2.0), product of:
            0.20534351 = queryWeight, product of:
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.03924537 = queryNorm
            0.40466496 = fieldWeight in 919, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.232299 = idf(docFreq=641, maxDocs=44218)
              0.0546875 = fieldNorm(doc=919)
      0.2 = coord(1/5)
    
    Abstract
    This paper examines the meaning of context in relation to ontology based query expansion and contains a review of query expansion approaches. The various query expansion approaches include relevance feedback, corpus dependent knowledge models and corpus independent knowledge models. Case studies detailing query expansion using domain-specific and domain-independent ontologies are also included. The penultimate section attempts to synthesise the information obtained from the review and provide success factors in using an ontology for query expansion. Finally the area of further research in applying context from an ontology to query expansion within a newswire domain is described.
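    The basic expansion step that these approaches share can be sketched briefly (hypothetical toy ontology; real systems weight, filter and disambiguate the added terms): each query term is expanded with synonyms and, optionally, narrower terms drawn from the ontology before the query is submitted.

      # Minimal ontology-based query expansion sketch (toy ontology, illustrative only).
      ONTOLOGY = {
          "vehicle": {"synonyms": ["conveyance"], "narrower": ["car", "truck"]},
          "car": {"synonyms": ["automobile"], "narrower": ["hatchback"]},
      }

      def expand(query_terms, ontology, include_narrower=True):
          expanded = list(query_terms)
          for term in query_terms:
              entry = ontology.get(term, {})
              expanded += entry.get("synonyms", [])
              if include_narrower:
                  expanded += entry.get("narrower", [])
          return list(dict.fromkeys(expanded))   # de-duplicate, keep order

      print(expand(["vehicle"], ONTOLOGY))   # ['vehicle', 'conveyance', 'car', 'truck']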
  8. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.005671687 = product of:
      0.028358433 = sum of:
        0.028358433 = product of:
          0.0850753 = sum of:
            0.0850753 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.0850753 = score(doc=402,freq=2.0), product of:
                0.13743061 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03924537 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  9. Archuby, C.G.: Interfaces de recuperacion para catalogos en linea con salidas ordenadas por probable relevancia (2000) 0.01
    0.005058616 = product of:
      0.025293078 = sum of:
        0.025293078 = product of:
          0.07587923 = sum of:
            0.07587923 = weight(_text_:29 in 5727) [ClassicSimilarity], result of:
              0.07587923 = score(doc=5727,freq=4.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.5496386 = fieldWeight in 5727, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5727)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 1.1996 18:23:13
    Source
    Ciencia da informacao. 29(2000) no.3, S.5-13
  10. Crestani, F.: Combination of similarity measures for effective spoken document retrieval (2003) 0.01
    0.0050077736 = product of:
      0.025038868 = sum of:
        0.025038868 = product of:
          0.075116605 = sum of:
            0.075116605 = weight(_text_:29 in 4690) [ClassicSimilarity], result of:
              0.075116605 = score(doc=4690,freq=2.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.5441145 = fieldWeight in 4690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4690)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Journal of information science. 29(2003) no.2, S.87-96
  11. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.00
    0.004962726 = product of:
      0.02481363 = sum of:
        0.02481363 = product of:
          0.07444089 = sum of:
            0.07444089 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.07444089 = score(doc=2134,freq=2.0), product of:
                0.13743061 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03924537 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    30. 3.2001 13:32:22
  12. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.00
    0.004962726 = product of:
      0.02481363 = sum of:
        0.02481363 = product of:
          0.07444089 = sum of:
            0.07444089 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.07444089 = score(doc=3445,freq=2.0), product of:
                0.13743061 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03924537 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    25. 8.2005 17:42:22
  13. Okada, M.; Ando, K.; Lee, S.S.; Hayashi, Y.; Aoe, J.I.: ¬An efficient substring search method by using delayed keyword extraction (2001) 0.00
    0.0042923777 = product of:
      0.021461887 = sum of:
        0.021461887 = product of:
          0.06438566 = sum of:
            0.06438566 = weight(_text_:29 in 6415) [ClassicSimilarity], result of:
              0.06438566 = score(doc=6415,freq=2.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.46638384 = fieldWeight in 6415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6415)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 3.2002 17:24:03
  14. Cole, C.: Intelligent information retrieval: diagnosing information need : Part II: uncertainty expansion in a prototype of a diagnostic IR tool (1998) 0.00
    0.0042923777 = product of:
      0.021461887 = sum of:
        0.021461887 = product of:
          0.06438566 = sum of:
            0.06438566 = weight(_text_:29 in 6432) [ClassicSimilarity], result of:
              0.06438566 = score(doc=6432,freq=2.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.46638384 = fieldWeight in 6432, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6432)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    11. 8.2001 14:48:29
  15. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.00
    0.004253765 = product of:
      0.021268826 = sum of:
        0.021268826 = product of:
          0.063806474 = sum of:
            0.063806474 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.063806474 = score(doc=58,freq=2.0), product of:
                0.13743061 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03924537 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    14. 6.2015 22:12:44
  16. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.00
    0.004253765 = product of:
      0.021268826 = sum of:
        0.021268826 = product of:
          0.063806474 = sum of:
            0.063806474 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.063806474 = score(doc=2051,freq=2.0), product of:
                0.13743061 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03924537 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    14. 6.2015 22:12:56
  17. Zhang, W.; Korf, R.E.: Performance of linear-space search algorithms (1995) 0.00
    0.0035769814 = product of:
      0.017884906 = sum of:
        0.017884906 = product of:
          0.05365472 = sum of:
            0.05365472 = weight(_text_:29 in 4744) [ClassicSimilarity], result of:
              0.05365472 = score(doc=4744,freq=2.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.38865322 = fieldWeight in 4744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4744)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    2. 8.1996 10:29:15
  18. Hüther, H.: Selix im DFG-Projekt Kascade (1998) 0.00
    0.0035769814 = product of:
      0.017884906 = sum of:
        0.017884906 = product of:
          0.05365472 = sum of:
            0.05365472 = weight(_text_:29 in 5151) [ClassicSimilarity], result of:
              0.05365472 = score(doc=5151,freq=2.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.38865322 = fieldWeight in 5151, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5151)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    25. 8.2000 19:55:29
  19. Uratani, N.; Takeda, M.: ¬A fast string-searching algorithm for multiple patterns (1993) 0.00
    0.0028615852 = product of:
      0.0143079255 = sum of:
        0.0143079255 = product of:
          0.042923775 = sum of:
            0.042923775 = weight(_text_:29 in 6275) [ClassicSimilarity], result of:
              0.042923775 = score(doc=6275,freq=2.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.31092256 = fieldWeight in 6275, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6275)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Source
    Information processing and management. 29(1993) no.6, S.775-791
  20. Chakrabarti, S.; Dom, B.; Kumar, S.R.; Raghavan, P.; Rajagopalan, S.; Tomkins, A.; Kleinberg, J.M.; Gibson, D.: Neue Pfade durch den Internet-Dschungel : Die zweite Generation von Web-Suchmaschinen (1999) 0.00
    0.0028615852 = product of:
      0.0143079255 = sum of:
        0.0143079255 = product of:
          0.042923775 = sum of:
            0.042923775 = weight(_text_:29 in 3) [ClassicSimilarity], result of:
              0.042923775 = score(doc=3,freq=2.0), product of:
                0.13805294 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03924537 = queryNorm
                0.31092256 = fieldWeight in 3, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    31.12.1996 19:29:41

Languages

  • e 53
  • d 11
  • pt 1

Types

  • a 62
  • el 1
  • m 1
  • r 1
  • x 1