Search (47 results, page 2 of 3)

  • theme_ss:"Retrievalalgorithmen"
  • year_i:[2000 TO 2010}
  1. Lopez-Pujalte, C.; Guerrero Bote, V.P.; Moya-Anegón, F. de: Evaluation of the application of genetic algorithms to relevance feedback (2003) 0.01
    0.008782767 = product of:
      0.043913834 = sum of:
        0.043913834 = weight(_text_:2003 in 2756) [ClassicSimilarity], result of:
          0.043913834 = score(doc=2756,freq=3.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.29363465 = fieldWeight in 2756, product of:
              1.7320508 = tf(freq=3.0), with freq of:
                3.0 = termFreq=3.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2756)
      0.2 = coord(1/5)
    
    Year
    2003
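    The breakdown above is Lucene's "explain" output for its ClassicSimilarity (TF-IDF) scoring. Read bottom-up: fieldWeight = tf × idf × fieldNorm, queryWeight = idf × queryNorm, the term weight is the product of the two, and coord(1/5) scales the result because only one of five query clauses matched. A minimal Python sketch (variable names are mine, not part of the explain output) reproduces the 0.01 score of this entry from the numbers shown:

        import math

        # Values copied from the explain tree for doc 2756 (entry 1)
        freq = 3.0                    # termFreq of "2003" in the matched field
        idf = 4.339969                # idf(docFreq=1566, maxDocs=44218)
        query_norm = 0.034459375      # queryNorm
        field_norm = 0.0390625        # fieldNorm(doc=2756)
        coord = 1 / 5                 # 1 of 5 query clauses matched

        tf = math.sqrt(freq)                  # 1.7320508 = tf(freq=3.0)
        field_weight = tf * idf * field_norm  # ~0.29363465 = fieldWeight
        query_weight = idf * query_norm       # ~0.14955263 = queryWeight
        weight = query_weight * field_weight  # ~0.043913834 = weight(_text_:2003 ...)
        print(coord * weight)                 # ~0.008782767, displayed rounded as 0.01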
  2. Bar-Ilan, J.; Levene, M.; Mat-Hassan, M.: Methods for evaluating dynamic changes in search engine rankings : a case study (2006) 0.01
    0.0057368795 = product of:
      0.028684396 = sum of:
        0.028684396 = weight(_text_:2003 in 616) [ClassicSimilarity], result of:
          0.028684396 = score(doc=616,freq=2.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.19180135 = fieldWeight in 616, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.03125 = fieldNorm(doc=616)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The objective of this paper is to characterize the changes in the rankings of the top ten results of major search engines over time and to compare the rankings between these engines. Design/methodology/approach - The paper compares rankings of the top-ten results of the search engines Google and AlltheWeb on ten identical queries over a period of three weeks. Only the top-ten results were considered, since users do not normally inspect more than the first results page returned by a search engine. The experiment was repeated twice, in October 2003 and in January 2004, in order to assess changes to the top-ten results of some of the queries during the three-month interval. In order to assess the changes in the rankings, three measures were computed for each data collection point and each search engine. Findings - The findings in this paper show that the rankings of AlltheWeb were highly stable over each period, while the rankings of Google underwent constant yet minor changes, with occasional major ones. Changes over time can be explained by the dynamic nature of the web or by fluctuations in the search engines' indexes. The top-ten results of the two search engines had surprisingly low overlap. With such small overlap, the task of comparing the rankings of the two engines becomes extremely challenging. Originality/value - The paper shows that because of the abundance of information on the web, ranking search results is of extreme importance. The paper compares several measures for computing the similarity between rankings of search tools, and shows that none of the measures is fully satisfactory as a standalone measure. It also demonstrates the apparent differences in the ranking algorithms of two widely used search engines.
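    As a rough illustration of the kind of rank-similarity measures the abstract refers to (my own stand-ins, not the exact measures used by Bar-Ilan et al.), the following Python sketch computes the top-ten overlap and an average rank displacement over the shared URLs for two result lists:

        def overlap(a, b):
            # Fraction of URLs that appear in both top-k lists
            return len(set(a) & set(b)) / max(len(a), len(b))

        def avg_rank_shift(a, b):
            # Mean absolute rank difference over URLs present in both lists
            common = set(a) & set(b)
            if not common:
                return None  # undefined when the lists share nothing
            return sum(abs(a.index(u) - b.index(u)) for u in common) / len(common)

        # Hypothetical top-ten lists for one query on two engines
        engine_1 = ["u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8", "u9", "u10"]
        engine_2 = ["u2", "u1", "u11", "u12", "u4", "u13", "u14", "u15", "u16", "u17"]
        print(overlap(engine_1, engine_2))         # 0.3 - low overlap, as the study reports
        print(avg_rank_shift(engine_1, engine_2))  # 1.0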
  3. Archuby, C.G.: Interfaces de recuperacion para catalogos en linea con salidas ordenadas por probable relevancia (2000) 0.00
    0.0044417144 = product of:
      0.022208571 = sum of:
        0.022208571 = product of:
          0.066625714 = sum of:
            0.066625714 = weight(_text_:29 in 5727) [ClassicSimilarity], result of:
              0.066625714 = score(doc=5727,freq=4.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.5496386 = fieldWeight in 5727, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5727)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 1.1996 18:23:13
    Source
    Ciencia da informacao. 29(2000) no.3, S.5-13
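    Entries 3 and later differ from entry 1 only in having one more nesting level: the matching term sits inside a nested Boolean clause, so two coordination factors apply, coord(1/3) for the inner query and coord(1/5) for the outer one. A small sketch of that chain using the numbers above (names are mine):

        term_weight = 0.066625714      # weight(_text_:29 in 5727), computed as in the sketch under entry 1
        inner = term_weight * (1 / 3)  # 0.022208571 = coord(1/3) of the nested query
        outer = inner * (1 / 5)        # 0.0044417144 = coord(1/5) of the outer query
        print(outer)                   # displayed rounded as 0.00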
  4. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.00
    0.004357518 = product of:
      0.021787591 = sum of:
        0.021787591 = product of:
          0.065362774 = sum of:
            0.065362774 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.065362774 = score(doc=3445,freq=2.0), product of:
                0.12067086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034459375 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    25. 8.2005 17:42:22
  5. Okada, M.; Ando, K.; Lee, S.S.; Hayashi, Y.; Aoe, J.I.: ¬An efficient substring search method by using delayed keyword extraction (2001) 0.00
    0.0037689195 = product of:
      0.018844597 = sum of:
        0.018844597 = product of:
          0.05653379 = sum of:
            0.05653379 = weight(_text_:29 in 6415) [ClassicSimilarity], result of:
              0.05653379 = score(doc=6415,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.46638384 = fieldWeight in 6415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6415)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 3.2002 17:24:03
  6. Mandl, T.: Tolerantes Information Retrieval : Neuronale Netze zur Erhöhung der Adaptivität und Flexibilität bei der Informationssuche (2001) 0.00
    0.0028684398 = product of:
      0.014342198 = sum of:
        0.014342198 = weight(_text_:2003 in 5965) [ClassicSimilarity], result of:
          0.014342198 = score(doc=5965,freq=2.0), product of:
            0.14955263 = queryWeight, product of:
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.034459375 = queryNorm
            0.09590068 = fieldWeight in 5965, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.339969 = idf(docFreq=1566, maxDocs=44218)
              0.015625 = fieldNorm(doc=5965)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: nfd - Information 54(2003) H.6, S.379-380 (U. Thiel): "Was G. Salton, when developing the vector space model, aware of the cybernetically oriented experiments with associative memory structures? The topic of the present book reminded me of this and similar conjectures that I discussed a few years ago with Reginald Ferber and other colleagues. At any rate, it can be noted that the vector representation is an ingeniously simple depiction both of the "inverted files" used as the fundamental data structure in information retrieval (IR) and of the associative memory matrices that evolved over time, via perceptrons, into neural networks (NN). This formal connection subsequently stimulated a series of approaches to using networks in retrieval, where hybrid approaches that combine methods from both disciplines have proven, as in the present volume, to be very well suited. But first things first... The author submitted the book as a dissertation to Department IV "Sprachen und Technik" of the University of Hildesheim; it grew out of a series of research contributions to several projects in which the author was involved at various sites between 1995 and 2000. This explains the unusual breadth of applications, scenarios and domains in which the results were obtained. Thus the COSIMIR model (COgnitive SIMilarity learning in Information Retrieval) developed in the thesis is evaluated not only on the classic Cranfield collection but is also used in the WING project at the University of Regensburg for fact retrieval from a materials database. Further experiments with the component called the "transformation network", whose task is to map weighting functions between two term spaces, round out the spectrum of experiments. But it is not only the presented results that are varied; the "state-of-the-art" overview offered to the reader also summarizes, with highly informative breadth, the essentials of the fields of IR and NN and highlights the intersections of the two areas. Alongside the foundations of text and fact retrieval, the approaches to improving adaptivity and to handling heterogeneity are presented, while as foundations of neural networks, besides a general introduction to the basic concepts, the backpropagation model, Kohonen networks and the Adaptive Resonance Theory (ART), among others, are described. A further chapter presents the existing NN-oriented approaches in IR and completes the outline of the relevant research landscape. In preparation for the presentation of the COSIMIR model, the author inserts at this point a discursive chapter on heterogeneity in IR, in which the goals and basic assumptions of the work are reflected upon once more. The dimensions of heterogeneity named are the object type, the quality of the objects and of their indexing, and multilinguality. Even if this systematization mainly emphasizes problems from the projects touched on here, rather than aiming at a comprehensive treatment of, for example, the literature on the problem of relevance, it is nevertheless helpful for understanding the design decisions in the conception of the developed prototypes, which are often only implicitly addressed in the subsequent chapters. The approach of handling heterogeneity by means of transformations is made concrete in the specific context of NN, while other possibilities, for example employing instruments of logic and probability theory, are only briefly discussed. A more extensive analysis would probably also have stretched the scope of the work too far,
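    The review describes a "transformation network" whose task is to map weighting functions between two term spaces. As a loose illustration of that idea only (a toy linear mapping trained by gradient descent on hypothetical data, not Mandl's actual architecture), one might sketch it like this:

        import numpy as np

        # Hypothetical paired data: term-weight vectors for the same documents
        # in two different term spaces A and B.
        rng = np.random.default_rng(0)
        docs, dim_a, dim_b = 200, 50, 40
        X = rng.random((docs, dim_a))           # weights in term space A
        hidden_map = rng.random((dim_a, dim_b))
        Y = X @ hidden_map                      # corresponding weights in term space B

        # Single linear layer trained to transform A-weights into B-weights.
        W = np.zeros((dim_a, dim_b))
        lr = 0.1
        for _ in range(5000):
            grad = X.T @ (X @ W - Y) / docs     # gradient of the mean squared error
            W -= lr * grad

        print(np.mean((X @ W - Y) ** 2))        # close to zero once the mapping is learned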
  7. Thompson, P.: Looking back: on relevance, probabilistic indexing and information retrieval (2008) 0.00
    0.002512613 = product of:
      0.012563065 = sum of:
        0.012563065 = product of:
          0.037689194 = sum of:
            0.037689194 = weight(_text_:29 in 2074) [ClassicSimilarity], result of:
              0.037689194 = score(doc=2074,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.31092256 = fieldWeight in 2074, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2074)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    31. 7.2008 19:58:29
  8. Vechtomova, O.; Karamuftuoglu, M.: Lexical cohesion and term proximity in document ranking (2008) 0.00
    0.002512613 = product of:
      0.012563065 = sum of:
        0.012563065 = product of:
          0.037689194 = sum of:
            0.037689194 = weight(_text_:29 in 2101) [ClassicSimilarity], result of:
              0.037689194 = score(doc=2101,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.31092256 = fieldWeight in 2101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2101)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    1. 8.2008 12:29:05
  9. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.00
    0.0024900108 = product of:
      0.012450053 = sum of:
        0.012450053 = product of:
          0.03735016 = sum of:
            0.03735016 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.03735016 = score(doc=5108,freq=2.0), product of:
                0.12067086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034459375 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    20. 1.2007 18:30:22
  10. Otterbacher, J.; Erkan, G.; Radev, D.R.: Biased LexRank : passage retrieval using random walks with question-based priors (2009) 0.00
    0.0021985364 = product of:
      0.010992682 = sum of:
        0.010992682 = product of:
          0.032978043 = sum of:
            0.032978043 = weight(_text_:29 in 2450) [ClassicSimilarity], result of:
              0.032978043 = score(doc=2450,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.27205724 = fieldWeight in 2450, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2450)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    22.11.2008 17:11:29
  11. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.00
    0.002178759 = product of:
      0.010893796 = sum of:
        0.010893796 = product of:
          0.032681387 = sum of:
            0.032681387 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
              0.032681387 = score(doc=3276,freq=2.0), product of:
                0.12067086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034459375 = queryNorm
                0.2708308 = fieldWeight in 3276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3276)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    20. 3.2005 16:23:22
  12. Cannane, A.; Williams, H.E.: General-purpose compression for efficient retrieval (2001) 0.00
    0.0018844598 = product of:
      0.0094222985 = sum of:
        0.0094222985 = product of:
          0.028266896 = sum of:
            0.028266896 = weight(_text_:29 in 5705) [ClassicSimilarity], result of:
              0.028266896 = score(doc=5705,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23319192 = fieldWeight in 5705, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5705)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 9.2001 13:59:55
  13. Kaszkiel, M.; Zobel, J.: Effective ranking with arbitrary passages (2001) 0.00
    0.0018844598 = product of:
      0.0094222985 = sum of:
        0.0094222985 = product of:
          0.028266896 = sum of:
            0.028266896 = weight(_text_:29 in 5764) [ClassicSimilarity], result of:
              0.028266896 = score(doc=5764,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23319192 = fieldWeight in 5764, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5764)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 9.2001 14:00:39
  14. Bodoff, D.; Enache, D.; Kambil, A.; Simon, G.; Yukhimets, A.: ¬A unified maximum likelihood approach to document retrieval (2001) 0.00
    0.0018844598 = product of:
      0.0094222985 = sum of:
        0.0094222985 = product of:
          0.028266896 = sum of:
            0.028266896 = weight(_text_:29 in 174) [ClassicSimilarity], result of:
              0.028266896 = score(doc=174,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23319192 = fieldWeight in 174, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=174)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    29. 9.2001 17:52:51
  15. Drucker, H.; Shahrary, B.; Gibbon, D.C.: Support vector machines : relevance feedback and information retrieval (2002) 0.00
    0.0018844598 = product of:
      0.0094222985 = sum of:
        0.0094222985 = product of:
          0.028266896 = sum of:
            0.028266896 = weight(_text_:29 in 2581) [ClassicSimilarity], result of:
              0.028266896 = score(doc=2581,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23319192 = fieldWeight in 2581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2581)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    15. 8.2004 18:55:29
  16. Käki, M.: fKWIC: frequency-based Keyword-in-Context Index for filtering Web search results (2006) 0.00
    0.0018844598 = product of:
      0.0094222985 = sum of:
        0.0094222985 = product of:
          0.028266896 = sum of:
            0.028266896 = weight(_text_:29 in 6112) [ClassicSimilarity], result of:
              0.028266896 = score(doc=6112,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23319192 = fieldWeight in 6112, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6112)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Abstract
    Enormous Web search engine databases combined with short search queries result in large result sets that are often difficult to access. Result ranking works fairly well, but users need help when it fails. For these situations, we propose a filtering interface that is inspired by keyword-in-context (KWIC) indices. The user interface lists the most frequent keyword contexts (fKWIC). When a context is selected, the corresponding results are displayed in the result list, allowing users to concentrate on the specific context. We compared the keyword context index user interface to the rank order result listing in an experiment with 36 participants. The results show that the proposed user interface was 29% faster in finding relevant results, and the precision of the selected results was 19% higher. In addition, participants showed positive attitudes toward the system.
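    As a rough sketch of the fKWIC idea described above (my own simplification, not the authors' implementation): take a fixed window of words around the query term in each result snippet, count how often each context occurs, and let the user filter the result list by the most frequent contexts.

        from collections import defaultdict

        def keyword_contexts(snippets, keyword, window=2):
            # Group result indices by the words immediately surrounding the keyword
            contexts = defaultdict(list)
            for i, snippet in enumerate(snippets):
                words = snippet.lower().split()
                for pos, word in enumerate(words):
                    if word == keyword:
                        ctx = " ".join(words[max(0, pos - window):pos + window + 1])
                        contexts[ctx].append(i)
            # Most frequent contexts first, as in the fKWIC interface
            return sorted(contexts.items(), key=lambda kv: len(kv[1]), reverse=True)

        # Hypothetical result snippets for the query "jaguar"
        results = [
            "the jaguar is a large cat of the americas",
            "the jaguar is a spotted predator of south america",
            "buy a used jaguar car online",
        ]
        for ctx, doc_ids in keyword_contexts(results, "jaguar"):
            print(len(doc_ids), repr(ctx), doc_ids)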
  17. Kekäläinen, J.: Binary and graded relevance in IR evaluations : comparison of the effects on ranking of IR systems (2005) 0.00
    0.0018844598 = product of:
      0.0094222985 = sum of:
        0.0094222985 = product of:
          0.028266896 = sum of:
            0.028266896 = weight(_text_:29 in 1036) [ClassicSimilarity], result of:
              0.028266896 = score(doc=1036,freq=2.0), product of:
                0.1212173 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23319192 = fieldWeight in 1036, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1036)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    26.12.2007 20:29:18
  18. Fan, W.; Fox, E.A.; Pathak, P.; Wu, H.: ¬The effects of fitness functions on genetic programming-based ranking discovery for Web search (2004) 0.00
    0.0018675078 = product of:
      0.009337539 = sum of:
        0.009337539 = product of:
          0.028012617 = sum of:
            0.028012617 = weight(_text_:22 in 2239) [ClassicSimilarity], result of:
              0.028012617 = score(doc=2239,freq=2.0), product of:
                0.12067086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23214069 = fieldWeight in 2239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2239)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    31. 5.2004 19:22:06
  19. Witschel, H.F.: Global term weights in distributed environments (2008) 0.00
    0.0018675078 = product of:
      0.009337539 = sum of:
        0.009337539 = product of:
          0.028012617 = sum of:
            0.028012617 = weight(_text_:22 in 2096) [ClassicSimilarity], result of:
              0.028012617 = score(doc=2096,freq=2.0), product of:
                0.12067086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23214069 = fieldWeight in 2096, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2096)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    1. 8.2008 9:44:22
  20. Klas, C.-P.; Fuhr, N.; Schaefer, A.: Evaluating strategic support for information access in the DAFFODIL system (2004) 0.00
    0.0018675078 = product of:
      0.009337539 = sum of:
        0.009337539 = product of:
          0.028012617 = sum of:
            0.028012617 = weight(_text_:22 in 2419) [ClassicSimilarity], result of:
              0.028012617 = score(doc=2419,freq=2.0), product of:
                0.12067086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.034459375 = queryNorm
                0.23214069 = fieldWeight in 2419, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2419)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    16.11.2008 16:22:48

Languages

  • e 40
  • d 6
  • pt 1

Types

  • a 44
  • m 2
  • x 1