Search (65 results, page 1 of 4)

  • theme_ss:"Retrievalalgorithmen"
  1. Kang, I.-H.; Kim, G.C.: Integration of multiple evidences based on a query type for web search (2004) 0.02
    0.022169173 = product of:
      0.13301504 = sum of:
        0.13301504 = weight(_text_:homepage in 2568) [ClassicSimilarity], result of:
          0.13301504 = score(doc=2568,freq=4.0), product of:
            0.25096318 = queryWeight, product of:
              6.784232 = idf(docFreq=135, maxDocs=44218)
              0.03699213 = queryNorm
            0.53001815 = fieldWeight in 2568, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.784232 = idf(docFreq=135, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2568)
      0.16666667 = coord(1/6)
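The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. Its arithmetic can be reproduced directly from the printed factors (all values below are taken from the tree; only the variable names are ours):

```python
import math

# Factors from the explain tree for doc 2568, term "homepage"
freq = 4.0
idf = 6.784232           # idf(docFreq=135, maxDocs=44218)
query_norm = 0.03699213  # queryNorm
field_norm = 0.0390625   # fieldNorm(doc=2568)
coord = 1.0 / 6.0        # 1 of 6 query clauses matched

tf = math.sqrt(freq)                  # 2.0
query_weight = idf * query_norm       # 0.25096318
field_weight = tf * idf * field_norm  # 0.53001815
score = coord * query_weight * field_weight
print(score)  # ≈ 0.022169173, matching the entry's score above
```

The same recipe reproduces every score tree in this result list; only the term statistics and norms change per entry.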
    
    Abstract
    The massive and heterogeneous Web exacerbates IR problems, and short user queries make them worse. The contents of web pages alone are not enough to find answer pages; PageRank compensates for the insufficiency of content information, and the two are combined to get better results. However, a static combination of multiple evidences may lower retrieval performance, so different strategies must be used to meet the needs of the user. We classify user queries into three categories according to the user's intent: the topic relevance task, the homepage finding task, and the service finding task. In this paper, we present a user query classification method. The difference of distribution, mutual information, the usage rate as anchor text, and POS information are used for the classification. After classifying a user query, we apply different algorithms and information to get better results. For the topic relevance task we emphasize the content information; for the homepage finding task, on the other hand, we emphasize the link information and the URL information. We obtained the best performance when our proposed classification method was used with the OKAPI scoring algorithm.
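The combination strategy the abstract describes can be sketched as a per-query-type weighting of the evidence sources. This is a minimal illustration only: the `QueryType` names follow the abstract, but the weight values are invented for the sketch and are not the paper's parameters.

```python
from enum import Enum

class QueryType(Enum):
    TOPIC_RELEVANCE = "topic relevance"
    HOMEPAGE_FINDING = "homepage finding"
    SERVICE_FINDING = "service finding"

# Illustrative weights only -- the paper's actual parameters are not given here.
WEIGHTS = {
    QueryType.TOPIC_RELEVANCE:  {"content": 0.8, "link": 0.1, "url": 0.1},
    QueryType.HOMEPAGE_FINDING: {"content": 0.2, "link": 0.4, "url": 0.4},
    QueryType.SERVICE_FINDING:  {"content": 0.4, "link": 0.3, "url": 0.3},
}

def combined_score(qtype: QueryType, content: float, link: float, url: float) -> float:
    """Weight each evidence source according to the classified query type."""
    w = WEIGHTS[qtype]
    return w["content"] * content + w["link"] * link + w["url"] * url

# A homepage-finding query leans on link and URL evidence, not page content:
print(round(combined_score(QueryType.HOMEPAGE_FINDING, 0.3, 0.9, 0.8), 2))  # 0.74
```

The point of the abstract's classification step is exactly this switch: a static weighting (one row of the table for all queries) underperforms a query-type-dependent one.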
  2. Wills, R.S.: Google's PageRank : the math behind the search engine (2006) 0.01
    0.012540778 = product of:
      0.075244665 = sum of:
        0.075244665 = weight(_text_:homepage in 5954) [ClassicSimilarity], result of:
          0.075244665 = score(doc=5954,freq=2.0), product of:
            0.25096318 = queryWeight, product of:
              6.784232 = idf(docFreq=135, maxDocs=44218)
              0.03699213 = queryNorm
            0.29982352 = fieldWeight in 5954, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.784232 = idf(docFreq=135, maxDocs=44218)
              0.03125 = fieldNorm(doc=5954)
      0.16666667 = coord(1/6)
    
    Abstract
    Approximately 91 million American adults use the Internet on a typical day. The number-one Internet activity is reading and writing e-mail. Search engine use is next in line and continues to increase in popularity. In fact, survey findings indicate that nearly 60 million American adults use search engines on a given day. Even though there are many Internet search engines, Google, Yahoo!, and MSN receive over 81% of all search requests. Despite claims that the quality of search provided by Yahoo! and MSN now equals that of Google, Google continues to thrive as the search engine of choice, receiving over 46% of all search requests, nearly double the volume of Yahoo! and over four times that of MSN. I use Google's search engine on a daily basis and rarely request information from other search engines. One day, I decided to visit the homepages of Google, Yahoo!, and MSN to compare the quality of search results. Coffee was on my mind that day, so I entered the simple query "coffee" in the search box at each homepage. Table 1 shows the top ten (unsponsored) results returned by each search engine. Although ordered differently, two webpages, www.peets.com and www.coffeegeek.com, appear in all three top ten lists. In addition, each pairing of top ten lists has two additional results in common. Depending on the information I hoped to obtain about coffee by using the search engines, I could argue that any one of the three returned better results; however, I was not looking for a particular webpage, so all three listings of search results seemed of equal quality. Thus, I plan to continue using Google. My decision is indicative of the problem Yahoo!, MSN, and other search engine companies face in the quest to obtain a larger percentage of Internet search volume. Search engine users are loyal to one or a few search engines and are generally happy with search results.
Thus, as long as Google continues to provide results deemed high in quality, Google likely will remain the top search engine. But what set Google apart from its competitors in the first place? The answer is PageRank. In this article I explain this simple mathematical algorithm that revolutionized Web search.
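The algorithm the article goes on to explain can be sketched as a power iteration over the link graph. This is a minimal sketch: the damping factor 0.85 is the value commonly reported for PageRank, and the three-page graph is invented for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration: each page's rank flows to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outlinks in links.items():
            if outlinks:
                share = damping * rank[p] / len(outlinks)
                for q in outlinks:
                    new[q] += share
            else:  # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Toy graph: A and C link to B, B links back to A.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["B"]})
# B, with two in-links, accumulates the most rank; C, with none, the least.
```

The iteration converges because each pass shrinks the distance to the stationary distribution by roughly the damping factor, which is the "simple mathematical algorithm" the article builds up to.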
  3. Henzinger, M.R.: Link analysis in Web information retrieval (2000) 0.01
    0.012540778 = product of:
      0.075244665 = sum of:
        0.075244665 = weight(_text_:homepage in 801) [ClassicSimilarity], result of:
          0.075244665 = score(doc=801,freq=2.0), product of:
            0.25096318 = queryWeight, product of:
              6.784232 = idf(docFreq=135, maxDocs=44218)
              0.03699213 = queryNorm
            0.29982352 = fieldWeight in 801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.784232 = idf(docFreq=135, maxDocs=44218)
              0.03125 = fieldNorm(doc=801)
      0.16666667 = coord(1/6)
    
    Content
    The goal of information retrieval is to find all documents relevant to a user query in a collection of documents. Decades of research in information retrieval were successful in developing and refining techniques that are solely word-based (see e.g., [2]). With the advent of the web, new sources of information became available, one of them being the hyperlinks between documents and records of user behavior. To be precise, hypertexts (i.e., collections of documents connected by hyperlinks) have existed and have been studied for a long time. What was new was the large number of hyperlinks created by independent individuals. Hyperlinks provide a valuable source of information for web information retrieval, as we will show in this article. This area of information retrieval is commonly called link analysis. Why would one expect hyperlinks to be useful? A hyperlink is a reference to a web page B that is contained in a web page A. When the hyperlink is clicked on in a web browser, the browser displays page B. This functionality alone is not helpful for web information retrieval. However, the way hyperlinks are typically used by authors of web pages can give them valuable information content. Typically, authors create links because they think they will be useful for the readers of the pages. Thus, links are usually either navigational aids that, for example, bring the reader back to the homepage of the site, or links that point to pages whose content augments the content of the current page. The second kind of link tends to point to high-quality pages that might be on the same topic as the page containing the link.
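The simplest form of the link analysis this passage motivates is to count in-links as endorsements. A toy sketch (the graph is invented; real systems weight links, e.g. by PageRank, rather than counting them raw):

```python
from collections import Counter

def inlink_counts(links):
    """Count how many distinct pages link to each page -- the crudest endorsement signal."""
    counts = Counter()
    for source, targets in links.items():
        counts.update(set(targets))  # ignore duplicate links from one source page
    return counts

links = {"A": ["B", "C"], "B": ["C"], "D": ["C", "B"]}
print(inlink_counts(links).most_common(1))  # [('C', 3)]
```

Deduplicating per source page is a first, crude guard against a single author inflating a target's count, which is why the passage's distinction between navigational and content-bearing links matters for anything more refined.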
  4. Effektive Information Retrieval Verfahren in Theorie und Praxis : ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005 (2006) 0.01
    0.0065558683 = product of:
      0.03933521 = sum of:
        0.03933521 = weight(_text_:gestaltung in 5973) [ClassicSimilarity], result of:
          0.03933521 = score(doc=5973,freq=4.0), product of:
            0.21578456 = queryWeight, product of:
              5.8332562 = idf(docFreq=351, maxDocs=44218)
              0.03699213 = queryNorm
            0.18228926 = fieldWeight in 5973, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.8332562 = idf(docFreq=351, maxDocs=44218)
              0.015625 = fieldNorm(doc=5973)
      0.16666667 = coord(1/6)
    
    Footnote
    In the first chapter, "Retrieval Systems", various information retrieval systems are presented and methods for their design are discussed. Jan-Hendrik Scheufen introduces the meta-framework RECOIN for information retrieval research, which is distinguished by its flexible handling of a wide variety of applications and thereby enables centralized logging and control of retrieval processes. This concept of an open, component-based system was realized as a plug-in for the Java-based open-source platform Eclipse. Markus Nick and Klaus-Dieter Althoff explain in their contribution, incidentally the only English-language text in the book, the DILLEBIS method for the maintenance of experience-based information systems. They call this approach a Maintainable Experience-based Information System and argue for aligning experience-based systems with this model. Gesine Quint and Steffen Weichert, by contrast, present the user-centered development of the product retrieval system EIKON, realized in cooperation with Blaupunkt GmbH. In an iterative design cycle, group-specific interaction options for a car multimedia accessory system were designed. In the second chapter, several authors engage more specifically with the application area "digital library". Claus-Peter Klas, Sascha Kriewel, Andre Schaefer and Gudrun Fischer of the University of Duisburg-Essen present the DAFFODIL system, which supports literature searches in digital libraries with a multitude of tools for strategic assistance. In addition, the logging of all events allows the system to be used as an evaluation platform. 
Matthias Meiert's article explains the implementation of electronic publication processes at universities, using as an example the final theses of the International Information Management degree program at the University of Hildesheim. Besides the framework conditions, both the current state and the target state of scholarly electronic publishing are presented in the form of group-specific recommendations. Daniel Harbig and Rene Schneider describe in their article two approaches to the machine learning of ontologies, applied to the virtual library shelf MyShelf. After evaluating these two approaches, the authors argue for a semi-automated procedure for creating ontologies.
  5. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.00
    0.0044550425 = product of:
      0.026730254 = sum of:
        0.026730254 = product of:
          0.08019076 = sum of:
            0.08019076 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.08019076 = score(doc=402,freq=2.0), product of:
                0.12954013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03699213 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  6. Archuby, C.G.: Interfaces se recuperacion para catalogos en linea con salidas ordenadas por probable relevancia (2000) 0.00
    0.0039734826 = product of:
      0.023840895 = sum of:
        0.023840895 = product of:
          0.07152268 = sum of:
            0.07152268 = weight(_text_:29 in 5727) [ClassicSimilarity], result of:
              0.07152268 = score(doc=5727,freq=4.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.5496386 = fieldWeight in 5727, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5727)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 1.1996 18:23:13
    Source
    Ciencia da informacao. 29(2000) no.3, S.5-13
  7. Crestani, F.: Combination of similarity measures for effective spoken document retrieval (2003) 0.00
    0.003933547 = product of:
      0.023601282 = sum of:
        0.023601282 = product of:
          0.07080384 = sum of:
            0.07080384 = weight(_text_:29 in 4690) [ClassicSimilarity], result of:
              0.07080384 = score(doc=4690,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.5441145 = fieldWeight in 4690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4690)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Source
    Journal of information science. 29(2003) no.2, S.87-96
  8. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.00
    0.0038981622 = product of:
      0.023388973 = sum of:
        0.023388973 = product of:
          0.070166916 = sum of:
            0.070166916 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.070166916 = score(doc=2134,freq=2.0), product of:
                0.12954013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03699213 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    30. 3.2001 13:32:22
  9. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.00
    0.0038981622 = product of:
      0.023388973 = sum of:
        0.023388973 = product of:
          0.070166916 = sum of:
            0.070166916 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.070166916 = score(doc=3445,freq=2.0), product of:
                0.12954013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03699213 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    25. 8.2005 17:42:22
  10. Okada, M.; Ando, K.; Lee, S.S.; Hayashi, Y.; Aoe, J.I.: ¬An efficient substring search method by using delayed keyword extraction (2001) 0.00
    0.003371612 = product of:
      0.020229671 = sum of:
        0.020229671 = product of:
          0.06068901 = sum of:
            0.06068901 = weight(_text_:29 in 6415) [ClassicSimilarity], result of:
              0.06068901 = score(doc=6415,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.46638384 = fieldWeight in 6415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6415)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 3.2002 17:24:03
  11. Cole, C.: Intelligent information retrieval: diagnosing information need : Part II: uncertainty expansion in a prototype of a diagnostic IR tool (1998) 0.00
    0.003371612 = product of:
      0.020229671 = sum of:
        0.020229671 = product of:
          0.06068901 = sum of:
            0.06068901 = weight(_text_:29 in 6432) [ClassicSimilarity], result of:
              0.06068901 = score(doc=6432,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.46638384 = fieldWeight in 6432, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6432)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    11. 8.2001 14:48:29
  12. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.00
    0.0033412818 = product of:
      0.02004769 = sum of:
        0.02004769 = product of:
          0.060143072 = sum of:
            0.060143072 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.060143072 = score(doc=58,freq=2.0), product of:
                0.12954013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03699213 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    14. 6.2015 22:12:44
  13. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.00
    0.0033412818 = product of:
      0.02004769 = sum of:
        0.02004769 = product of:
          0.060143072 = sum of:
            0.060143072 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.060143072 = score(doc=2051,freq=2.0), product of:
                0.12954013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03699213 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    14. 6.2015 22:12:56
  14. Zhang, W.; Korf, R.E.: Performance of linear-space search algorithms (1995) 0.00
    0.0028096768 = product of:
      0.01685806 = sum of:
        0.01685806 = product of:
          0.05057418 = sum of:
            0.05057418 = weight(_text_:29 in 4744) [ClassicSimilarity], result of:
              0.05057418 = score(doc=4744,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.38865322 = fieldWeight in 4744, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4744)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    2. 8.1996 10:29:15
  15. Hüther, H.: Selix im DFG-Projekt Kascade (1998) 0.00
    0.0028096768 = product of:
      0.01685806 = sum of:
        0.01685806 = product of:
          0.05057418 = sum of:
            0.05057418 = weight(_text_:29 in 5151) [ClassicSimilarity], result of:
              0.05057418 = score(doc=5151,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.38865322 = fieldWeight in 5151, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5151)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    25. 8.2000 19:55:29
  16. Uratani, N.; Takeda, M.: ¬A fast string-searching algorithm for multiple patterns (1993) 0.00
    0.0022477414 = product of:
      0.013486448 = sum of:
        0.013486448 = product of:
          0.040459342 = sum of:
            0.040459342 = weight(_text_:29 in 6275) [ClassicSimilarity], result of:
              0.040459342 = score(doc=6275,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.31092256 = fieldWeight in 6275, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6275)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Source
    Information processing and management. 29(1993) no.6, S.775-791
  17. Chakrabarti, S.; Dom, B.; Kumar, S.R.; Raghavan, P.; Rajagopalan, S.; Tomkins, A.; Kleinberg, J.M.; Gibson, D.: Neue Pfade durch den Internet-Dschungel : Die zweite Generation von Web-Suchmaschinen (1999) 0.00
    0.0022477414 = product of:
      0.013486448 = sum of:
        0.013486448 = product of:
          0.040459342 = sum of:
            0.040459342 = weight(_text_:29 in 3) [ClassicSimilarity], result of:
              0.040459342 = score(doc=3,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.31092256 = fieldWeight in 3, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    31.12.1996 19:29:41
  18. Thompson, P.: Looking back: on relevance, probabilistic indexing and information retrieval (2008) 0.00
    0.0022477414 = product of:
      0.013486448 = sum of:
        0.013486448 = product of:
          0.040459342 = sum of:
            0.040459342 = weight(_text_:29 in 2074) [ClassicSimilarity], result of:
              0.040459342 = score(doc=2074,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.31092256 = fieldWeight in 2074, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2074)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    31. 7.2008 19:58:29
  19. Vechtomova, O.; Karamuftuoglu, M.: Lexical cohesion and term proximity in document ranking (2008) 0.00
    0.0022477414 = product of:
      0.013486448 = sum of:
        0.013486448 = product of:
          0.040459342 = sum of:
            0.040459342 = weight(_text_:29 in 2101) [ClassicSimilarity], result of:
              0.040459342 = score(doc=2101,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.31092256 = fieldWeight in 2101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2101)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    1. 8.2008 12:29:05
  20. Maylein, L.; Langenstein, A.: Neues vom Relevanz-Ranking im HEIDI-Katalog der Universitätsbibliothek Heidelberg : Perspektiven für bibliothekarische Dienstleistungen (2013) 0.00
    0.0022477414 = product of:
      0.013486448 = sum of:
        0.013486448 = product of:
          0.040459342 = sum of:
            0.040459342 = weight(_text_:29 in 775) [ClassicSimilarity], result of:
              0.040459342 = score(doc=775,freq=2.0), product of:
                0.13012674 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03699213 = queryNorm
                0.31092256 = fieldWeight in 775, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=775)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 6.2013 18:06:23

Languages

  • e 52
  • d 11
  • m 1
  • pt 1

Types

  • a 61
  • m 2
  • el 1
  • r 1
  • s 1
  • x 1