Search (37 results, page 1 of 2)

  • theme_ss:"Retrievalalgorithmen"
  1. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.01
    0.009997973 = product of:
      0.06998581 = sum of:
        0.05469865 = weight(_text_:media in 1484) [ClassicSimilarity], result of:
          0.05469865 = score(doc=1484,freq=2.0), product of:
            0.13212246 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.02820796 = queryNorm
            0.41399965 = fieldWeight in 1484, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.0625 = fieldNorm(doc=1484)
        0.015287156 = product of:
          0.030574312 = sum of:
            0.030574312 = weight(_text_:22 in 1484) [ClassicSimilarity], result of:
              0.030574312 = score(doc=1484,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.30952093 = fieldWeight in 1484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1484)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Date
    13. 9.2014 14:45:22
    Source
    http://www.searchmetrics.com/media/documents/knowledge-base/searchmetrics-ranking-faktoren-studie-2014.pdf
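    The explain tree above is standard Lucene ClassicSimilarity output. As a sanity check, the score for result 1 can be recomputed from its printed components (a sketch; the tf and idf formulas are Lucene's documented ClassicSimilarity definitions, and queryNorm is taken from the tree rather than derived):

    ```python
    import math

    # Lucene ClassicSimilarity, as printed in the explain tree:
    #   tf(freq)        = sqrt(freq)
    #   idf(docFreq, N) = 1 + ln(N / (docFreq + 1))
    #   queryWeight     = idf * queryNorm
    #   fieldWeight     = tf * idf * fieldNorm
    #   per-term score  = queryWeight * fieldWeight

    def idf(doc_freq, max_docs):
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        i = idf(doc_freq, max_docs)
        return (i * query_norm) * (math.sqrt(freq) * i * field_norm)

    QUERY_NORM = 0.02820796  # copied from the explain tree

    media = term_score(2.0, 1110, 44218, QUERY_NORM, 0.0625)  # _text_:media
    t22 = term_score(2.0, 3622, 44218, QUERY_NORM, 0.0625)    # _text_:22
    # "22" sits in a nested clause carrying coord(1/2) = 0.5; the top
    # level multiplies by coord(2/14): 2 of 14 query clauses matched.
    total = (media + 0.5 * t22) * (2.0 / 14.0)
    ```

    Evaluating this reproduces the 0.05469865, 0.030574312, and 0.009997973 figures in the tree to within floating-point rounding.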
  2. Joss, M.W.; Wszola, S.: ¬The engines that can : text search and retrieval software, their strategies, and vendors (1996) 0.01
    0.009926006 = product of:
      0.06948204 = sum of:
        0.05801668 = weight(_text_:media in 5123) [ClassicSimilarity], result of:
          0.05801668 = score(doc=5123,freq=4.0), product of:
            0.13212246 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.02820796 = queryNorm
            0.43911293 = fieldWeight in 5123, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046875 = fieldNorm(doc=5123)
        0.011465367 = product of:
          0.022930734 = sum of:
            0.022930734 = weight(_text_:22 in 5123) [ClassicSimilarity], result of:
              0.022930734 = score(doc=5123,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.23214069 = fieldWeight in 5123, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5123)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Abstract
    Traces the development of text searching and retrieval software designed to cope with the increasing demands made by the storage and handling of large amounts of data, recorded on high-capacity storage media, from CD-ROM to multi-gigabyte storage media and online information services, with particular reference to the need to cope with graphics as well as conventional ASCII text. Includes details of: Boolean searching, fuzzy searching and matching; relevance ranking; proximity searching and improved strategies for dealing with text searching in very large databases. Concludes that the best searching tools for CD-ROM publishers are those optimized for searching and retrieval on CD-ROM. CD-ROM drives have much slower random seek times than hard discs, and so the software most appropriate to the medium is that which can effectively arrange the indexes and text on the CD-ROM to avoid continuous random-access searching. Lists and reviews a selection of software packages designed to achieve the sort of results required for rapid CD-ROM searching.
    Date
    12. 9.1996 13:56:22
  3. Mandl, T.: Web- und Multimedia-Dokumente : Neuere Entwicklungen bei der Evaluierung von Information Retrieval Systemen (2003) 0.00
    0.00403436 = product of:
      0.056481034 = sum of:
        0.056481034 = weight(_text_:daten in 1734) [ClassicSimilarity], result of:
          0.056481034 = score(doc=1734,freq=2.0), product of:
            0.13425784 = queryWeight, product of:
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.02820796 = queryNorm
            0.42069077 = fieldWeight in 1734, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.0625 = fieldNorm(doc=1734)
      0.071428575 = coord(1/14)
    
    Abstract
    The amount of data on the Internet continues to grow rapidly, and with it the need for high-quality information retrieval services for orientation and problem-oriented search. Deciding whether to use or procure information retrieval software requires meaningful evaluation results. This article presents recent developments in the evaluation of information retrieval systems and shows a trend towards specialization and diversification of evaluation studies, which increases the realism of their results. The focus is on the retrieval of specialized texts, web pages, and multimedia objects.
  4. Liu, X.; Turtle, H.: Real-time user interest modeling for real-time ranking (2013) 0.00
    0.002930285 = product of:
      0.04102399 = sum of:
        0.04102399 = weight(_text_:media in 1035) [ClassicSimilarity], result of:
          0.04102399 = score(doc=1035,freq=2.0), product of:
            0.13212246 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.02820796 = queryNorm
            0.31049973 = fieldWeight in 1035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046875 = fieldNorm(doc=1035)
      0.071428575 = coord(1/14)
    
    Abstract
    User interest as a very dynamic information need is often ignored in most existing information retrieval systems. In this research, we present the results of experiments designed to evaluate the performance of a real-time interest model (RIM) that attempts to identify the dynamic and changing query level interests regarding social media outputs. Unlike most existing ranking methods, our ranking approach targets calculation of the probability that user interest in the content of the document is subject to very dynamic user interest change. We describe 2 formulations of the model (real-time interest vector space and real-time interest language model) stemming from classical relevance ranking methods and develop a novel methodology for evaluating the performance of RIM using Amazon Mechanical Turk to collect (interest-based) relevance judgments on a daily basis. Our results show that the model usually, although not always, performs better than baseline results obtained from commercial web search engines. We identify factors that affect RIM performance and outline plans for future research.
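    The paper's two RIM formulations are not reproduced here, but the general idea of ranking documents against a decaying, continuously updated interest vector can be sketched as follows (a minimal illustration under assumed names; the decay constant and update rule are assumptions, not parameters from the paper):

    ```python
    import math

    def cosine(u, v):
        # cosine similarity between two sparse term-weight vectors
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def update_interest(interest, observed_terms, decay=0.8):
        # fade old interests, then reinforce terms just observed
        # (e.g. from a user's recent social-media posts or queries)
        faded = {t: w * decay for t, w in interest.items()}
        for t in observed_terms:
            faded[t] = faded.get(t, 0.0) + 1.0
        return faded

    # interest drifts as new observations arrive
    interest = update_interest({}, ["world", "cup"])
    interest = update_interest(interest, ["cup", "final"])

    docs = {"d1": {"cup": 1.0, "final": 1.0}, "d2": {"weather": 1.0}}
    ranked = sorted(docs, key=lambda d: cosine(docs[d], interest),
                    reverse=True)
    ```

    Documents matching the currently dominant interest terms rise to the top; as the interest vector decays and is re-reinforced, the same query can yield a different ranking later.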
  5. Marcus, S.: Textvergleich mit mehreren Mustern (2005) 0.00
    0.0028527232 = product of:
      0.039938122 = sum of:
        0.039938122 = weight(_text_:daten in 862) [ClassicSimilarity], result of:
          0.039938122 = score(doc=862,freq=4.0), product of:
            0.13425784 = queryWeight, product of:
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.02820796 = queryNorm
            0.2974733 = fieldWeight in 862, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.03125 = fieldNorm(doc=862)
      0.071428575 = coord(1/14)
    
    Abstract
    Pattern matching is highly relevant in many scientific fields, and because of its diverse areas of application, its implementation and use vary widely as well. The task inherent in all applications of pattern matching is to recognize particular patterns within a large amount of input data, as the German term Mustererkennung (pattern recognition) suggests. In medicine, for example, pattern matching is used to examine chromosome strands for particular chromosome sequences. In image processing it can compare whole images or inspect individual pixels that are identifiable by a pattern. A further application area is information retrieval, where stored data are searched for relevant information; here too, the relevance of the data is judged against a pattern, for example a particular keyword. A comparable procedure is used on the Internet: users who search for relevant information via a search engine obtain it through a pattern-matching automaton. The requirements placed on such an automaton vary with the query submitted to the search engine. In the simplest case a query consists of exactly one keyword; in the more complex case it contains several keywords, and a successful search then requires a concatenation of the words contained in the query. Chapter 2 of this thesis gives a comprehensive introduction to text comparison and defines some fundamental terms. Chapter 3 then presents methods for comparing text against several patterns, first explaining a simple approach to ease entry into the topic, and then presenting a more complex method and illustrating it with examples.
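    The multi-keyword case discussed in the abstract can be illustrated with a deliberately naive multi-pattern scan (a sketch only; the thesis covers far more efficient automaton-based methods, which make a single pass over the text):

    ```python
    def find_patterns(text, patterns):
        """Naive multi-pattern matching: report (position, pattern) for
        every occurrence of any pattern in the text. Runs in
        O(len(text) * total pattern length); a matching automaton such
        as Aho-Corasick reduces this to a single pass over the text."""
        hits = []
        for i in range(len(text)):
            for p in patterns:
                if p and text.startswith(p, i):
                    hits.append((i, p))
        return sorted(hits)

    hits = find_patterns("abracadabra", ["abra", "cad"])
    # "abra" occurs at positions 0 and 7, "cad" at position 4
    ```

    The search-engine scenario in the abstract corresponds to running such a matcher with all query keywords as patterns at once, rather than scanning the text once per keyword.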
  6. Lanvent, A.: Know-how - Suchverfahren : Intelligente Suchmaschinen erzielen mit assoziativen und linguistischen Verfahren beste Ergebnisse. (2004) 0.00
    0.0025214748 = product of:
      0.035300646 = sum of:
        0.035300646 = weight(_text_:daten in 2988) [ClassicSimilarity], result of:
          0.035300646 = score(doc=2988,freq=2.0), product of:
            0.13425784 = queryWeight, product of:
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.02820796 = queryNorm
            0.26293173 = fieldWeight in 2988, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2988)
      0.071428575 = coord(1/14)
    
    Footnote
    Contribution in: Licht im Daten Chaos
  7. Lanvent, A.: Praxis - Windows-Suche und Indexdienst : Auch Windows kann bei der Suche den Turbo einlegen: mit dem Indexdienst (2004) 0.00
    0.0025214748 = product of:
      0.035300646 = sum of:
        0.035300646 = weight(_text_:daten in 3316) [ClassicSimilarity], result of:
          0.035300646 = score(doc=3316,freq=2.0), product of:
            0.13425784 = queryWeight, product of:
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.02820796 = queryNorm
            0.26293173 = fieldWeight in 3316, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3316)
      0.071428575 = coord(1/14)
    
    Footnote
    Contribution in: Licht im Daten Chaos
  8. Hoenkamp, E.; Bruza, P.: How everyday language can and will boost effective information retrieval (2015) 0.00
    0.0024419043 = product of:
      0.034186658 = sum of:
        0.034186658 = weight(_text_:media in 2123) [ClassicSimilarity], result of:
          0.034186658 = score(doc=2123,freq=2.0), product of:
            0.13212246 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.02820796 = queryNorm
            0.25874978 = fieldWeight in 2123, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2123)
      0.071428575 = coord(1/14)
    
    Abstract
    Typing 2 or 3 keywords into a browser has become an easy and efficient way to find information. Yet, typing even short queries becomes tedious on ever shrinking (virtual) keyboards. Meanwhile, speech processing is maturing rapidly, facilitating everyday language input. Also, wearable technology can inform users proactively by listening in on their conversations or processing their social media interactions. Given these developments, everyday language may soon become the new input of choice. We present an information retrieval (IR) algorithm specifically designed to accept everyday language. It integrates two paradigms of information retrieval, previously studied in isolation; one directed mainly at the surface structure of language, the other primarily at the underlying meaning. The integration was achieved by a Markov machine that encodes meaning by its transition graph, and surface structure by the language it generates. A rigorous evaluation of the approach showed, first, that it can compete with the quality of existing language models, second, that it is more effective the more verbose the input, and third, as a consequence, that it is promising for an imminent transition from keyword input, where the onus is on the user to formulate concise queries, to a modality where users can express more freely, more informal, and more natural their need for information in everyday language.
  9. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.00
    0.0021838795 = product of:
      0.030574312 = sum of:
        0.030574312 = product of:
          0.061148625 = sum of:
            0.061148625 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.061148625 = score(doc=402,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  10. Lanvent, A.: Licht im Daten Chaos (2004) 0.00
    0.00201718 = product of:
      0.028240517 = sum of:
        0.028240517 = weight(_text_:daten in 2806) [ClassicSimilarity], result of:
          0.028240517 = score(doc=2806,freq=2.0), product of:
            0.13425784 = queryWeight, product of:
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.02820796 = queryNorm
            0.21034539 = fieldWeight in 2806, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.03125 = fieldNorm(doc=2806)
      0.071428575 = coord(1/14)
    
  11. Mayr, P.: Re-Ranking auf Basis von Bradfordizing für die verteilte Suche in Digitalen Bibliotheken (2009) 0.00
    0.00201718 = product of:
      0.028240517 = sum of:
        0.028240517 = weight(_text_:daten in 4302) [ClassicSimilarity], result of:
          0.028240517 = score(doc=4302,freq=2.0), product of:
            0.13425784 = queryWeight, product of:
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.02820796 = queryNorm
            0.21034539 = fieldWeight in 4302, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.759573 = idf(docFreq=1029, maxDocs=44218)
              0.03125 = fieldNorm(doc=4302)
      0.071428575 = coord(1/14)
    
    Abstract
    Despite the large document sets involved in cross-database literature searches, academic users expect as high a share of relevant, high-quality documents as possible in their result sets. Besides direct full-text access, the order and structure of the listed results (ranking) now plays a decisive role in the design of search systems. Users also expect flexible information systems that, among other things, let them influence the ranking of documents or choose alternative ranking methods. This thesis presents two value-added services for search systems that address the typical problems of searching for scholarly literature and can measurably improve the search situation. The two services, semantic heterogeneity treatment (using cross-concordances as an example) and re-ranking based on Bradfordizing, come into play at different stages of the search; both are described in detail and evaluated in the empirical part of the thesis with respect to their effectiveness for typical subject-specific searches. The primary goal of the dissertation is to investigate whether the alternative re-ranking method Bradfordizing presented here is operable for bibliographic databases and can likely be deployed profitably in information systems and offered to users. Questions and data from two evaluation projects (CLEF and KoMoHe) were used for the tests. The intellectually assessed documents come from seven scholarly databases covering the social sciences, political science, economics, psychology, and medicine.
    The evaluation of the cross-concordances (82 questions in total) shows that retrieval results improve significantly for all cross-concordances, and that interdisciplinary cross-concordances have the strongest (positive) effect on the search results. The evaluation of re-ranking by Bradfordizing (164 questions in total) shows that, for most test series, documents from the core zone (core journals) yield significantly higher precision than documents from zone 2 and zone 3 (peripheral journals). For journals as well as monographs, this relevance advantage of Bradfordizing is demonstrated empirically across a very broad range of topics and questions on two independent document corpora.
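    The core idea of Bradfordizing is simple to sketch: documents are re-ordered by how many hits their source journal contributes to the result set, so core-journal documents rise to the top (a minimal illustration, not Mayr's exact implementation; the data model and tie-breaking are assumptions):

    ```python
    from collections import Counter

    def bradfordize(results):
        """Re-rank a hit list by Bradfordizing (sketch): count hits per
        source journal, then order documents by that count, descending.
        sorted() is stable, so ties keep their original order."""
        freq = Counter(doc["journal"] for doc in results)
        return sorted(results, key=lambda d: -freq[d["journal"]])

    docs = [
        {"id": 1, "journal": "B"},
        {"id": 2, "journal": "A"},
        {"id": 3, "journal": "A"},
        {"id": 4, "journal": "C"},
        {"id": 5, "journal": "A"},
    ]
    ranked = bradfordize(docs)
    # journal A contributes 3 hits, so its documents lead; B and C
    # follow in their original order
    ```

    Zone boundaries (core vs. zone 2 vs. zone 3) would then be drawn over the cumulative hit counts of the ranked journals; the sketch stops at the re-ordering step.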
  12. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.00
    0.0019108946 = product of:
      0.026752524 = sum of:
        0.026752524 = product of:
          0.05350505 = sum of:
            0.05350505 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.05350505 = score(doc=2134,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    30. 3.2001 13:32:22
  13. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.00
    0.0019108946 = product of:
      0.026752524 = sum of:
        0.026752524 = product of:
          0.05350505 = sum of:
            0.05350505 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.05350505 = score(doc=3445,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    25. 8.2005 17:42:22
  14. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.00
    0.0016379097 = product of:
      0.022930734 = sum of:
        0.022930734 = product of:
          0.045861468 = sum of:
            0.045861468 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.045861468 = score(doc=58,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    14. 6.2015 22:12:44
  15. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.00
    0.0016379097 = product of:
      0.022930734 = sum of:
        0.022930734 = product of:
          0.045861468 = sum of:
            0.045861468 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.045861468 = score(doc=2051,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    14. 6.2015 22:12:56
  16. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.00
    0.0010919397 = product of:
      0.015287156 = sum of:
        0.015287156 = product of:
          0.030574312 = sum of:
            0.030574312 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.030574312 = score(doc=5108,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    20. 1.2007 18:30:22
  17. Faloutsos, C.: Signature files (1992) 0.00
    0.0010919397 = product of:
      0.015287156 = sum of:
        0.015287156 = product of:
          0.030574312 = sum of:
            0.030574312 = weight(_text_:22 in 3499) [ClassicSimilarity], result of:
              0.030574312 = score(doc=3499,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.30952093 = fieldWeight in 3499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3499)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    7. 5.1999 15:22:48
  18. Losada, D.E.; Barreiro, A.: Embedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.00
    0.0010919397 = product of:
      0.015287156 = sum of:
        0.015287156 = product of:
          0.030574312 = sum of:
            0.030574312 = weight(_text_:22 in 1422) [ClassicSimilarity], result of:
              0.030574312 = score(doc=1422,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.30952093 = fieldWeight in 1422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1422)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    22. 3.2003 19:27:23
  19. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.00
    0.0010919397 = product of:
      0.015287156 = sum of:
        0.015287156 = product of:
          0.030574312 = sum of:
            0.030574312 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.030574312 = score(doc=1431,freq=2.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    22. 8.2014 17:05:18
  20. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.00
    9.651475E-4 = product of:
      0.013512065 = sum of:
        0.013512065 = product of:
          0.02702413 = sum of:
            0.02702413 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
              0.02702413 = score(doc=2591,freq=4.0), product of:
                0.09877947 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02820796 = queryNorm
                0.27358043 = fieldWeight in 2591, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
