Search (116 results, page 1 of 6)

  • theme_ss:"Retrievalalgorithmen"
  1. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.13
    0.12672135 = product of:
      0.2534427 = sum of:
        0.21170299 = weight(_text_:van in 2134) [ClassicSimilarity], result of:
          0.21170299 = score(doc=2134,freq=2.0), product of:
            0.24542865 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.044010527 = queryNorm
            0.86258465 = fieldWeight in 2134, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.109375 = fieldNorm(doc=2134)
        0.041739732 = product of:
          0.083479464 = sum of:
            0.083479464 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.083479464 = score(doc=2134,freq=2.0), product of:
                0.15411738 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044010527 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    30. 3.2001 13:32:22
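     A note on the scoring breakdowns: the indented trees under each entry are Lucene ClassicSimilarity (TF-IDF) explanations. As a rough cross-check, the sketch below recomputes the score of entry 1 from the figures shown; the helper names are our own, and queryNorm is copied from the output rather than derived from the index.

         import math

         # Lucene ClassicSimilarity building blocks (function names are ours).
         def idf(doc_freq, max_docs):
             return 1.0 + math.log(max_docs / (doc_freq + 1))

         def tf(freq):
             return math.sqrt(freq)

         QUERY_NORM = 0.044010527  # taken from the explanation output above

         def clause_weight(freq, doc_freq, field_norm, max_docs=44218):
             # contribution of one query term in one document field
             query_weight = idf(doc_freq, max_docs) * QUERY_NORM
             field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm
             return query_weight * field_weight

         # Entry 1 (doc 2134): terms "van" and "22", fieldNorm = 0.109375
         w_van = clause_weight(freq=2.0, doc_freq=454, field_norm=0.109375)
         w_22 = clause_weight(freq=2.0, doc_freq=3622, field_norm=0.109375) * 0.5  # coord(1/2)
         score = (w_van + w_22) * 0.5  # coord(2/4): 2 of 4 query clauses matched
         print(w_van, w_22, score)  # ~0.2117030, ~0.0417397, ~0.1267213

     The outer coord(2/4) factor halves the sum because the document matches only two of the four query clauses.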
  2. Van der Veer Martens, B.; Fleet, C. van: Opening the black box of "relevance work" : a domain analysis (2012) 0.07
    0.06779509 = product of:
      0.13559018 = sum of:
        0.1283114 = weight(_text_:van in 247) [ClassicSimilarity], result of:
          0.1283114 = score(doc=247,freq=4.0), product of:
            0.24542865 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.044010527 = queryNorm
            0.5228053 = fieldWeight in 247, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.046875 = fieldNorm(doc=247)
        0.0072787786 = product of:
          0.014557557 = sum of:
            0.014557557 = weight(_text_:der in 247) [ClassicSimilarity], result of:
              0.014557557 = score(doc=247,freq=2.0), product of:
                0.098309256 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.044010527 = queryNorm
                0.14807922 = fieldWeight in 247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.046875 = fieldNorm(doc=247)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
  3. Rijsbergen, C.J. van: A fast hierarchic clustering algorithm (1970) 0.06
    0.06048657 = product of:
      0.24194628 = sum of:
        0.24194628 = weight(_text_:van in 3300) [ClassicSimilarity], result of:
          0.24194628 = score(doc=3300,freq=2.0), product of:
            0.24542865 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.044010527 = queryNorm
            0.98581105 = fieldWeight in 3300, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.125 = fieldNorm(doc=3300)
      0.25 = coord(1/4)
    
  4. Back, J.: An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.06
    0.05523593 = product of:
      0.11047186 = sum of:
        0.06873213 = weight(_text_:j in 3445) [ClassicSimilarity], result of:
          0.06873213 = score(doc=3445,freq=2.0), product of:
            0.1398433 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.044010527 = queryNorm
            0.4914939 = fieldWeight in 3445, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.109375 = fieldNorm(doc=3445)
        0.041739732 = product of:
          0.083479464 = sum of:
            0.083479464 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.083479464 = score(doc=3445,freq=2.0), product of:
                0.15411738 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044010527 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    25. 8.2005 17:42:22
  5. Crestani, F.; Dominich, S.; Lalmas, M.; Rijsbergen, C.J.K. van: Mathematical, logical, and formal methods in information retrieval : an introduction to the special issue (2003) 0.05
    0.054309156 = product of:
      0.10861831 = sum of:
        0.090729855 = weight(_text_:van in 1451) [ClassicSimilarity], result of:
          0.090729855 = score(doc=1451,freq=2.0), product of:
            0.24542865 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.044010527 = queryNorm
            0.36967915 = fieldWeight in 1451, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.046875 = fieldNorm(doc=1451)
        0.017888457 = product of:
          0.035776913 = sum of:
            0.035776913 = weight(_text_:22 in 1451) [ClassicSimilarity], result of:
              0.035776913 = score(doc=1451,freq=2.0), product of:
                0.15411738 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044010527 = queryNorm
                0.23214069 = fieldWeight in 1451, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1451)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 3.2003 19:27:36
  6. Wiggers, G.; Verberne, S.; Loon, W. van; Zwenne, G.-J.: Bibliometric-enhanced legal information retrieval : combining usage and citations as flavors of impact relevance (2023) 0.05
    0.0500777 = product of:
      0.1001554 = sum of:
        0.024547188 = weight(_text_:j in 1022) [ClassicSimilarity], result of:
          0.024547188 = score(doc=1022,freq=2.0), product of:
            0.1398433 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.044010527 = queryNorm
            0.17553353 = fieldWeight in 1022, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1022)
        0.07560821 = weight(_text_:van in 1022) [ClassicSimilarity], result of:
          0.07560821 = score(doc=1022,freq=2.0), product of:
            0.24542865 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.044010527 = queryNorm
            0.30806595 = fieldWeight in 1022, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1022)
      0.5 = coord(2/4)
    
  7. Qi, Q.; Hessen, D.J.; Heijden, P.G.M. van der: Improving information retrieval through correspondence analysis instead of latent semantic analysis (2023) 0.05
    0.049004316 = product of:
      0.09800863 = sum of:
        0.090729855 = weight(_text_:van in 1045) [ClassicSimilarity], result of:
          0.090729855 = score(doc=1045,freq=2.0), product of:
            0.24542865 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.044010527 = queryNorm
            0.36967915 = fieldWeight in 1045, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.046875 = fieldNorm(doc=1045)
        0.0072787786 = product of:
          0.014557557 = sum of:
            0.014557557 = weight(_text_:der in 1045) [ClassicSimilarity], result of:
              0.014557557 = score(doc=1045,freq=2.0), product of:
                0.098309256 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.044010527 = queryNorm
                0.14807922 = fieldWeight in 1045, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1045)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
  8. Sembok, T.M.T.; Rijsbergen, C.J. van: IMAGING: a relevance feedback retrieval with nearest neighbour clusters (1994) 0.03
    0.030243285 = product of:
      0.12097314 = sum of:
        0.12097314 = weight(_text_:van in 1071) [ClassicSimilarity], result of:
          0.12097314 = score(doc=1071,freq=2.0), product of:
            0.24542865 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.044010527 = queryNorm
            0.49290553 = fieldWeight in 1071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0625 = fieldNorm(doc=1071)
      0.25 = coord(1/4)
    
  9. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.03
    0.028182205 = product of:
      0.11272882 = sum of:
        0.11272882 = sum of:
          0.041174993 = weight(_text_:der in 2051) [ClassicSimilarity], result of:
            0.041174993 = score(doc=2051,freq=4.0), product of:
              0.098309256 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.044010527 = queryNorm
              0.4188313 = fieldWeight in 2051, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.09375 = fieldNorm(doc=2051)
          0.07155383 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
            0.07155383 = score(doc=2051,freq=2.0), product of:
              0.15411738 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044010527 = queryNorm
              0.46428138 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=2051)
      0.25 = coord(1/4)
    
    Abstract
     The article presents a discussion of how ranking algorithms can be designed on the basis of weighted indexing using statistical methods.
    Date
    14. 6.2015 22:12:56
  10. Walz, J.: Analyse der Übertragbarkeit allgemeiner Rankingfaktoren von Web-Suchmaschinen auf Discovery-Systeme (2018) 0.03
    0.025022063 = product of:
      0.050044127 = sum of:
        0.029456628 = weight(_text_:j in 5744) [ClassicSimilarity], result of:
          0.029456628 = score(doc=5744,freq=2.0), product of:
            0.1398433 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.044010527 = queryNorm
            0.21064025 = fieldWeight in 5744, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=5744)
        0.020587496 = product of:
          0.041174993 = sum of:
            0.041174993 = weight(_text_:der in 5744) [ClassicSimilarity], result of:
              0.041174993 = score(doc=5744,freq=16.0), product of:
                0.098309256 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.044010527 = queryNorm
                0.4188313 = fieldWeight in 5744, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5744)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     Objective: The aim of this bachelor's thesis was to analyse whether the general ranking factors used by web search engines can be transferred to discovery systems; this could improve library ranking, which so far rests mainly on the textual match between query and documents. Method: Factors from the groups popularity, recency, locality, technical factors, and personalised ranking were discussed. The ranking factors were selected according to how frequently they occur in the literature analysed and the importance derived from that. Result: Of the 23 ranking factors examined, 14 (61%) can be transferred directly from web search engine ranking to discovery system ranking, among them click behaviour, creation date, user location, and language. Six (26%) of the factors examined are not transferable (e.g. update frequency and page load speed). Link topology, usage frequency, and update frequency are transferable with appropriate modifications.
  11. Furner, J.: A unifying model of document relatedness for hybrid search engines (2003) 0.02
    0.023672543 = product of:
      0.047345087 = sum of:
        0.029456628 = weight(_text_:j in 2717) [ClassicSimilarity], result of:
          0.029456628 = score(doc=2717,freq=2.0), product of:
            0.1398433 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.044010527 = queryNorm
            0.21064025 = fieldWeight in 2717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.046875 = fieldNorm(doc=2717)
        0.017888457 = product of:
          0.035776913 = sum of:
            0.035776913 = weight(_text_:22 in 2717) [ClassicSimilarity], result of:
              0.035776913 = score(doc=2717,freq=2.0), product of:
                0.15411738 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044010527 = queryNorm
                0.23214069 = fieldWeight in 2717, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2717)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    11. 9.2004 17:32:22
  12. Oberhauser, O.; Labner, J.: Relevance Ranking in Online-Katalogen : Informationsstand und Perspektiven (2003) 0.02
    0.02318772 = product of:
      0.04637544 = sum of:
        0.034366064 = weight(_text_:j in 2188) [ClassicSimilarity], result of:
          0.034366064 = score(doc=2188,freq=2.0), product of:
            0.1398433 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.044010527 = queryNorm
            0.24574696 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2188)
        0.012009373 = product of:
          0.024018746 = sum of:
            0.024018746 = weight(_text_:der in 2188) [ClassicSimilarity], result of:
              0.024018746 = score(doc=2188,freq=4.0), product of:
                0.098309256 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.044010527 = queryNorm
                0.24431825 = fieldWeight in 2188, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2188)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     As is well known, search engines such as Google & Co. rank their result lists by "relevance", i.e. documents are listed in descending order according to how well they satisfy the relevance criteria. In online catalogues (OPACs) this is not yet common practice, but the Aleph 500 system used in the Austrian library network (Österreichischer Bibliothekenverbund) does offer such a ranking option (which is also implemented in the union catalogue). So far, however, hardly any information is available on how this feature works, in particular with a view to guidance for users. With this article we therefore try to extend the state of knowledge on relevance ranking in our network. Both the use of a ranking option in OPACs in general and the specific possibilities offered under Aleph 500 are examined in more detail below.
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 56(2003) H.3/4, S.49-63
  13. An introduction to information retrieval (o.J.) 0.02
    0.022682464 = product of:
      0.090729855 = sum of:
        0.090729855 = weight(_text_:van in 4533) [ClassicSimilarity], result of:
          0.090729855 = score(doc=4533,freq=2.0), product of:
            0.24542865 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.044010527 = queryNorm
            0.36967915 = fieldWeight in 4533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.046875 = fieldNorm(doc=4533)
      0.25 = coord(1/4)
    
    Abstract
    In the beginning IR was dominated by Boolean retrieval, described in the next section. This could be called the antediluvian period, or generation zero. The first generation of IR research dates from the early sixties, and was dominated by model building, experimentation, and heuristics. The big names were Gerry Salton and Karen Sparck Jones. The second period, which began in the mid-seventies, saw a big shift towards mathematics, and a rise of the IR model based upon probability theory - probabilistic IR. The big name here was, and continues to be, Stephen Robertson. More recently Keith van Rijsbergen has led a group that has developed underlying logical models of IR, but interesting as this new work is, it has not as yet led to results that offer improvements for the IR system builder. Xapian is firmly placed as a system that implements, or tries to implement, the probabilistic IR model. (We say 'tries' because sometimes implementation efficiency and theoretical complexity demand certain short-cuts.)
  14. Ackermann, J.: Knuth-Morris-Pratt (2005) 0.02
    0.019215738 = product of:
      0.038431477 = sum of:
        0.01963775 = weight(_text_:j in 865) [ClassicSimilarity], result of:
          0.01963775 = score(doc=865,freq=2.0), product of:
            0.1398433 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.044010527 = queryNorm
            0.14042683 = fieldWeight in 865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.03125 = fieldNorm(doc=865)
        0.018793726 = product of:
          0.037587453 = sum of:
            0.037587453 = weight(_text_:der in 865) [ClassicSimilarity], result of:
              0.037587453 = score(doc=865,freq=30.0), product of:
                0.098309256 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.044010527 = queryNorm
                0.3823389 = fieldWeight in 865, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.03125 = fieldNorm(doc=865)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     As part of the seminar Suchmaschinen und Suchalgorithmen, this paper deals with finding given words or patterns in texts. The term "text" is understood here in a very general sense as a structured sequence, of arbitrary length, of characters from a finite alphabet; the task is thus, quite generally, the search for a pattern in a sequence of characters. Examples include, besides searching for words in "literary" texts, finding pixel sequences in images or patterns in DNA strands. The range of applications is wide: text editors, literature databases, digital dictionaries, or the DNA database just mentioned. Considering the Oxford English Dictionary published in 1989 alone, with its roughly 616,500 defined headwords on 21,728 printed pages, it pays to use the most efficient search algorithm available. The underlying data type in this paper is the string; how that type is realised in a program is left open. Algorithms for string processing cover a certain spectrum of applications [Ot96, S.617 f.], such as compressing, encrypting, parsing, and translating texts, as well as searching in texts, which is the topic of this seminar. This paper presents the Knuth-Morris-Pratt algorithm, which, like the Boyer-Moore algorithm also presented in this seminar, is an efficient search algorithm: a given search word or pattern is to be located in a given character string (pattern matching), and one or more occurrences of that search word are sought (exact pattern matching). The Knuth-Morris-Pratt algorithm was first described in 1974 in a technical report of Stanford University and appeared in 1977 in the journal Journal of Computing under the title "Fast Pattern Matching in Strings" [Kn77]. It searches character strings in linear time. Its name combines those of its developers, Donald E. Knuth, James H. Morris, and Vaughan R. Pratt. (A minimal generic sketch of the algorithm follows after this entry.)
    Content
     Term paper for the seminar Suchmaschinen und Suchalgorithmen, Institut für Wirtschaftsinformatik - Praktische Informatik in der Wirtschaft, Westfälische Wilhelms-Universität Münster. - Cf.: http://www-wi.uni-muenster.de/pi/lehre/ss05/seminarSuchen/Ausarbeitungen/JanAckermann.pdf
    Imprint
    Münster : Institut für Wirtschaftsinformatik der Westfälische Wilhelms-Universität Münster
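     As flagged in the abstract above, the following is a minimal, generic sketch of Knuth-Morris-Pratt exact pattern matching; it is our own illustration, not code from Ackermann's paper.

         def kmp_search(text: str, pattern: str) -> list[int]:
             # Return all start indices of pattern in text in O(len(text) + len(pattern)).
             if not pattern:
                 return []
             # failure[i]: length of the longest proper prefix of pattern[:i+1]
             # that is also a suffix of it.
             failure = [0] * len(pattern)
             k = 0
             for i in range(1, len(pattern)):
                 while k > 0 and pattern[i] != pattern[k]:
                     k = failure[k - 1]
                 if pattern[i] == pattern[k]:
                     k += 1
                 failure[i] = k
             matches, k = [], 0
             for i, ch in enumerate(text):
                 while k > 0 and ch != pattern[k]:
                     k = failure[k - 1]
                 if ch == pattern[k]:
                     k += 1
                 if k == len(pattern):  # full match ending at position i
                     matches.append(i - k + 1)
                     k = failure[k - 1]
             return matches

         print(kmp_search("abababca", "abab"))  # [0, 2]

     The failure table is what lets the search avoid re-examining characters, which is where the linear running time mentioned in the abstract comes from.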
  15. Ruthven, I.; Lalmas, M.; Rijsbergen, K. van: Incorporating user search behavior into relevance feedback (2003) 0.02
    0.018902052 = product of:
      0.07560821 = sum of:
        0.07560821 = weight(_text_:van in 5169) [ClassicSimilarity], result of:
          0.07560821 = score(doc=5169,freq=2.0), product of:
            0.24542865 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.044010527 = queryNorm
            0.30806595 = fieldWeight in 5169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5169)
      0.25 = coord(1/4)
    
    Abstract
     Ruthven, Lalmas, and van Rijsbergen rank and select terms for query expansion using information gathered on searcher evaluation behavior. Using the TREC Financial Times and Los Angeles Times collections and search topics from TREC-6 placed in simulated work situations, six student subjects each performed three searches on an experimental system and three on a control system, with instructions to search by natural language expression in any way they found comfortable. Searching was analyzed for behavior differences between experimental and control situations, and for effectiveness and perceptions. In three experiments paired t-tests were the analysis tool, with the controls being a no-relevance-feedback system, a standard ranking for automatic expansion, and a standard ranking for interactive expansion, while the experimental systems based ranking upon user information on temporal relevance and partial relevance. Two further experiments compare using user behavior (number assessed relevant and similarity of relevant documents) to choose a query expansion technique against a non-selective technique, and finally the effect of providing the user with knowledge of the process. When partial relevance data and time-of-assessment data are incorporated in term ranking, more relevant documents were recovered in fewer iterations; however, retrieval effectiveness overall was not improved. The subjects nonetheless rated the suggested terms as more useful and used them more heavily. Explanations of what the feedback techniques were doing led to higher use of the techniques.
  16. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.02
    0.018788137 = product of:
      0.075152546 = sum of:
        0.075152546 = sum of:
          0.027449993 = weight(_text_:der in 1484) [ClassicSimilarity], result of:
            0.027449993 = score(doc=1484,freq=4.0), product of:
              0.098309256 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.044010527 = queryNorm
              0.27922085 = fieldWeight in 1484, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.0625 = fieldNorm(doc=1484)
          0.047702555 = weight(_text_:22 in 1484) [ClassicSimilarity], result of:
            0.047702555 = score(doc=1484,freq=2.0), product of:
              0.15411738 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044010527 = queryNorm
              0.30952093 = fieldWeight in 1484, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1484)
      0.25 = coord(1/4)
    
    Abstract
     This whitepaper deals with the definition and evaluation of factors that show a high rank correlation coefficient with organic search results and serves the purpose of a deeper analysis of search engine algorithms. The data collection and its evaluation refer to ranking factors for Google Germany in 2014. In addition, the correlations and factors were interpreted with respect to their relevance for top result positions, among other things on the basis of mean and median values as well as trends relative to previous years. (An illustrative rank-correlation sketch follows after this entry.)
    Date
    13. 9.2014 14:45:22
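     The central quantity in the whitepaper is a rank correlation coefficient between a ranking factor and organic result positions. Purely as an illustration (the whitepaper's data and its exact correlation variant are not reproduced here), Spearman's rho for two tie-free rankings can be computed like this:

         def spearman_rho(xs, ys):
             # Spearman rank correlation for two equally long, tie-free lists.
             def ranks(values):
                 order = sorted(range(len(values)), key=lambda i: values[i])
                 r = [0] * len(values)
                 for rank, i in enumerate(order, start=1):
                     r[i] = rank
                 return r
             n = len(xs)
             rx, ry = ranks(xs), ranks(ys)
             d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
             return 1 - 6 * d2 / (n * (n ** 2 - 1))

         # Hypothetical factor values vs. result positions 1..5 (1 = top result):
         print(spearman_rho([0.9, 0.7, 0.8, 0.4, 0.2], [1, 2, 3, 4, 5]))  # -0.9

     A strongly negative value here simply means the factor tends to be higher near position 1; the sign convention depends on how positions are coded.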
  17. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.02
    0.017789142 = product of:
      0.07115657 = sum of:
        0.07115657 = sum of:
          0.029416837 = weight(_text_:der in 3276) [ClassicSimilarity], result of:
            0.029416837 = score(doc=3276,freq=6.0), product of:
              0.098309256 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.044010527 = queryNorm
              0.29922754 = fieldWeight in 3276, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
          0.041739732 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
            0.041739732 = score(doc=3276,freq=2.0), product of:
              0.15411738 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044010527 = queryNorm
              0.2708308 = fieldWeight in 3276, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3276)
      0.25 = coord(1/4)
    
    Abstract
     In classical information retrieval, various methods were developed for ranking and for searching a homogeneous, unstructured document collection. The success of the Google search engine has shown that searching an inhomogeneous but interlinked document collection such as the web can be very effective when the connections between documents (links) are taken into account. Among the concepts realised by Google is a method for ranking search results (PageRank), which is briefly explained in this article. The article also discusses the concepts of a system called CiteSeer, which automatically indexes bibliographic references (Autonomous Citation Indexing, ACI). CiteSeer turns a set of unconnected scientific documents into an interlinked collection and thus allows ranking methods based on those used by Google to be applied. (A minimal PageRank sketch follows after this entry.)
    Date
    20. 3.2005 16:23:22
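     The abstract above points to Google's link-based PageRank. The following is a minimal, generic power-iteration sketch of that idea on a hypothetical toy graph; it is not Kanaeva's own presentation.

         def pagerank(links, damping=0.85, iterations=50):
             # links: dict mapping each node to the list of nodes it links to.
             nodes = list(links)
             n = len(nodes)
             rank = {u: 1.0 / n for u in nodes}
             for _ in range(iterations):
                 new = {u: (1.0 - damping) / n for u in nodes}
                 for u in nodes:
                     targets = links[u] or nodes  # dangling node: spread evenly
                     share = damping * rank[u] / len(targets)
                     for v in targets:
                         new[v] += share
                 rank = new
             return rank

         toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}  # hypothetical graph
         print({node: round(score, 3) for node, score in pagerank(toy).items()})

     Each node keeps a small baseline share (the 1 - damping term) and otherwise inherits rank from the pages linking to it, which is the "taking document connections into account" described in the abstract.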
  18. Daniłowicz, C.; Baliński, J.: Document ranking based upon Markov chains (2001) 0.02
    0.017183032 = product of:
      0.06873213 = sum of:
        0.06873213 = weight(_text_:j in 5388) [ClassicSimilarity], result of:
          0.06873213 = score(doc=5388,freq=2.0), product of:
            0.1398433 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.044010527 = queryNorm
            0.4914939 = fieldWeight in 5388, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.109375 = fieldNorm(doc=5388)
      0.25 = coord(1/4)
    
  19. Savoy, J.; Ndarugendamwo, M.; Vrajitoru, D.: Report on the TREC-4 experiment : combining probabilistic and vector-space schemes (1996) 0.01
    0.014728314 = product of:
      0.058913257 = sum of:
        0.058913257 = weight(_text_:j in 7574) [ClassicSimilarity], result of:
          0.058913257 = score(doc=7574,freq=2.0), product of:
            0.1398433 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.044010527 = queryNorm
            0.4212805 = fieldWeight in 7574, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.09375 = fieldNorm(doc=7574)
      0.25 = coord(1/4)
    
  20. Belkin, N.J.; Cool, C.; Koenemann, J.; Ng, K.B.; Park, S.: Using relevance feedback and ranking in interactive searching (1996) 0.01
    0.014728314 = product of:
      0.058913257 = sum of:
        0.058913257 = weight(_text_:j in 7588) [ClassicSimilarity], result of:
          0.058913257 = score(doc=7588,freq=2.0), product of:
            0.1398433 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.044010527 = queryNorm
            0.4212805 = fieldWeight in 7588, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.09375 = fieldNorm(doc=7588)
      0.25 = coord(1/4)
    

Languages

  • e 78
  • d 36
  • m 1

Types

  • a 101
  • x 7
  • m 4
  • el 3
  • r 2
  • s 1