Search (50 results, page 1 of 3)

  • theme_ss:"Retrievalalgorithmen"
  1. Fan, W.; Fox, E.A.; Pathak, P.; Wu, H.: The effects of fitness functions on genetic programming-based ranking discovery for Web search (2004) 0.02
    0.016787468 = product of:
      0.033574935 = sum of:
        0.019631049 = product of:
          0.078524195 = sum of:
            0.078524195 = weight(_text_:learning in 2239) [ClassicSimilarity], result of:
              0.078524195 = score(doc=2239,freq=6.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.51265645 = fieldWeight in 2239, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2239)
          0.25 = coord(1/4)
        0.013943886 = product of:
          0.027887773 = sum of:
            0.027887773 = weight(_text_:22 in 2239) [ClassicSimilarity], result of:
              0.027887773 = score(doc=2239,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.23214069 = fieldWeight in 2239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2239)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
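
    The tree above is Lucene ClassicSimilarity explain output: each leaf weight is tf(freq) x idf x fieldNorm, that term's queryWeight is idf x queryNorm, and the coord() factors scale the sums by the fraction of query clauses that matched. A minimal Python sketch, reproducing the 0.016787468 total from the constants above (the helper names are ours):

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # fieldWeight = tf(freq) * idf * fieldNorm, with tf = sqrt(freq)
          field_weight = math.sqrt(freq) * idf * field_norm
          # score = queryWeight * fieldWeight, with queryWeight = idf * queryNorm
          return (idf * query_norm) * field_weight

      query_norm = 0.0343058
      w_learning = term_score(6.0, 4.464877, query_norm, 0.046875)  # ~0.078524195
      w_22 = term_score(2.0, 3.5018296, query_norm, 0.046875)       # ~0.027887773
      # inner coord(1/4) and coord(1/2), outer coord(2/4) = 0.5
      total = (w_learning * 0.25 + w_22 * 0.5) * 0.5                # ~0.016787468
      print(total)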
    
    Abstract
    Genetic-based evolutionary learning algorithms, such as genetic algorithms (GAs) and genetic programming (GP), have been applied to information retrieval (IR) since the 1980s. Recently, GP has been applied to a new IR task - discovery of ranking functions for Web search - and has achieved very promising results. However, in our prior research, only one fitness function has been used for GP-based learning. It is unclear how other fitness functions may affect ranking function discovery for Web search, especially since it is well known that choosing a proper fitness function is very important for the effectiveness and efficiency of evolutionary algorithms. In this article, we report our experience in contrasting different fitness function designs on GP-based learning using a very large Web corpus. Our results indicate that the design of fitness functions is instrumental in performance improvement. We also give recommendations on the design of fitness functions for genetic-based information retrieval experiments.
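
    As a minimal sketch of the kind of fitness function being contrasted, the snippet below scores a GP individual (a candidate ranking function) by its mean average precision over training queries; the data layout and all names are hypothetical, not the paper's code:

      from typing import Callable, Dict, List

      Features = Dict[str, float]           # e.g. {"tf": ..., "idf": ...}
      RankFn = Callable[[Features], float]  # a GP individual: features -> score

      def average_precision(ranked: List[str], relevant: set) -> float:
          hits, precision_sum = 0, 0.0
          for i, doc_id in enumerate(ranked, start=1):
              if doc_id in relevant:
                  hits += 1
                  precision_sum += hits / i
          return precision_sum / max(len(relevant), 1)

      def fitness(rank_fn: RankFn,
                  queries: Dict[str, Dict[str, Features]],  # qid -> {doc_id: features}
                  qrels: Dict[str, set]) -> float:
          # fitness of an individual = mean average precision over all queries
          ap_values = []
          for qid, docs in queries.items():
              ranked = sorted(docs, key=lambda d: rank_fn(docs[d]), reverse=True)
              ap_values.append(average_precision(ranked, qrels[qid]))
          return sum(ap_values) / len(ap_values)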
    Date
    31. 5.2004 19:22:06
  2. Effektive Information Retrieval Verfahren in Theorie und Praxis : ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005 (2006) 0.01
    0.011086329 = product of:
      0.022172658 = sum of:
        0.0053428947 = product of:
          0.021371579 = sum of:
            0.021371579 = weight(_text_:learning in 5973) [ClassicSimilarity], result of:
              0.021371579 = score(doc=5973,freq=4.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.13952741 = fieldWeight in 5973, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5973)
          0.25 = coord(1/4)
        0.016829763 = product of:
          0.033659525 = sum of:
            0.033659525 = weight(_text_:lernen in 5973) [ClassicSimilarity], result of:
              0.033659525 = score(doc=5973,freq=4.0), product of:
                0.19222628 = queryWeight, product of:
                  5.6033173 = idf(docFreq=442, maxDocs=44218)
                  0.0343058 = queryNorm
                0.17510366 = fieldWeight in 5973, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.6033173 = idf(docFreq=442, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5973)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Content
    Contents:
      • Jan-Hendrik Scheufen: RECOIN: Modell offener Schnittstellen für Information-Retrieval-Systeme und -Komponenten
      • Markus Nick, Klaus-Dieter Althoff: Designing Maintainable Experience-based Information Systems
      • Gesine Quint, Steffen Weichert: Die benutzerzentrierte Entwicklung des Produkt-Retrieval-Systems EIKON der Blaupunkt GmbH
      • Claus-Peter Klas, Sascha Kriewel, André Schaefer, Gudrun Fischer: Das DAFFODIL System - Strategische Literaturrecherche in Digitalen Bibliotheken
      • Matthias Meiert: Entwicklung eines Modells zur Integration digitaler Dokumente in die Universitätsbibliothek Hildesheim
      • Daniel Harbig, René Schneider: Ontology Learning im Rahmen von MyShelf
      • Michael Kluck, Marco Winter: Topic-Entwicklung und Relevanzbewertung bei GIRT: ein Werkstattbericht
      • Thomas Mandl: Neue Entwicklungen bei den Evaluierungsinitiativen im Information Retrieval
      • Joachim Pfister: Clustering von Patent-Dokumenten am Beispiel der Datenbanken des Fachinformationszentrums Karlsruhe
      • Ralph Kölle, Glenn Langemeier, Wolfgang Semar: Programmieren lernen in kollaborativen Lernumgebungen
      • Olga Tartakovski, Margaryta Shramko: Implementierung eines Werkzeugs zur Sprachidentifikation in mono- und multilingualen Texten
      • Nina Kummer: Indexierungstechniken für das japanische Retrieval
      • Suriya Na Nhongkai, Hans-Joachim Bentz: Bilinguale Suche mittels Konzeptnetzen
      • Robert Strötgen, Thomas Mandl, René Schneider: Entwicklung und Evaluierung eines Question Answering Systems im Rahmen des Cross Language Evaluation Forum (CLEF)
      • Niels Jensen: Evaluierung von mehrsprachigem Web-Retrieval: Experimente mit dem EuroGOV-Korpus im Rahmen des Cross Language Evaluation Forum (CLEF)
    Footnote
    "Evaluation", the topic of the third chapter, is not limited in its breadth to information retrieval but also takes in individual aspects of human-computer interaction and e-learning. In their contribution, Michael Kluck and Marco Winter of the Stiftung Wissenschaft und Politik and the Informationszentrum Sozialwissenschaften address the influence of the question posed (the topic) on relevance assessment and describe the procedures for topic creation used at the Cross Language Evaluation Forum (CLEF). In the following essay, Thomas Mandl presents various evaluation initiatives in information retrieval along with current developments. Joachim Pfister explains the automated grouping, or clustering, of patent documents in the databases of the Fachinformationszentrum Karlsruhe and evaluates different clustering methods on the basis of user assessments. Ralph Kölle, Glenn Langemeier, and Wolfgang Semar turn to collaborative learning under the particular conditions of programming; here, the system VitaminL for working on programming exercises synchronously and the indicator system K-3 for assessing collaborative work are applied in a university course.
    The current research focus of information science at Hildesheim emerges in the fourth chapter under the heading "Multilingual systems", which contains the largest share of the volume's contributions. Olga Tartakovski and Margaryta Shramko describe and test the system LangIdent, which identifies the language of monolingual and multilingual texts. Nina Kummer presents the peculiarities of Japanese characters and experimentally compares the various indexing techniques. Suriya Na Nhongkai and Hans-Joachim Bentz present and test a bilingual search based on concept networks, in which the concept structure forms the link between the two text collections. The development and evaluation of a multilingual question-answering system within the Cross Language Evaluation Forum (CLEF), which allows concrete questions to be formulated in everyday language, is the subject of the contribution by Robert Strötgen, Thomas Mandl, and René Schneider. The volume closes with an essay by Niels Jensen, who evaluates a multilingual Web retrieval system, likewise in connection with CLEF, using the multilingual EuroGOV corpus.
  3. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.009295925 = product of:
      0.0371837 = sum of:
        0.0371837 = product of:
          0.0743674 = sum of:
            0.0743674 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.0743674 = score(doc=402,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  4. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    0.008133934 = product of:
      0.032535736 = sum of:
        0.032535736 = product of:
          0.06507147 = sum of:
            0.06507147 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.06507147 = score(doc=2134,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 3.2001 13:32:22
  5. Back, J.: An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.01
    0.008133934 = product of:
      0.032535736 = sum of:
        0.032535736 = product of:
          0.06507147 = sum of:
            0.06507147 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.06507147 = score(doc=3445,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    25. 8.2005 17:42:22
  6. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.01
    0.006971943 = product of:
      0.027887773 = sum of:
        0.027887773 = product of:
          0.055775546 = sum of:
            0.055775546 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.055775546 = score(doc=58,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:44
  7. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.01
    0.006971943 = product of:
      0.027887773 = sum of:
        0.027887773 = product of:
          0.055775546 = sum of:
            0.055775546 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
              0.055775546 = score(doc=2051,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.46428138 = fieldWeight in 2051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2051)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    14. 6.2015 22:12:56
  8. Li, M.; Li, H.; Zhou, Z.-H.: Semi-supervised document retrieval (2009) 0.01
    0.0062472755 = product of:
      0.024989102 = sum of:
        0.024989102 = product of:
          0.09995641 = sum of:
            0.09995641 = weight(_text_:learning in 4218) [ClassicSimilarity], result of:
              0.09995641 = score(doc=4218,freq=14.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.6525797 = fieldWeight in 4218, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4218)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    This paper proposes a new machine learning method for constructing ranking models in document retrieval. The method, referred to as SSRank, aims to combine the advantages of traditional Information Retrieval (IR) methods with those of the recently proposed supervised learning methods for IR; these advantages include the use of a limited amount of labeled data and rich model representation. To do so, the method adopts a semi-supervised learning framework for ranking model construction. Specifically, given a small number of documents labeled with respect to some queries, the method effectively labels the unlabeled documents for those queries. It then uses all the labeled data to train a machine learning model (in our case, a neural network). In the data labeling, the method also makes use of a traditional IR model (in our case, BM25). A stopping criterion based on machine learning theory is given for the data labeling process. Experimental results on three benchmark datasets and one Web search dataset indicate that SSRank consistently, and almost always significantly, outperforms the baseline methods (unsupervised and supervised learning methods) given the same amount of labeled data, because SSRank can effectively leverage unlabeled data in learning.
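
    A minimal sketch of the labeling loop described above: a BM25-style score (assumed normalized to [0, 1]) propagates labels to unlabeled query-document pairs, and the learner is then trained on the union; the threshold and all names are assumptions, not the paper's code:

      def ssrank_label(labeled, unlabeled, bm25_score, threshold=0.8):
          # labeled: list of (query, doc, relevance); unlabeled: list of (query, doc)
          auto_labeled = []
          for query, doc in unlabeled:
              s = bm25_score(query, doc)        # assumed normalized to [0, 1]
              if s >= threshold:                # confidently relevant
                  auto_labeled.append((query, doc, 1))
              elif s <= 1.0 - threshold:        # confidently non-relevant
                  auto_labeled.append((query, doc, 0))
              # in between: left unlabeled; the paper's stopping criterion
              # bounds how long the labeling process continues
          return labeled + auto_labeled         # training set for the ranker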
  9. Chen, Z.; Fu, B.: On the complexity of Rocchio's similarity-based relevance feedback algorithm (2007) 0.01
    0.005783853 = product of:
      0.023135412 = sum of:
        0.023135412 = product of:
          0.09254165 = sum of:
            0.09254165 = weight(_text_:learning in 578) [ClassicSimilarity], result of:
              0.09254165 = score(doc=578,freq=12.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.6041714 = fieldWeight in 578, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=578)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Rocchio's similarity-based relevance feedback algorithm, one of the most important query reformation methods in information retrieval, is essentially an adaptive learning algorithm from examples for searching for documents represented by a linear classifier. Despite its popularity in various applications, there is little rigorous analysis of its learning complexity in the literature. In this article, the authors prove for the first time that the learning complexity of Rocchio's algorithm is O(d + d^2(log d + log n)) over the discretized vector space {0, ..., n-1}^d when the inner product similarity measure is used. The upper bound on the learning complexity for searching for documents represented by a monotone linear classifier (q, 0) over {0, ..., n-1}^d can be improved to at most 1 + 2k(n-1)(log d + log(n-1)), where k is the number of nonzero components in q. Several lower bounds on the learning complexity are also obtained for Rocchio's algorithm. For example, the authors prove that Rocchio's algorithm has a lower bound of Omega((d choose 2) log n) on its learning complexity over the Boolean vector space {0,1}^d.
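
    For reference, the textbook form of Rocchio's update moves the query vector toward the centroid of relevant documents and away from the centroid of non-relevant ones; the parameter defaults below are conventional choices, not values from the article:

      def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
          # q' = alpha*q + beta*mean(relevant) - gamma*mean(non_relevant)
          d = len(query)
          def mean(vectors):
              if not vectors:
                  return [0.0] * d
              return [sum(v[i] for v in vectors) / len(vectors) for i in range(d)]
          r, n = mean(relevant), mean(non_relevant)
          return [alpha * query[i] + beta * r[i] - gamma * n[i] for i in range(d)]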
  10. Silva, R.M.; Gonçalves, M.A.; Veloso, A.: A two-stage active learning method for learning to rank (2014) 0.01
    0.005783853 = product of:
      0.023135412 = sum of:
        0.023135412 = product of:
          0.09254165 = sum of:
            0.09254165 = weight(_text_:learning in 1184) [ClassicSimilarity], result of:
              0.09254165 = score(doc=1184,freq=12.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.6041714 = fieldWeight in 1184, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1184)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Learning to rank (L2R) algorithms use a labeled training set to generate a ranking model that can later be used to rank new query results. These training sets are costly and laborious to produce, requiring human annotators to assess the relevance or order of the documents in relation to a query. Active learning algorithms are able to reduce the labeling effort by selectively sampling an unlabeled set and choosing data instances that maximize a learning function's effectiveness. In this article, we propose a novel two-stage active learning method for L2R that combines and exploits complementary properties of its constituent parts, making it both effective and practical. In the first stage, an association-rule active sampling algorithm is used to select a very small but effective initial training set. In the second stage, a query-by-committee strategy trained with the first-stage set is used to iteratively select more examples until a preset labeling budget is met or a target effectiveness is achieved. We test our method with various LETOR benchmarking data sets and compare it with several baselines to show that it achieves good results using only a small portion of the original training sets.
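
    A minimal sketch of the second-stage query-by-committee step: rank unlabeled instances by committee disagreement (here, vote entropy) and send the most contested ones for labeling; the committee's predict() interface is an assumption:

      import math

      def vote_entropy(votes):
          # disagreement of a committee: entropy of its vote distribution
          total = len(votes)
          entropy = 0.0
          for label in set(votes):
              p = votes.count(label) / total
              entropy -= p * math.log(p)
          return entropy

      def select_batch(committee, unlabeled, batch_size):
          # pick the instances the committee members disagree on most
          scored = [(vote_entropy([m.predict(x) for m in committee]), x)
                    for x in unlabeled]
          scored.sort(key=lambda pair: pair[0], reverse=True)
          return [x for _, x in scored[:batch_size]]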
  11. Xu, B.; Lin, H.; Lin, Y.: Assessment of learning to rank methods for query expansion (2016) 0.01
    0.005783853 = product of:
      0.023135412 = sum of:
        0.023135412 = product of:
          0.09254165 = sum of:
            0.09254165 = weight(_text_:learning in 2929) [ClassicSimilarity], result of:
              0.09254165 = score(doc=2929,freq=12.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.6041714 = fieldWeight in 2929, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2929)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Pseudo relevance feedback, as an effective query expansion method, can significantly improve information retrieval performance. However, the method may negatively impact the retrieval performance when some irrelevant terms are used in the expanded query. Therefore, it is necessary to refine the expansion terms. Learning to rank methods have proven effective in information retrieval to solve ranking problems by ranking the most relevant documents at the top of the returned list, but few attempts have been made to employ learning to rank methods for term refinement in pseudo relevance feedback. This article proposes a novel framework to explore the feasibility of using learning to rank to optimize pseudo relevance feedback by means of reranking the candidate expansion terms. We investigate some learning approaches to choose the candidate terms and introduce some state-of-the-art learning to rank methods to refine the expansion terms. In addition, we propose two term labeling strategies and examine the usefulness of various term features to optimize the framework. Experimental results with three TREC collections show that our framework can effectively improve retrieval performance.
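
    A minimal sketch of the pipeline the abstract outlines: candidate expansion terms are drawn from the top-ranked (pseudo-relevant) documents, and a learned scorer reranks them before the query is expanded; term_scorer stands in for the learning to rank model and is an assumption:

      from collections import Counter

      def candidate_terms(top_k_docs, stopwords=frozenset(), max_candidates=50):
          # top_k_docs: tokenized pseudo-relevant documents from the first pass
          counts = Counter(t for doc in top_k_docs for t in doc
                           if t not in stopwords)
          return [term for term, _ in counts.most_common(max_candidates)]

      def expand_query(query_terms, top_k_docs, term_scorer, n_expansion=10):
          # term_scorer(term, query_terms, top_k_docs) -> float (learned model)
          candidates = candidate_terms(top_k_docs)
          reranked = sorted(candidates,
                            key=lambda t: term_scorer(t, query_terms, top_k_docs),
                            reverse=True)
          return query_terms + reranked[:n_expansion]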
  12. Kwok, K.L.: A network approach to probabilistic information retrieval (1995) 0.00
    0.004907762 = product of:
      0.019631049 = sum of:
        0.019631049 = product of:
          0.078524195 = sum of:
            0.078524195 = weight(_text_:learning in 5696) [ClassicSimilarity], result of:
              0.078524195 = score(doc=5696,freq=6.0), product of:
                0.15317118 = queryWeight, product of:
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.0343058 = queryNorm
                0.51265645 = fieldWeight in 5696, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.464877 = idf(docFreq=1382, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5696)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Shows how probabilistic information retrieval based on document components may be implemented as a feedforward (feedbackward) artificial neural network. The network supports adaptation of connection weights as well as the growing of new edges between queries and terms based on user relevance feedback data for training, and it reflects query modification and expansion in information retrieval. A learning rule is applied that can also be viewed as supporting sequential learning using a harmonic sequence learning rate. Experimental results with four standard small collections and a large Wall Street Journal collection show that small query expansion levels of about 30 terms can achieve most of the gains at the low-recall, high-precision region, while larger expansion levels continue to provide gains at the high-recall, low-precision region of a precision-recall curve.
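
    A minimal sketch of weight adaptation with a harmonic sequence learning rate, where the step size for the t-th feedback event is lr0/t; the edge-weight representation is an assumption for illustration, not Kwok's actual network:

      def update_edge_weights(weights, feedback_events, lr0=1.0):
          # weights: dict mapping (query_term, doc_term) edges -> float
          # feedback_events: iterable of (edge, target) pairs from user feedback
          for t, (edge, target) in enumerate(feedback_events, start=1):
              lr = lr0 / t                      # harmonic sequence learning rate
              current = weights.get(edge, 0.0)  # a new edge grows on first feedback
              weights[edge] = current + lr * (target - current)
          return weights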
  13. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.00
    0.0046479623 = product of:
      0.01859185 = sum of:
        0.01859185 = product of:
          0.0371837 = sum of:
            0.0371837 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.0371837 = score(doc=5108,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20. 1.2007 18:30:22
  14. Faloutsos, C.: Signature files (1992) 0.00
    0.0046479623 = product of:
      0.01859185 = sum of:
        0.01859185 = product of:
          0.0371837 = sum of:
            0.0371837 = weight(_text_:22 in 3499) [ClassicSimilarity], result of:
              0.0371837 = score(doc=3499,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.30952093 = fieldWeight in 3499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3499)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    7. 5.1999 15:22:48
  15. Losada, D.E.; Barreiro, A.: Embedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.00
    0.0046479623 = product of:
      0.01859185 = sum of:
        0.01859185 = product of:
          0.0371837 = sum of:
            0.0371837 = weight(_text_:22 in 1422) [ClassicSimilarity], result of:
              0.0371837 = score(doc=1422,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.30952093 = fieldWeight in 1422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1422)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2003 19:27:23
  16. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.00
    0.0046479623 = product of:
      0.01859185 = sum of:
        0.01859185 = product of:
          0.0371837 = sum of:
            0.0371837 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.0371837 = score(doc=1431,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 8.2014 17:05:18
  17. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.00
    0.0046479623 = product of:
      0.01859185 = sum of:
        0.01859185 = product of:
          0.0371837 = sum of:
            0.0371837 = weight(_text_:22 in 1484) [ClassicSimilarity], result of:
              0.0371837 = score(doc=1484,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.30952093 = fieldWeight in 1484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1484)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    13. 9.2014 14:45:22
  18. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.00
    0.004108257 = product of:
      0.016433029 = sum of:
        0.016433029 = product of:
          0.032866057 = sum of:
            0.032866057 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
              0.032866057 = score(doc=2591,freq=4.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.27358043 = fieldWeight in 2591, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
  19. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.00
    0.004066967 = product of:
      0.016267868 = sum of:
        0.016267868 = product of:
          0.032535736 = sum of:
            0.032535736 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
              0.032535736 = score(doc=1319,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.2708308 = fieldWeight in 1319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 8.1996 22:08:06
  20. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.00
    0.004066967 = product of:
      0.016267868 = sum of:
        0.016267868 = product of:
          0.032535736 = sum of:
            0.032535736 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
              0.032535736 = score(doc=3276,freq=2.0), product of:
                0.120133065 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0343058 = queryNorm
                0.2708308 = fieldWeight in 3276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3276)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20. 3.2005 16:23:22

Languages

  • e 44
  • d 5
  • m 1

Types

  • a 46
  • m 3
  • r 1
  • s 1