Search (37 results, page 1 of 2)

  • theme_ss:"Retrievalalgorithmen"
  1. Sparck Jones, K.: A statistical interpretation of term specificity and its application in retrieval (1972) 0.03
    0.030964585 = product of:
      0.21675208 = sum of:
        0.21675208 = weight(_text_:interpretation in 5187) [ClassicSimilarity], result of:
          0.21675208 = score(doc=5187,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            1.0126086 = fieldWeight in 5187, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.125 = fieldNorm(doc=5187)
      0.14285715 = coord(1/7)
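
    The breakdown above is Lucene's ClassicSimilarity explain() output, and the same TF-IDF structure recurs in every entry below. As a cross-check, a minimal Python sketch re-deriving this first score from the values shown (the coord factor of 1/7 reflects one of seven query clauses matching):

      import math

      # Values copied from the explain tree above (term "interpretation", doc 5187).
      freq       = 2.0          # termFreq: occurrences of the term in the field
      idf        = 5.7281795    # 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 391)
      query_norm = 0.037368443  # normalizes query weights; identical across documents
      field_norm = 0.125        # field-length normalization stored at index time

      tf = math.sqrt(freq)                       # 1.4142135
      query_weight = idf * query_norm            # 0.21405315
      field_weight = tf * idf * field_norm       # 1.0126086
      term_score = query_weight * field_weight   # 0.21675208

      coord = 1 / 7   # 1 of 7 query clauses matched this document
      print(term_score * coord)                  # ~0.030964585, the listed score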
    
  2. Fuhr, N.: Rankingexperimente mit gewichteter Indexierung (1986) 0.03
    0.02526938 = product of:
      0.17688565 = sum of:
        0.17688565 = sum of:
          0.116130754 = weight(_text_:anwendung in 2051) [ClassicSimilarity], result of:
            0.116130754 = score(doc=2051,freq=2.0), product of:
              0.1809185 = queryWeight, product of:
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.037368443 = queryNorm
              0.6418954 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.8414783 = idf(docFreq=948, maxDocs=44218)
                0.09375 = fieldNorm(doc=2051)
          0.06075489 = weight(_text_:22 in 2051) [ClassicSimilarity], result of:
            0.06075489 = score(doc=2051,freq=2.0), product of:
              0.13085791 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.037368443 = queryNorm
              0.46428138 = fieldWeight in 2051, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=2051)
      0.14285715 = coord(1/7)
    
    Date
    14. 6.2015 22:12:56
    Source
    Automatische Indexierung zwischen Forschung und Anwendung, Hrsg.: G. Lustig
  3. Fox, E.; Betrabet, S.; Koushik, M.; Lee, W.: Extended Boolean models (1992) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 3512) [ClassicSimilarity], result of:
          0.11495014 = score(doc=3512,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 3512, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=3512)
      0.14285715 = coord(1/7)
    
    Abstract
    The classical interpretation of Boolean operators in an information retrieval system is in general too strict. A standard Boolean query rarely comes close to retrieving all and only those documents which are relevant to a query. Many models have been proposed with the aim of softening the interpretation of the Boolean operators in order to improve the precision and recall of the search results. This chapter discusses three such models: the Mixed Min and Max (MMM), the Paice, and the P-norm models. The MMM and Paice models are essentially variations of the classical fuzzy-set model, while the P-norm scheme is a distance-based approach. Our experimental results indicate that each of the above models provides better performance than the classical Boolean model in terms of retrieval effectiveness.
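
    As a rough illustration of the softened operators the abstract describes, a minimal Python sketch of the P-norm and MMM disjunctions over per-term document weights in [0, 1]; the exponent p = 2 and the MMM coefficients are illustrative assumptions, not values from the chapter:

      def pnorm_or(weights, p=2.0):
          # P-norm OR: distance from the all-zero point; p -> infinity
          # recovers the strict fuzzy-set max.
          return (sum(w ** p for w in weights) / len(weights)) ** (1.0 / p)

      def pnorm_and(weights, p=2.0):
          # P-norm AND: one minus the distance from the ideal point (1, ..., 1).
          return 1.0 - (sum((1.0 - w) ** p for w in weights) / len(weights)) ** (1.0 / p)

      def mmm_or(weights, c1=0.6, c2=0.4):
          # Mixed Min and Max OR: linear blend of fuzzy-set max and min.
          return c1 * max(weights) + c2 * min(weights)

      doc = [0.8, 0.1]   # weights of the two query terms in one document
      print(pnorm_or(doc), pnorm_and(doc), mmm_or(doc))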
  4. Dominich, S.; Skrop, A.: PageRank and interaction information retrieval (2005) 0.02
    0.01642145 = product of:
      0.11495014 = sum of:
        0.11495014 = weight(_text_:interpretation in 3268) [ClassicSimilarity], result of:
          0.11495014 = score(doc=3268,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5370168 = fieldWeight in 3268, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=3268)
      0.14285715 = coord(1/7)
    
    Abstract
    The PageRank method is used by the Google Web search engine to compute the importance of Web pages. Two different views have been developed for the interpretation of the PageRank method and values: (a) stochastic (random surfer): the PageRank values can be conceived as the steady-state distribution of a Markov chain, and (b) algebraic: the PageRank values form the eigenvector corresponding to eigenvalue 1 of the Web link matrix. The Interaction Information Retrieval (I²R) method is a nonclassical information retrieval paradigm, which represents a connectionist approach based on dynamic systems. In the present paper, a different interpretation of PageRank is proposed, namely, a dynamic systems viewpoint, by showing that the PageRank method can be formally interpreted as a particular case of the Interaction Information Retrieval method; and thus, the PageRank values may be interpreted as neutral equilibrium points of the Web.
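
    For the stochastic (random surfer) view mentioned in the abstract, a minimal power-iteration sketch follows; the toy graph and the damping factor d = 0.85 are illustrative assumptions, not taken from the paper:

      def pagerank(links, d=0.85, iters=50):
          pages = list(links)
          n = len(pages)
          rank = {p: 1.0 / n for p in pages}
          for _ in range(iters):
              new = {p: (1.0 - d) / n for p in pages}
              for p, outs in links.items():
                  targets = outs if outs else pages   # dangling page: spread evenly
                  for q in targets:
                      new[q] += d * rank[p] / len(targets)
              rank = new
          return rank                                 # steady-state distribution

      links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
      print(pagerank(links))   # approximates the eigenvector for eigenvalue 1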
  5. Sparck Jones, K.: A statistical interpretation of term specificity and its application in retrieval (2004) 0.01
    0.013547006 = product of:
      0.09482904 = sum of:
        0.09482904 = weight(_text_:interpretation in 4420) [ClassicSimilarity], result of:
          0.09482904 = score(doc=4420,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 4420, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4420)
      0.14285715 = coord(1/7)
    
  6. Maron, M.E.; Kuhns, J.L.: On relevance, probabilistic indexing and information retrieval (1960) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 1928) [ClassicSimilarity], result of:
          0.067735024 = score(doc=1928,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 1928, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1928)
      0.14285715 = coord(1/7)
    
    Abstract
    Reports on a novel technique for literature indexing and searching in a mechanized library system. The notion of relevance is taken as the key concept in the theory of information retrieval, and a comparative concept of relevance is explicated in terms of the theory of probability. The resulting technique, called 'probabilistic indexing', allows a computing machine, given a request for information, to make a statistical inference and derive a number (called the 'relevance number') for each document, which is a measure of the probability that the document will satisfy the given request. The result of a search is an ordered list of those documents which satisfy the request, ranked according to their probable relevance. The paper goes on to show that whereas in a conventional library system the cross-referencing ('see' and 'see also') is based solely on the 'semantic closeness' between index terms, statistical measures of closeness between index terms can be defined and computed. Thus, given an arbitrary request consisting of one (or many) index term(s), a machine can elaborate on it to increase the probability of selecting relevant documents that would not otherwise have been selected. Finally, the paper suggests an interpretation of the whole library problem as one where the request is considered as a clue on the basis of which the library system makes a concatenated statistical inference in order to provide as an output an ordered list of those documents which most probably satisfy the information needs of the user.
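
    In the spirit of the 'relevance number' described above, a minimal sketch that ranks documents by P(D) · P(request | D), where each term probability is the indexer-assigned chance that a user wanting D would phrase the request with that term; all priors and probabilities below are invented for illustration:

      index = {
          "doc1": {"prior": 0.5, "terms": {"retrieval": 0.9, "indexing": 0.3}},
          "doc2": {"prior": 0.3, "terms": {"retrieval": 0.4, "indexing": 0.8}},
          "doc3": {"prior": 0.2, "terms": {"relevance": 0.7}},
      }

      def relevance_numbers(request):
          # Score each document by prior * product of per-term probabilities.
          scores = {}
          for doc, rec in index.items():
              p = rec["prior"]
              for term in request:
                  p *= rec["terms"].get(term, 0.0)   # unindexed term: probability 0
              scores[doc] = p
          # The search result: documents ordered by probable relevance.
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      print(relevance_numbers(["retrieval"]))   # [('doc1', 0.45), ('doc2', 0.12), ('doc3', 0.0)]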
  7. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.01
    0.005786181 = product of:
      0.040503263 = sum of:
        0.040503263 = product of:
          0.08100653 = sum of:
            0.08100653 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.08100653 = score(doc=402,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  8. Smeaton, A.F.; Rijsbergen, C.J. van: The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    0.005062908 = product of:
      0.035440356 = sum of:
        0.035440356 = product of:
          0.07088071 = sum of:
            0.07088071 = weight(_text_:22 in 2134) [ClassicSimilarity], result of:
              0.07088071 = score(doc=2134,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.5416616 = fieldWeight in 2134, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2134)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    30. 3.2001 13:32:22
  9. Back, J.: ¬An evaluation of relevancy ranking techniques used by Internet search engines (2000) 0.01
    0.005062908 = product of:
      0.035440356 = sum of:
        0.035440356 = product of:
          0.07088071 = sum of:
            0.07088071 = weight(_text_:22 in 3445) [ClassicSimilarity], result of:
              0.07088071 = score(doc=3445,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.5416616 = fieldWeight in 3445, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    25. 8.2005 17:42:22
  10. Fuhr, N.: Ranking-Experimente mit gewichteter Indexierung (1986) 0.00
    0.0043396354 = product of:
      0.030377446 = sum of:
        0.030377446 = product of:
          0.06075489 = sum of:
            0.06075489 = weight(_text_:22 in 58) [ClassicSimilarity], result of:
              0.06075489 = score(doc=58,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.46428138 = fieldWeight in 58, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=58)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    14. 6.2015 22:12:44
  11. Marcus, S.: Textvergleich mit mehreren Mustern (2005) 0.00
    0.0039103264 = product of:
      0.027372282 = sum of:
        0.027372282 = product of:
          0.054744564 = sum of:
            0.054744564 = weight(_text_:anwendung in 862) [ClassicSimilarity], result of:
              0.054744564 = score(doc=862,freq=4.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.3025924 = fieldWeight in 862, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.03125 = fieldNorm(doc=862)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Pattern matching is highly relevant in many scientific fields, and because its areas of application differ widely, its implementation and use vary accordingly. The task inherent in every application of pattern matching is to recognize particular patterns within a large volume of input data, as the German term Mustererkennung (pattern recognition) suggests. In medicine, pattern matching is used, for example, to examine chromosome strands for particular chromosome sequences. In image processing, it can be used to compare entire images or to examine individual pixels identified by a pattern. A further field of application is information retrieval, where stored data are searched for relevant information; here, too, the relevance of the data sought is judged against a pattern, for example a particular keyword. A comparable procedure is used on the Internet: users who search for significant information via a search engine obtain it through a pattern-matching automaton. The demands placed on this automaton vary with the query submitted to the search engine: in the simplest case the query consists of exactly one keyword, while in the more complex case it contains several keywords, and a successful search then requires concatenating the words contained in the query. Chapter 2 of this thesis opens with a comprehensive introduction to text comparison and defines several fundamental terms. Chapter 3 then presents methods for comparing text against several patterns, first explaining a simple approach to ease entry into the topic, and then presenting a more complex method of text comparison, illustrated with examples (see the sketch below).
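
    A pattern-matching automaton for several keywords, as mentioned in the abstract, can be sketched compactly with the Aho-Corasick construction; this is one standard way to build such an automaton, not necessarily the method the thesis presents:

      from collections import deque

      def build_automaton(patterns):
          goto = [{}]       # per-state transitions: {character: next state}
          out = [set()]     # patterns recognized when a state is reached
          for pat in patterns:                  # step 1: build the keyword trie
              state = 0
              for ch in pat:
                  if ch not in goto[state]:
                      goto.append({})
                      out.append(set())
                      goto[state][ch] = len(goto) - 1
                  state = goto[state][ch]
              out[state].add(pat)
          fail = [0] * len(goto)                # step 2: failure links via BFS
          queue = deque(goto[0].values())
          while queue:
              state = queue.popleft()
              for ch, nxt in goto[state].items():
                  queue.append(nxt)
                  f = fail[state]
                  while f and ch not in goto[f]:
                      f = fail[f]
                  fail[nxt] = goto[f].get(ch, 0)
                  out[nxt] |= out[fail[nxt]]    # inherit matches ending here
          return goto, fail, out

      def search(text, patterns):
          goto, fail, out = build_automaton(patterns)
          state, hits = 0, []
          for i, ch in enumerate(text):
              while state and ch not in goto[state]:
                  state = fail[state]           # fall back on mismatch
              state = goto[state].get(ch, 0)
              for pat in out[state]:
                  hits.append((i - len(pat) + 1, pat))
          return hits

      print(search("automatische indexierung und retrieval",
                   ["index", "retrieval", "rank"]))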
  12. Fuhr, N.: Modelle im Information Retrieval (2023) 0.00
    0.0034562727 = product of:
      0.024193907 = sum of:
        0.024193907 = product of:
          0.048387814 = sum of:
            0.048387814 = weight(_text_:anwendung in 800) [ClassicSimilarity], result of:
              0.048387814 = score(doc=800,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2674564 = fieldWeight in 800, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=800)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Information retrieval models (IR models) specify how, for a given query, the answer documents are determined from a document collection. The starting point of every model is a set of assumptions about the knowledge representation (see Part B, Methods and Systems of Subject Indexing) of queries and documents. Here we call the elements of these representations terms; from the model's point of view it is irrelevant how these terms are derived from the document (and, analogously, from the query entered by the user): for texts, computational-linguistic methods are frequently employed, but more complex automatic or manual indexing procedures can also be applied. Representations furthermore have a particular structure. A document is usually treated as a set or multiset of terms, where in the latter case multiple occurrences are taken into account. This document representation is in turn mapped onto a so-called document description, in which the individual terms may be weighted. In what follows we distinguish only between unweighted indexing (the weight of a term is either 0 or 1) and weighted indexing (the weight is a non-negative real number). There is an analogous query representation; if a natural-language query is assumed, the procedures mentioned above for document texts can be applied. Alternatively, graphical or formal query languages are used, where from the models' point of view their logical structure in particular (as in Boolean retrieval, for instance) is relevant. The query representation is then transformed into a query description.
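
    A minimal sketch of the representations the abstract distinguishes (set vs. multiset of terms, mapped onto an unweighted or weighted document description); simple whitespace tokenization stands in for the linguistic methods mentioned, and all identifiers are illustrative:

      from collections import Counter

      def terms(text):
          return text.lower().split()   # placeholder for real term derivation

      doc = "ranking models rank documents by ranking functions"

      multiset = Counter(terms(doc))    # multiset: multiple occurrences counted
      termset = set(multiset)           # set: occurrences ignored

      # Document descriptions derived from the representation:
      unweighted = {t: 1 for t in termset}                     # weight in {0, 1}
      total = sum(multiset.values())
      weighted = {t: f / total for t, f in multiset.items()}   # non-negative reals

      print(multiset["ranking"], unweighted["ranking"], weighted["ranking"])
      # -> 2  1  0.2857...  (2 of 7 tokens)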
  13. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.00
    0.0028930905 = product of:
      0.020251632 = sum of:
        0.020251632 = product of:
          0.040503263 = sum of:
            0.040503263 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.040503263 = score(doc=5108,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    20. 1.2007 18:30:22
  14. Faloutsos, C.: Signature files (1992) 0.00
    0.0028930905 = product of:
      0.020251632 = sum of:
        0.020251632 = product of:
          0.040503263 = sum of:
            0.040503263 = weight(_text_:22 in 3499) [ClassicSimilarity], result of:
              0.040503263 = score(doc=3499,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.30952093 = fieldWeight in 3499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3499)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    7. 5.1999 15:22:48
  15. Losada, D.E.; Barreiro, A.: Embedding term similarity and inverse document frequency into a logical model of information retrieval (2003) 0.00
    0.0028930905 = product of:
      0.020251632 = sum of:
        0.020251632 = product of:
          0.040503263 = sum of:
            0.040503263 = weight(_text_:22 in 1422) [ClassicSimilarity], result of:
              0.040503263 = score(doc=1422,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.30952093 = fieldWeight in 1422, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1422)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2003 19:27:23
  16. Bornmann, L.; Mutz, R.: From P100 to P100' : a new citation-rank approach (2014) 0.00
    0.0028930905 = product of:
      0.020251632 = sum of:
        0.020251632 = product of:
          0.040503263 = sum of:
            0.040503263 = weight(_text_:22 in 1431) [ClassicSimilarity], result of:
              0.040503263 = score(doc=1431,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.30952093 = fieldWeight in 1431, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1431)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 8.2014 17:05:18
  17. Tober, M.; Hennig, L.; Furch, D.: SEO Ranking-Faktoren und Rang-Korrelationen 2014 : Google Deutschland (2014) 0.00
    0.0028930905 = product of:
      0.020251632 = sum of:
        0.020251632 = product of:
          0.040503263 = sum of:
            0.040503263 = weight(_text_:22 in 1484) [ClassicSimilarity], result of:
              0.040503263 = score(doc=1484,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.30952093 = fieldWeight in 1484, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1484)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    13. 9.2014 14:45:22
  18. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.00
    0.0025571547 = product of:
      0.017900083 = sum of:
        0.017900083 = product of:
          0.035800166 = sum of:
            0.035800166 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
              0.035800166 = score(doc=2591,freq=4.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.27358043 = fieldWeight in 2591, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
  19. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.00
    0.002531454 = product of:
      0.017720178 = sum of:
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
              0.035440356 = score(doc=1319,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 1319, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    1. 8.1996 22:08:06
  20. Kanaeva, Z.: Ranking: Google und CiteSeer (2005) 0.00
    0.002531454 = product of:
      0.017720178 = sum of:
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 3276) [ClassicSimilarity], result of:
              0.035440356 = score(doc=3276,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 3276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3276)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    20. 3.2005 16:23:22

Languages

  • e 30
  • d 6
  • m 1

Types

  • a 33
  • m 2
  • r 1
  • s 1
  • x 1