Search (24 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.43
    0.43398947 = product of:
      0.60758525 = sum of:
        0.059242435 = product of:
          0.1777273 = sum of:
            0.1777273 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.1777273 = score(doc=562,freq=2.0), product of:
                0.3162306 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03730009 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.1777273 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.1777273 = score(doc=562,freq=2.0), product of:
            0.3162306 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03730009 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.1777273 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.1777273 = score(doc=562,freq=2.0), product of:
            0.3162306 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03730009 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.1777273 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.1777273 = score(doc=562,freq=2.0), product of:
            0.3162306 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03730009 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.015160941 = product of:
          0.030321881 = sum of:
            0.030321881 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.030321881 = score(doc=562,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.71428573 = coord(5/7)
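     The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. The short Python sketch below reproduces its key numbers, reusing the docFreq, maxDocs, queryNorm, fieldNorm and coord values printed above; the helper names are illustrative and not part of Lucene's API.

```python
import math

# Minimal sketch of Lucene ClassicSimilarity (TF-IDF) arithmetic for document 562.
# docFreq, maxDocs, queryNorm, fieldNorm and the coord factors are copied from the
# explain output above; only the helper names are invented here.

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(freq: float, idf_t: float, field_norm: float) -> float:
    """fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)."""
    return math.sqrt(freq) * idf_t * field_norm

idf_term = idf(24, 44218)                    # ~8.478011 for the "_text_:3a" / "_text_:2f" terms
query_norm = 0.03730009                      # taken from the explain output
query_weight = idf_term * query_norm         # ~0.3162306
fw = field_weight(2.0, idf_term, 0.046875)   # ~0.56201804
term_score = query_weight * fw               # ~0.1777273 per matching term

# Document score: inner sum of the clause scores (with their inner coord factors),
# multiplied by the outer coord(5/7) because 5 of 7 query clauses matched.
inner_sum = term_score / 3 + 3 * term_score + 0.030321881 / 2   # ~0.60758525
doc_score = inner_sum * (5 / 7)                                 # ~0.43398947
print(round(doc_score, 8))
```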
    
    Content
     Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  2. Chae, G.; Park, J.; Park, J.; Yeo, W.S.; Shi, C.: Linking and clustering artworks using social tags : revitalizing crowd-sourced information on cultural collections (2016) 0.02
    0.015063827 = product of:
      0.105446786 = sum of:
        0.105446786 = weight(_text_:interpretations in 2852) [ClassicSimilarity], result of:
          0.105446786 = score(doc=2852,freq=2.0), product of:
            0.26682967 = queryWeight, product of:
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.03730009 = queryNorm
            0.3951839 = fieldWeight in 2852, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.1535926 = idf(docFreq=93, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2852)
      0.14285715 = coord(1/7)
    
    Abstract
    Social tagging is one of the most popular methods for collecting crowd-sourced information in galleries, libraries, archives, and museums (GLAMs). However, when the number of social tags grows rapidly, using them becomes problematic and, as a result, they are often left as simply big data that cannot be used for practical purposes. To revitalize the use of this crowd-sourced information, we propose using social tags to link and cluster artworks based on an experimental study using an online collection at the Gyeonggi Museum of Modern Art (GMoMA). We view social tagging as a folksonomy, where artworks are classified by keywords of the crowd's various interpretations and one artwork can belong to several different categories simultaneously. To leverage this strength of social tags, we used a clustering method called "link communities" to detect overlapping communities in a network of artworks constructed by computing similarities between all artwork pairs. We used this framework to identify semantic relationships and clusters of similar artworks. By comparing the clustering results with curators' manual classification results, we demonstrated the potential of social tagging data for automatically clustering artworks in a way that reflects the dynamic perspectives of crowds.
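     A minimal sketch of the network-construction step this abstract describes: artworks are represented by their tag sets, all pairwise similarities are computed, and pairs above a threshold become edges of the artwork network on which an overlapping (link-communities) clustering would then run. The Jaccard measure, the threshold and the toy data are illustrative assumptions, not details taken from the paper.

```python
from itertools import combinations

# Build a weighted artwork similarity network from social-tag sets.
# Toy data, Jaccard similarity and the 0.2 threshold are assumptions for illustration.
artworks = {
    "artwork_a": {"landscape", "oil", "mountain"},
    "artwork_b": {"landscape", "watercolor", "river"},
    "artwork_c": {"portrait", "oil"},
}

def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

threshold = 0.2
edges = [
    (u, v, jaccard(artworks[u], artworks[v]))
    for u, v in combinations(artworks, 2)
    if jaccard(artworks[u], artworks[v]) >= threshold
]
print(edges)  # weighted edges; an overlapping community detection would run on these
```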
  3. Schulze, U.: Erfahrungen bei der Anwendung automatischer Klassifizierungsverfahren zur Inhaltsanalyse einer Dokumentenmenge (1978) 0.01
    0.009560784 = product of:
      0.06692549 = sum of:
        0.06692549 = product of:
          0.13385098 = sum of:
            0.13385098 = weight(_text_:anwendung in 83) [ClassicSimilarity], result of:
              0.13385098 = score(doc=83,freq=6.0), product of:
                0.18058759 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.03730009 = queryNorm
                0.741197 = fieldWeight in 83, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0625 = fieldNorm(doc=83)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
     The document set underlying the analysis consists of 1,000 decisions of the German Federal Constitutional Court, whose full texts were available in machine-readable form. The paper presents the application of an iterative centroid method to about 1,000 words and the application of a single-linkage method in a non-hierarchical variant, as well as graph-theory-based methods, the use of different similarity functions, and their influence on the results.
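     A minimal sketch of single-linkage clustering over document term vectors, as a rough stand-in for the procedures summarized above; the random data, the cosine metric and the distance cut-off are illustrative assumptions, not details of the 1978 study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Cluster documents represented as term vectors with single linkage.
# The random vectors and the 0.5 distance cut-off are illustrative assumptions.
rng = np.random.default_rng(0)
doc_vectors = rng.random((10, 20))           # 10 documents, 20 term features

Z = linkage(doc_vectors, method="single", metric="cosine")
labels = fcluster(Z, t=0.5, criterion="distance")   # flat clusters at distance 0.5
print(labels)
```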
  4. Panyr, J.: Automatische Klassifikation und Information Retrieval : Anwendung und Entwicklung komplexer Verfahren in Information-Retrieval-Systemen und ihre Evaluierung (1986) 0.01
    0.008279882 = product of:
      0.057959173 = sum of:
        0.057959173 = product of:
          0.115918346 = sum of:
            0.115918346 = weight(_text_:anwendung in 32) [ClassicSimilarity], result of:
              0.115918346 = score(doc=32,freq=2.0), product of:
                0.18058759 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.03730009 = queryNorm
                0.6418954 = fieldWeight in 32, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.09375 = fieldNorm(doc=32)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  5. Panyr, J.: Vektorraum-Modell und Clusteranalyse in Information-Retrieval-Systemen (1987) 0.01
    0.0055199214 = product of:
      0.03863945 = sum of:
        0.03863945 = product of:
          0.0772789 = sum of:
            0.0772789 = weight(_text_:anwendung in 2322) [ClassicSimilarity], result of:
              0.0772789 = score(doc=2322,freq=2.0), product of:
                0.18058759 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.03730009 = queryNorm
                0.42793027 = fieldWeight in 2322, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2322)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
     Starting from theoretical indexing approaches, the classical vector space model for automatic indexing (together with the term discrimination model) is explained. Clustering in information retrieval systems is understood as a natural logical consequence of this model and is treated in all its variants (i.e. as document, term, or combined document and term classification). Search strategies in pre-classified document collections (cluster search) are then described in detail. Finally, the sensible application of cluster analysis in information retrieval systems is briefly discussed.
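     A minimal sketch of the cluster-search idea summarized above: documents live in a term vector space, each precomputed cluster is represented by its centroid, and a query is first matched against the centroids before ranking the documents inside the best cluster. The toy vectors and the use of cosine similarity are illustrative assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy term vectors and a precomputed two-cluster partition (assumptions for illustration).
doc_vectors = np.array([[3, 0, 1], [2, 0, 2], [0, 4, 1], [0, 3, 2]], dtype=float)
clusters = {0: [0, 1], 1: [2, 3]}
centroids = {c: doc_vectors[idx].mean(axis=0) for c, idx in clusters.items()}

# Cluster search: pick the best-matching centroid, then rank only that cluster's documents.
query = np.array([1, 0, 1], dtype=float)
best = max(centroids, key=lambda c: cosine(query, centroids[c]))
ranking = sorted(clusters[best], key=lambda d: cosine(query, doc_vectors[d]), reverse=True)
print(best, ranking)
```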
  6. Bollmann, P.; Konrad, E.; Schneider, H.-J.; Zuse, H.: Anwendung automatischer Klassifikationsverfahren mit dem System FAKYR (1978) 0.01
    0.0055199214 = product of:
      0.03863945 = sum of:
        0.03863945 = product of:
          0.0772789 = sum of:
            0.0772789 = weight(_text_:anwendung in 82) [ClassicSimilarity], result of:
              0.0772789 = score(doc=82,freq=2.0), product of:
                0.18058759 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.03730009 = queryNorm
                0.42793027 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0625 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  7. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.00
    0.0043316977 = product of:
      0.030321881 = sum of:
        0.030321881 = product of:
          0.060643762 = sum of:
            0.060643762 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.060643762 = score(doc=1046,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    5. 5.2003 14:17:22
  8. Puzicha, J.: Informationen finden! : Intelligente Suchmaschinentechnologie & automatische Kategorisierung (2007) 0.00
    0.004139941 = product of:
      0.028979586 = sum of:
        0.028979586 = product of:
          0.057959173 = sum of:
            0.057959173 = weight(_text_:anwendung in 2817) [ClassicSimilarity], result of:
              0.057959173 = score(doc=2817,freq=2.0), product of:
                0.18058759 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.03730009 = queryNorm
                0.3209477 = fieldWeight in 2817, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2817)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
     As explained in this text, the effectiveness of search and classification systems is determined by: 1) the task at hand, 2) the accuracy of the system, 3) the degree of automation to be achieved, and 4) the ease of integration into existing systems. These criteria assume that every system, regardless of its technology, can meet the basic product requirements with respect to functionality, scalability and input method. These product characteristics are described in more detail in the Recommind product literature. Building on these capabilities, however, the preceding discussion should have revealed some clear trends. It is not surprising that recent developments in machine learning and other areas of computer science provide a theoretical starting point for the development of search engine and classification technology. In particular, recent advances in statistical methods (PLSA) and other mathematical tools (SVMs) have achieved breakthrough-level result quality. Added to this are the flexibility of PLSA systems in practical use through self-training and category recognition, as well as a new generation of previously unattained productivity improvements.
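     A minimal sketch of SVM-based text categorization of the kind referred to above, using scikit-learn; it does not reflect Recommind's proprietary PLSA/SVM implementation, and the toy training data and the TF-IDF plus linear-SVM pipeline are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Train a simple TF-IDF + linear SVM categorizer on toy documents (assumed data).
train_texts = [
    "invoice payment due accounting",
    "quarterly revenue financial report",
    "server outage network incident",
    "database backup failed error log",
]
train_labels = ["finance", "finance", "it", "it"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)
print(model.predict(["annual financial statement", "network error on server"]))
```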
  9. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.00
    0.003609748 = product of:
      0.025268236 = sum of:
        0.025268236 = product of:
          0.050536472 = sum of:
            0.050536472 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.050536472 = score(doc=611,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 8.2009 12:54:24
  10. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.00
    0.003609748 = product of:
      0.025268236 = sum of:
        0.025268236 = product of:
          0.050536472 = sum of:
            0.050536472 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.050536472 = score(doc=2748,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    1. 2.2016 18:25:22
  11. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.00
    0.0025268234 = product of:
      0.017687764 = sum of:
        0.017687764 = product of:
          0.035375528 = sum of:
            0.035375528 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.035375528 = score(doc=141,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Pages
    S.1-22
  12. Dubin, D.: Dimensions and discriminability (1998) 0.00
    0.0025268234 = product of:
      0.017687764 = sum of:
        0.017687764 = product of:
          0.035375528 = sum of:
            0.035375528 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.035375528 = score(doc=2338,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 9.1997 19:16:05
  13. Automatic classification research at OCLC (2002) 0.00
    0.0025268234 = product of:
      0.017687764 = sum of:
        0.017687764 = product of:
          0.035375528 = sum of:
            0.035375528 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.035375528 = score(doc=1563,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    5. 5.2003 9:22:09
  14. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.00
    0.0025268234 = product of:
      0.017687764 = sum of:
        0.017687764 = product of:
          0.035375528 = sum of:
            0.035375528 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.035375528 = score(doc=1673,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    1. 8.1996 22:08:06
  15. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.00
    0.0025268234 = product of:
      0.017687764 = sum of:
        0.017687764 = product of:
          0.035375528 = sum of:
            0.035375528 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.035375528 = score(doc=5273,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2006 16:24:52
  16. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.00
    0.0025268234 = product of:
      0.017687764 = sum of:
        0.017687764 = product of:
          0.035375528 = sum of:
            0.035375528 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.035375528 = score(doc=2560,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 9.2008 18:31:54
  17. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.00
    0.0021658489 = product of:
      0.015160941 = sum of:
        0.015160941 = product of:
          0.030321881 = sum of:
            0.030321881 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.030321881 = score(doc=2760,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2009 19:11:54
  18. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.00
    0.0021658489 = product of:
      0.015160941 = sum of:
        0.015160941 = product of:
          0.030321881 = sum of:
            0.030321881 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.030321881 = score(doc=3051,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 8.2009 19:51:28
  19. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.00
    0.0021658489 = product of:
      0.015160941 = sum of:
        0.015160941 = product of:
          0.030321881 = sum of:
            0.030321881 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.030321881 = score(doc=690,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    23. 3.2013 13:22:36
  20. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.00
    0.0021658489 = product of:
      0.015160941 = sum of:
        0.015160941 = product of:
          0.030321881 = sum of:
            0.030321881 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.030321881 = score(doc=2158,freq=2.0), product of:
                0.13061856 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03730009 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    4. 8.2015 19:22:04