Search (26 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.43
    0.43478474 = product of:
      0.6086986 = sum of:
        0.059350993 = product of:
          0.17805298 = sum of:
            0.17805298 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.17805298 = score(doc=562,freq=2.0), product of:
                0.31681007 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.037368443 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.17805298 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.17805298 = score(doc=562,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.17805298 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.17805298 = score(doc=562,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.17805298 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.17805298 = score(doc=562,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.015188723 = product of:
          0.030377446 = sum of:
            0.030377446 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.030377446 = score(doc=562,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.71428573 = coord(5/7)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
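    The score breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and each matching clause contributes queryWeight * fieldWeight before coord() scales the sum by the fraction of matching clauses. A minimal sketch that recomputes the 0.17805298 clause from the constants shown (plain Java for illustration, not the Lucene API):

      // Recompute one ClassicSimilarity clause from the explain tree above.
      // All constants are copied from the explanation for doc 562.
      public class ClassicSimilaritySketch {
          static double tf(double freq) { return Math.sqrt(freq); }
          static double idf(long docFreq, long maxDocs) {
              return 1.0 + Math.log(maxDocs / (double) (docFreq + 1));
          }
          public static void main(String[] args) {
              double idfVal = idf(24, 44218);                    // 8.478011
              double queryNorm = 0.037368443;
              double fieldNorm = 0.046875;                       // fieldNorm(doc=562)
              double queryWeight = idfVal * queryNorm;           // 0.31681007
              double fieldWeight = tf(2.0) * idfVal * fieldNorm; // 0.56201804
              System.out.println(queryWeight * fieldWeight);     // ~0.17805298
          }
      }

    Note that the _text_:3a and _text_:2f clauses match the percent-encoded "%3A" and "%2F" tokens of the URL in this record's Content field; their high idf reflects how rare those tokens are in the index.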
  2. Panyr, J.: Automatische thematische Textklassifikation und ihre Interpretation in der Dokumentengrobrecherche (1980) 0.01
    0.013547006 = product of:
      0.09482904 = sum of:
        0.09482904 = weight(_text_:interpretation in 100) [ClassicSimilarity], result of:
          0.09482904 = score(doc=100,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 100, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=100)
      0.14285715 = coord(1/7)
    
  3. Yoon, Y.; Lee, G.G.: Efficient implementation of associative classifiers for document classification (2007) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 909) [ClassicSimilarity], result of:
          0.081282035 = score(doc=909,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=909)
      0.14285715 = coord(1/7)
    
    Abstract
    In practical text classification tasks, the ability to interpret the classification result is as important as the ability to classify accurately. Associative classifiers have many favorable characteristics such as rapid training, good classification accuracy, and excellent interpretability. However, associative classifiers also face obstacles when applied to text classification: the target text collection generally has very high dimensionality, so training can take a very long time. We propose a feature selection method based on the mutual information between the word and class variables to reduce the dimensionality of the associative classifiers. In addition, training an associative classifier produces a huge number of classification rules, which makes prediction for a new document inefficient. We resolve this by introducing a new, efficient method for storing and pruning classification rules, which can also be used when predicting a test document. Experimental results on the 20-newsgroups dataset show many benefits of associative classification in both training and prediction when applied to a real-world problem.
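    A minimal sketch of the feature selection idea from this abstract: score each word by the mutual information between a binary word-occurrence variable and the class variable, then keep the top-ranked words. The 2x2 count layout and the toy numbers are assumptions for illustration, not values from the paper:

      // Mutual information I(W;C) between word presence and class membership,
      // estimated from document counts; rank words by this score, keep the top k.
      public class MutualInformationSelect {
          // n[w][c]: document counts; w = word absent/present, c = other/target class.
          static double mi(double[][] n) {
              double total = n[0][0] + n[0][1] + n[1][0] + n[1][1];
              double[] pw = { (n[0][0] + n[0][1]) / total, (n[1][0] + n[1][1]) / total };
              double[] pc = { (n[0][0] + n[1][0]) / total, (n[0][1] + n[1][1]) / total };
              double mi = 0.0;
              for (int w = 0; w <= 1; w++)
                  for (int c = 0; c <= 1; c++) {
                      double pwc = n[w][c] / total;
                      if (pwc > 0) mi += pwc * Math.log(pwc / (pw[w] * pc[c]));
                  }
              return mi; // in nats
          }
          public static void main(String[] args) {
              // Toy counts: word present in 80 of 100 target-class docs, 5 of 900 others.
              double[][] n = { { 895, 20 }, { 5, 80 } };
              System.out.println(mi(n));
          }
      }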
  4. Piros, A.: Automatic interpretation of complex UDC numbers : towards support for library systems (2015) 0.01
    0.010947634 = product of:
      0.07663343 = sum of:
        0.07663343 = weight(_text_:interpretation in 2301) [ClassicSimilarity], result of:
          0.07663343 = score(doc=2301,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.35801122 = fieldWeight in 2301, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.03125 = fieldNorm(doc=2301)
      0.14285715 = coord(1/7)
    
    Abstract
    Analytico-synthetic and faceted classifications, such as Universal Decimal Classification (UDC) express content of documents with complex, pre-combined classification codes. Without classification authority control that would help manage and access structured notations, the use of UDC codes in searching and browsing is limited. Existing UDC parsing solutions are usually created for a particular database system or a specific task and are not widely applicable. The approach described in this paper provides a solution by which the analysis and interpretation of UDC notations would be stored into an intermediate format (in this case, in XML) by automatic means without any data or information loss. Due to its richness, the output file can be converted into different formats, such as standard mark-up and data exchange formats or simple lists of the recommended entry points of a UDC number. The program can also be used to create authority records containing complex UDC numbers which can be comprehensively analysed in order to be retrieved effectively. The Java program, as well as the corresponding schema definition it employs, is under continuous development. The current version of the interpreter software is now available online for testing purposes at the following web site: http://interpreter-eto.rhcloud.com. The future plan is to implement conversion methods for standard formats and to create standard online interfaces in order to make it possible to use the features of software as a service. This would result in the algorithm being able to be employed both in existing and future library systems to analyse UDC numbers without any significant programming effort.
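    The interpreter itself is not reproduced here; as a rough illustration of the core idea (pre-combined notation -> lossless intermediate XML), here is a sketch that splits a UDC string at relation and coordination connectors. The grammar is drastically simplified and the XML element names are invented for this example; real UDC syntax (auxiliaries, ranges, grouping) is far richer:

      // Split a pre-combined UDC notation at relation (":") and coordination ("+")
      // connectors and emit a simple XML rendering. Illustrative only.
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class UdcSketch {
          static String toXml(String notation) {
              StringBuilder xml = new StringBuilder("<udc notation=\"" + notation + "\">\n");
              Matcher m = Pattern.compile("([0-9.]+)|([:+])").matcher(notation);
              while (m.find()) {
                  if (m.group(1) != null)
                      xml.append("  <number>").append(m.group(1)).append("</number>\n");
                  else
                      xml.append("  <connector symbol=\"").append(m.group(2)).append("\"/>\n");
              }
              return xml.append("</udc>").toString();
          }
          public static void main(String[] args) {
              System.out.println(toXml("025.4:004.8")); // subject indexing : AI
          }
      }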
  5. Schulze, U.: Erfahrungen bei der Anwendung automatischer Klassifizierungsverfahren zur Inhaltsanalyse einer Dokumentenmenge (1978) 0.01
    0.009578304 = product of:
      0.067048125 = sum of:
        0.067048125 = product of:
          0.13409625 = sum of:
            0.13409625 = weight(_text_:anwendung in 83) [ClassicSimilarity], result of:
              0.13409625 = score(doc=83,freq=6.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.741197 = fieldWeight in 83, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0625 = fieldNorm(doc=83)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    The document set underlying the analysis consists of 1,000 decisions of the German Federal Constitutional Court (Bundesverfassungsgericht), whose full texts were available in machine-readable form. The paper presents the application of an iterative centroid method to a vocabulary of about 1,000 words and the application of a single-linkage method in a non-hierarchical variant, as well as graph-theoretic methods, the use of different similarity functions, and their influence on the results.
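    A minimal sketch of an iterative centroid pass like the one mentioned above (assign each document vector to its nearest centroid, recompute the centroids, repeat); the dense vector layout and the fixed round count are assumptions for illustration:

      // One k-means-style iterative centroid procedure over document vectors.
      public class CentroidClustering {
          // Squared Euclidean distance between two term-weight vectors.
          static double dist(double[] a, double[] b) {
              double s = 0;
              for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
              return s;
          }
          // docs[d][t]: weight of term t in document d; centroids seeded by the caller.
          static int[] iterate(double[][] docs, double[][] centroids, int rounds) {
              int[] assign = new int[docs.length];
              for (int r = 0; r < rounds; r++) {
                  for (int d = 0; d < docs.length; d++) {      // assignment step
                      int best = 0;
                      for (int c = 1; c < centroids.length; c++)
                          if (dist(docs[d], centroids[c]) < dist(docs[d], centroids[best]))
                              best = c;
                      assign[d] = best;
                  }
                  for (int c = 0; c < centroids.length; c++) { // update step
                      double[] sum = new double[docs[0].length];
                      int count = 0;
                      for (int d = 0; d < docs.length; d++)
                          if (assign[d] == c) {
                              count++;
                              for (int t = 0; t < sum.length; t++) sum[t] += docs[d][t];
                          }
                      if (count > 0)
                          for (int t = 0; t < sum.length; t++) centroids[c][t] = sum[t] / count;
                  }
              }
              return assign;
          }
      }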
  6. Panyr, J.: Automatische Klassifikation und Information Retrieval : Anwendung und Entwicklung komplexer Verfahren in Information-Retrieval-Systemen und ihre Evaluierung (1986) 0.01
    0.008295055 = product of:
      0.058065377 = sum of:
        0.058065377 = product of:
          0.116130754 = sum of:
            0.116130754 = weight(_text_:anwendung in 32) [ClassicSimilarity], result of:
              0.116130754 = score(doc=32,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.6418954 = fieldWeight in 32, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.09375 = fieldNorm(doc=32)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  7. Panyr, J.: Vektorraum-Modell und Clusteranalyse in Information-Retrieval-Systemen (1987) 0.01
    0.005530036 = product of:
      0.03871025 = sum of:
        0.03871025 = product of:
          0.0774205 = sum of:
            0.0774205 = weight(_text_:anwendung in 2322) [ClassicSimilarity], result of:
              0.0774205 = score(doc=2322,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.42793027 = fieldWeight in 2322, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2322)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    Starting from theoretical approaches to indexing, the classical vector space model for automatic indexing (together with the term discrimination model) is explained. Clustering in information retrieval systems is viewed as a natural logical consequence of this model and is treated in all its variants (i.e., as document, term, or combined document-and-term classification). Search strategies over pre-classified document collections (cluster search) are then described in detail. Finally, the sensible application of cluster analysis in information retrieval systems is briefly discussed.
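    A minimal sketch of the cluster search described above: match the query against the cluster centroids first, then rank only the documents of the best-matching cluster by cosine similarity. The flat cluster structure and the precomputed centroids are assumptions for illustration:

      // Cluster search: nearest centroid first, then rank that cluster's documents.
      import java.util.ArrayList;
      import java.util.List;

      public class ClusterSearch {
          static double cosine(double[] a, double[] b) {
              double dot = 0, na = 0, nb = 0;
              for (int i = 0; i < a.length; i++) {
                  dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
              }
              return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
          }
          static List<Integer> search(double[] query, double[][] centroids,
                                      double[][] docs, int[] clusterOf) {
              int best = 0;                              // nearest centroid
              for (int c = 1; c < centroids.length; c++)
                  if (cosine(query, centroids[c]) > cosine(query, centroids[best]))
                      best = c;
              List<Integer> hits = new ArrayList<>();
              for (int d = 0; d < docs.length; d++)      // restrict to that cluster
                  if (clusterOf[d] == best) hits.add(d);
              hits.sort((x, y) -> Double.compare(cosine(query, docs[y]),
                                                 cosine(query, docs[x])));
              return hits;                               // doc ids, best first
          }
      }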
  8. Bollmann, P.; Konrad, E.; Schneider, H.-J.; Zuse, H.: Anwendung automatischer Klassifikationsverfahren mit dem System FAKYR (1978) 0.01
    0.005530036 = product of:
      0.03871025 = sum of:
        0.03871025 = product of:
          0.0774205 = sum of:
            0.0774205 = weight(_text_:anwendung in 82) [ClassicSimilarity], result of:
              0.0774205 = score(doc=82,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.42793027 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0625 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
  9. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.00
    0.0043396354 = product of:
      0.030377446 = sum of:
        0.030377446 = product of:
          0.06075489 = sum of:
            0.06075489 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.06075489 = score(doc=1046,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    5. 5.2003 14:17:22
  10. Puzicha, J.: Informationen finden! : Intelligente Suchmaschinentechnologie & automatische Kategorisierung (2007) 0.00
    0.0041475273 = product of:
      0.029032689 = sum of:
        0.029032689 = product of:
          0.058065377 = sum of:
            0.058065377 = weight(_text_:anwendung in 2817) [ClassicSimilarity], result of:
              0.058065377 = score(doc=2817,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.3209477 = fieldWeight in 2817, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2817)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Abstract
    As explained in this text, the effectiveness of search and classification systems is determined by: 1) the task at hand, 2) the accuracy of the system, 3) the degree of automation to be achieved, and 4) the ease of integration into existing systems. These criteria assume that any system, regardless of technology, can meet the basic product requirements for functionality, scalability, and input method. These product characteristics are explained in more detail in the Recommind product literature. Beyond these capabilities, the preceding discussion should nevertheless have revealed some clear trends. It is not surprising that recent developments in machine learning and other areas of computer science provide a theoretical starting point for the development of search-engine and classification technology. In particular, recent advances in statistical methods (PLSA) and other mathematical tools (SVMs) have reached breakthrough-level result quality. Added to this are the flexibility that the self-training and category detection of PLSA systems bring to deployment, as well as a new generation of previously unattained productivity gains.
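    For the PLSA systems mentioned above, a compact sketch of the EM training loop over P(z), P(d|z), and P(w|z); the random initialization, array layout, and calling convention are illustrative assumptions, not Recommind's implementation:

      // PLSA trained by EM on a doc-term count matrix n[d][w]:
      // P(d,w) = sum_z P(z) P(d|z) P(w|z); each emStep call is one EM iteration.
      import java.util.Random;

      public class PlsaSketch {
          double[] pz;            // P(z)
          double[][] pdz, pwz;    // P(d|z) indexed [z][d], P(w|z) indexed [z][w]

          PlsaSketch(int K, int D, int W) {
              Random r = new Random(42);
              pz = rand(r, K);
              pdz = new double[K][]; pwz = new double[K][];
              for (int z = 0; z < K; z++) { pdz[z] = rand(r, D); pwz[z] = rand(r, W); }
          }
          static double[] rand(Random r, int n) {    // random normalized distribution
              double[] v = new double[n]; double s = 0;
              for (int i = 0; i < n; i++) { v[i] = r.nextDouble() + 1e-9; s += v[i]; }
              for (int i = 0; i < n; i++) v[i] /= s;
              return v;
          }
          void emStep(int[][] n) {
              int K = pz.length, D = pdz[0].length, W = pwz[0].length;
              double[] nz = new double[K];
              double[][] ndz = new double[K][D], nwz = new double[K][W];
              for (int d = 0; d < D; d++)
                  for (int w = 0; w < W; w++) {
                      if (n[d][w] == 0) continue;
                      double[] post = new double[K]; double norm = 0;  // E-step
                      for (int z = 0; z < K; z++) {
                          post[z] = pz[z] * pdz[z][d] * pwz[z][w]; norm += post[z];
                      }
                      for (int z = 0; z < K; z++) {                    // expected counts
                          double c = n[d][w] * post[z] / norm;
                          nz[z] += c; ndz[z][d] += c; nwz[z][w] += c;
                      }
                  }
              double total = 0;                                        // M-step
              for (int z = 0; z < K; z++) total += nz[z];
              for (int z = 0; z < K; z++) {
                  pz[z] = nz[z] / total;
                  for (int d = 0; d < D; d++) pdz[z][d] = ndz[z][d] / nz[z];
                  for (int w = 0; w < W; w++) pwz[z][w] = nwz[z][w] / nz[z];
              }
          }
      }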
  11. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.00
    0.003616363 = product of:
      0.02531454 = sum of:
        0.02531454 = product of:
          0.05062908 = sum of:
            0.05062908 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.05062908 = score(doc=611,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 8.2009 12:54:24
  12. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.00
    0.003616363 = product of:
      0.02531454 = sum of:
        0.02531454 = product of:
          0.05062908 = sum of:
            0.05062908 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.05062908 = score(doc=2748,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    1. 2.2016 18:25:22
  13. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.00
    0.002531454 = product of:
      0.017720178 = sum of:
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.035440356 = score(doc=141,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Pages
    S.1-22
  14. Dubin, D.: Dimensions and discriminability (1998) 0.00
    0.002531454 = product of:
      0.017720178 = sum of:
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.035440356 = score(doc=2338,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 9.1997 19:16:05
  15. Automatic classification research at OCLC (2002) 0.00
    0.002531454 = product of:
      0.017720178 = sum of:
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.035440356 = score(doc=1563,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    5. 5.2003 9:22:09
  16. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.00
    0.002531454 = product of:
      0.017720178 = sum of:
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.035440356 = score(doc=1673,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    1. 8.1996 22:08:06
  17. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.00
    0.002531454 = product of:
      0.017720178 = sum of:
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.035440356 = score(doc=5273,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2006 16:24:52
  18. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.00
    0.002531454 = product of:
      0.017720178 = sum of:
        0.017720178 = product of:
          0.035440356 = sum of:
            0.035440356 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.035440356 = score(doc=2560,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 9.2008 18:31:54
  19. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.00
    0.0021698177 = product of:
      0.015188723 = sum of:
        0.015188723 = product of:
          0.030377446 = sum of:
            0.030377446 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.030377446 = score(doc=2760,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 3.2009 19:11:54
  20. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.00
    0.0021698177 = product of:
      0.015188723 = sum of:
        0.015188723 = product of:
          0.030377446 = sum of:
            0.030377446 = weight(_text_:22 in 3051) [ClassicSimilarity], result of:
              0.030377446 = score(doc=3051,freq=2.0), product of:
                0.13085791 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037368443 = queryNorm
                0.23214069 = fieldWeight in 3051, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3051)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 8.2009 19:51:28