Search (23 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.1022771 = sum of:
      0.08143642 = product of:
        0.24430925 = sum of:
          0.24430925 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24430925 = score(doc=562,freq=2.0), product of:
              0.4347 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05127382 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020840684 = product of:
        0.041681368 = sum of:
          0.041681368 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041681368 = score(doc=562,freq=2.0), product of:
              0.17955218 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05127382 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
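    The tree above is Lucene's ClassicSimilarity "explain" output for the top hit; every result's headline score is derived the same way. Each matching clause contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm (so idf enters twice), and coord() scales a clause by the fraction of query clauses that matched. A minimal Python sketch that re-derives the first clause from the constants above (the variable names are mine, not Lucene's):

      import math

      # Constants copied from the explain tree for doc 562.
      freq = 2.0
      tf = math.sqrt(freq)          # 1.4142135 = tf(freq=2.0)
      idf = 8.478011                # 1 + ln(44218 / (24 + 1)) = idf(docFreq=24, maxDocs=44218)
      query_norm = 0.05127382
      field_norm = 0.046875         # fieldNorm(doc=562)
      coord = 1 / 3                 # only 1 of 3 query clauses matched

      query_weight = idf * query_norm             # 0.4347
      field_weight = tf * idf * field_norm        # 0.56201804
      print(coord * query_weight * field_weight)  # ~0.08143642, as above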
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    
    Date
    5. 5.2003 14:17:22
  3. Díaz, I.; Ranilla, J.; Montañes, E.; Fernández, J.; Combarro, E.F.: Improving performance of text categorization by combining filtering and support vector machines (2004) 0.02
    
    Abstract
    Text categorization is the process of assigning documents to a set of previously fixed categories. A lot of research is going on with the goal of automating this time-consuming task. Several different algorithms have been applied, and Support Vector Machines (SVM) have shown very good results. In this report, we try to prove that a previous filtering of the words used by SVM in the classification can improve the overall performance. This hypothesis is systematically tested with three different measures of word relevance, on two different corpora (one of them considered in three different splits), and with both local and global vocabularies. The results show that filtering significantly improves the recall of the method, and that it also significantly improves the overall performance.
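    As a rough illustration of the filter-then-classify pipeline this abstract describes, here is a scikit-learn sketch in which a chi-square filter stands in for the paper's word-relevance measures (the corpus, the filter, and the choice of k are assumptions for illustration, not the authors' setup):

      from sklearn.datasets import fetch_20newsgroups
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.feature_selection import SelectKBest, chi2
      from sklearn.pipeline import Pipeline
      from sklearn.svm import LinearSVC

      train = fetch_20newsgroups(subset="train")
      pipeline = Pipeline([
          ("vectorize", TfidfVectorizer(sublinear_tf=True)),
          ("filter", SelectKBest(chi2, k=1000)),  # keep only the 1,000 highest-scoring words
          ("svm", LinearSVC()),
      ])
      pipeline.fit(train.data, train.target)      # filtering happens before the SVM sees the data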
  4. Aphinyanaphongs, Y.; Fu, L.D.; Li, Z.; Peskin, E.R.; Efstathiadis, E.; Aliferis, C.F.; Statnikov, A.: A comprehensive empirical comparison of modern supervised classification and feature selection methods for text categorization (2014) 0.02
    
    Abstract
    An important aspect of performing text categorization is selecting appropriate supervised classification and feature selection methods. A comprehensive benchmark is needed to inform best practices in this broad application field. Previous benchmarks have evaluated performance for a few supervised classification and feature selection methods and limited ways to optimize them. The present work updates prior benchmarks by increasing the number of classifiers and feature selection methods by an order of magnitude, including adding recently developed, state-of-the-art methods. Specifically, this study used 229 text categorization data sets/tasks, and evaluated 28 classification methods (both well-established and proprietary/commercial) and 19 feature selection methods according to 4 classification performance metrics. We report several key findings that will be helpful in establishing best methodological practices for text categorization.
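    The shape of such a benchmark is easiest to see as a cross-validated sweep over classifier and feature-selection combinations. A toy sketch follows, with two classifiers, two selectors, and macro-F1 as placeholders for the paper's 28 × 19 grid and 4 metrics:

      from itertools import product

      from sklearn.feature_selection import SelectKBest, chi2, f_classif
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      classifiers = {"logreg": LogisticRegression(max_iter=1000), "nb": MultinomialNB()}
      selectors = {"chi2": SelectKBest(chi2, k=500), "anova": SelectKBest(f_classif, k=500)}

      def benchmark(X, y):
          # X: non-negative document-term matrix; y: category labels.
          # One cross-validated score per (selector, classifier) cell of the grid.
          for (s, sel), (c, clf) in product(selectors.items(), classifiers.items()):
              score = cross_val_score(make_pipeline(sel, clf), X, y, scoring="f1_macro").mean()
              print(f"{s} + {c}: {score:.3f}")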
  5. Koch, T.; Ardö, A.; Brümmer, A.: The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.02
    
    Abstract
    After a short outline of the problems, possibilities and difficulties of systematic information retrieval on the Internet, and a description of development efforts in this area, a specification of the terminology for this report is required. Although the process of retrieval is generally seen as an iterative process of browsing and information retrieval, and several important services on the net have taken this fact into consideration, the emphasis of this report lies on the general retrieval tools for the whole of the Internet. In order to be able to evaluate the differences, possibilities and restrictions of the different services, it is necessary to begin by organizing the existing varieties in a typological/taxonomical survey. The possibilities and weaknesses will be briefly compared and described for the most important services in the categories of robot-based WWW catalogues of different types, list- or form-based catalogues, and simultaneous or collected search services, respectively. It will, however, for various reasons not be possible to rank them in order of "best" services. Still more important are the weaknesses and problems common to all attempts at indexing the Internet. The problems of input quality, technical performance and the general problem of indexing virtual hypertext are shown to be at least as difficult as the different aspects of harvesting, indexing and information retrieval. Some of the attempts made in the area of further development of retrieval services will be mentioned in relation to descriptions of document contents and standardization efforts. Internet harvesting and indexing technology and retrieval software are thoroughly reviewed. Details about all services and software are listed in analytical forms in Annex 1-3.
  6. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    
    Date
    22. 8.2009 12:54:24
  7. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    
    Date
    1. 2.2016 18:25:22
  8. Borodin, Y.; Polishchuk, V.; Mahmud, J.; Ramakrishnan, I.V.; Stent, A.: Live and learn from mistakes : a lightweight system for document classification (2013) 0.02
    
    Abstract
    We present a Life-Long Learning from Mistakes (3LM) algorithm for document classification, which could be used in various scenarios such as spam filtering, blog classification, and web resource categorization. We extend the ideas of online clustering and batch-mode centroid-based classification to online learning with negative feedback. The 3LM is a competitive learning algorithm, which avoids over-smoothing, characteristic of the centroid-based classifiers, by using a different class representative, which we call clusterhead. The clusterheads competing for vector-space dominance are drawn toward misclassified documents, eventually bringing the model to a "balanced state" for a fixed distribution of documents. Subsequently, the clusterheads oscillate between the misclassified documents, heuristically minimizing the rate of misclassifications, an NP-complete problem. Further, the 3LM algorithm prevents over-fitting by "leashing" the clusterheads to their respective centroids. A clusterhead provably converges if its class can be separated by a hyper-plane from all other classes. Lifelong learning with fixed learning rate allows 3LM to adapt to possibly changing distribution of the data and continually learn and unlearn document classes. We report on our experiments, which demonstrate high accuracy of document classification on Reuters21578, OHSUMED, and TREC07p-spam datasets. The 3LM algorithm did not show over-fitting, while consistently outperforming centroid-based, Naïve Bayes, C4.5, AdaBoost, kNN, and SVM whose accuracy had been reported on the same three corpora.
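    A toy rendering of the update rule this abstract sketches: on a mistake, the true class's clusterhead is drawn toward the misclassified document, while a "leash" term pulls it back toward its class centroid. The prediction rule, learning rate, and leash strength below are assumptions, not the paper's specification:

      import numpy as np

      def predict(clusterheads, x):
          # The nearest clusterhead wins the competition for document x.
          return min(clusterheads, key=lambda c: np.linalg.norm(clusterheads[c] - x))

      def learn_from_mistake(clusterheads, centroids, x, y_true, lr=0.1, leash=0.3):
          # clusterheads, centroids: dicts mapping class label -> vector.
          if predict(clusterheads, x) != y_true:
              head = clusterheads[y_true]
              head += lr * (x - head)                     # drawn toward the mistake
              head += leash * (centroids[y_true] - head)  # leashed to the class centroid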
  9. Smiraglia, R.P.; Cai, X.: Tracking the evolution of clustering, machine learning, automatic indexing and automatic classification in knowledge organization (2017) 0.02
    
    Abstract
    A very important extension of the traditional domain of knowledge organization (KO) arises from attempts to incorporate techniques devised in the computer science domain for automatic concept extraction and for grouping, categorizing, clustering and otherwise organizing knowledge using mechanical means. Four specific terms have emerged to identify the most prevalent techniques: machine learning, clustering, automatic indexing, and automatic classification. Our study presents three domain-analytical case studies in search of answers. The first case relies on citations located using the ISKO-supported "Knowledge Organization Bibliography." The second case relies on works in both Web of Science and SCOPUS. Case three applies co-word analysis and citation analysis to the contents of the papers in the present special issue. We observe scholars involved in "clustering" and "automatic classification" who share common thematic emphases. But we have found no coherence, no common activity and no social semantics. We have not found a research front or a common teleology within the KO domain. We have also found a lively group of authors who have succeeded in submitting papers to this special issue, and their work quite interestingly aligns with the case studies we report. There is an emphasis on KO for information retrieval; there is much work on clustering (which involves conceptual points within texts) and automatic classification (which involves semantic groupings at the meta-document level).
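    Co-word analysis, as applied in case three, reduces to counting keyword co-occurrences across papers. A minimal sketch with made-up keyword sets:

      from collections import Counter
      from itertools import combinations

      papers = [
          {"clustering", "machine learning"},
          {"automatic classification", "machine learning"},
          {"clustering", "automatic indexing", "machine learning"},
      ]
      # Count each unordered keyword pair once per paper it co-occurs in.
      cooccurrence = Counter(
          frozenset(pair) for kws in papers for pair in combinations(sorted(kws), 2)
      )
      for pair, n in cooccurrence.most_common():
          print(sorted(pair), n)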
  10. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    
    Pages
    S.1-22
  11. Dubin, D.: Dimensions and discriminability (1998) 0.01
    
    Date
    22. 9.1997 19:16:05
  12. Automatic classification research at OCLC (2002) 0.01
    
    Date
    5. 5.2003 9:22:09
  13. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    
    Date
    1. 8.1996 22:08:06
  14. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    
    Date
    22. 7.2006 16:24:52
  15. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    
    Date
    22. 9.2008 18:31:54
  16. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    
    Date
    22. 3.2009 19:11:54
  17. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.01
    
    Date
    22. 8.2009 19:51:28
  18. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.01
    
    Date
    23. 3.2013 13:22:36
  19. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.01
    
    Date
    4. 8.2015 19:22:04
  20. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.01
    
    Date
    22. 3.2009 19:14:43