Search (23 results, page 1 of 2)

  • × theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.18
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Golub, K.: Automated subject classification of textual documents in the context of Web-based hierarchical browsing (2011) 0.02
    
    Abstract
    While automated methods for information organization have been around for several decades now, the exponential growth of the World Wide Web has pushed them to the forefront of research in different communities, within which several approaches can be identified: 1) machine learning (algorithms that allow computers to improve their performance based on learning from pre-existing data); 2) document clustering (algorithms for unsupervised document organization and automated topic extraction); and 3) string matching (algorithms that match given strings within larger text). Here the aim was to automatically organize textual documents into hierarchical structures for subject browsing. The string-matching approach was tested using a controlled vocabulary (containing pre-selected and pre-defined authorized terms, each corresponding to only one concept). The results imply that an appropriate controlled vocabulary, with a sufficient number of entry terms designating classes, could in itself be a solution for automated classification. If the same controlled vocabulary also had an appropriate hierarchical structure, it would at the same time provide a good browsing structure for the collection of automatically classified documents.
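    A minimal, hypothetical sketch of the string-matching approach described above: entry terms from a controlled vocabulary, each designating exactly one class, are matched within a document's text, and the classes of the matching terms are assigned. The vocabulary entries and class notations below are invented for illustration.

    import re
    from collections import Counter

    # Toy controlled vocabulary: entry term -> class notation (invented).
    CONTROLLED_VOCABULARY = {
        "machine learning": "006.31",
        "neural networks": "006.32",
        "text categorization": "025.04",
        "subject indexing": "025.48",
    }

    def classify(text, top_n=3):
        """Assign classes by counting whole-word matches of entry terms."""
        hits = Counter()
        lowered = text.lower()
        for term, notation in CONTROLLED_VOCABULARY.items():
            matches = re.findall(r"\b" + re.escape(term) + r"\b", lowered)
            if matches:
                hits[notation] += len(matches)
        return hits.most_common(top_n)

    print(classify("Machine learning methods for text categorization ..."))
    # -> [('006.31', 1), ('025.04', 1)]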
  3. Automatische Klassifikation und Extraktion in Documentum (2005) 0.01
    
    Content
    "LCI Comprend ist ab sofort als integriertes Modul für EMCs Content Management System Documentum verfügbar. LCI (Learning Computers International GmbH) hat mit Unterstützung von neeb & partner diese Technologie zur Dokumentenautomation transparent in Documentum integriert. Dies ist die erste bekannte Lösung für automatische, lernende Klassifikation und Extraktion, die direkt auf dem Documentum Datenbestand arbeitet und ohne zusätzliche externe Steuerung auskommt. Die LCI Information Capture Services (ICS) dienen dazu, jegliche Art von Dokument zu klassifizieren und Information daraus zu extrahieren. Das Dokument kann strukturiert, halbstrukturiert oder unstrukturiert sein. Somit können beispielsweise gescannte Formulare genauso verarbeitet werden wie Rechnungen oder E-Mails. Die Extraktions- und Klassifikationsvorschriften und die zu lernenden Beispieldokumente werden einfach interaktiv zusammengestellt und als XML-Struktur gespeichert. Zur Laufzeit wird das Projekt angewendet, um unbekannte Dokumente aufgrund von Regeln und gelernten Beispielen automatisch zu indexieren. Dokumente können damit entweder innerhalb von Documentum oder während des Imports verarbeitet werden. Der neue Server erlaubt das Einlesen von Dateien aus dem Dateisystem oder direkt von POPS-Konten, die Analyse der Dokumente und die automatische Erzeugung von Indexwerten bei der Speicherung in einer Documentum Ablageumgebung. Diese Indexwerte, die durch inhaltsbasierte, auch mehrthematische Klassifikation oder durch Extraktion gewonnen wurden, werden als vordefinierte Attribute mit dem Documentum-Objekt abgelegt. Handelt es sich um ein gescanntes Dokument oder ein Fax, wird automatisch die integrierte Volltext-Texterkennung durchgeführt."
  4. Montesi, M.; Navarrete, T.: Classifying web genres in context : A case study documenting the web genres used by a software engineer (2008) 0.01
    
    Abstract
    This case study analyzes the Internet-based resources that a software engineer uses in his daily work. Methodologically, we studied the web browser history of the participant, classifying all the web pages he had seen over a period of 12 days into web genres. We interviewed him before and after the analysis of the web browser history. In the first interview, he spoke about his general information behavior; in the second, he commented on each web genre, explaining why and how he used them. As a result, three approaches allow us to describe the set of 23 web genres obtained: (a) the purposes they serve for the participant; (b) the role they play in the various work and search phases; and (c) the way they are used in combination with each other. Further observations concern the way the participant assesses the quality of web-based resources, and his information behavior as a software engineer.
  5. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    
    Date
    5. 5.2003 14:17:22
  6. Mukhopadhyay, S.; Peng, S.; Raje, R.; Palakal, M.; Mostafa, J.: Multi-agent information classification using dynamic acquaintance lists (2003) 0.01
    
    Abstract
    There has been considerable interest in recent years in providing automated information services, such as information classification, by means of a society of collaborative agents. These agents augment each other's knowledge structures (e.g., the vocabularies) and assist each other in providing efficient information services to a human user. However, when the number of agents present in the society increases, exhaustive communication and collaboration among agents result in a large communication overhead and increased delays in response time. This paper introduces a method to achieve selective interaction with a relatively small number of potentially useful agents, based on simple agent modeling and acquaintance lists. The key idea presented here is that the acquaintance list of an agent, representing a small number of other agents to be collaborated with, is dynamically adjusted. The best acquaintances are automatically discovered using a learning algorithm, based on the past history of collaboration. Experimental results are presented to demonstrate that such dynamically learned acquaintance lists can lead to high quality of classification, while significantly reducing the delay in response time.
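    A toy sketch of the dynamically adjusted acquaintance list: each agent keeps a running success score per peer from past collaborations and consults only the top-k peers next time. The exponential-average update rule is an invented stand-in for the learning algorithm in the paper.

    class AcquaintanceList:
        def __init__(self, k=3, alpha=0.3):
            self.k = k          # how many acquaintances to keep
            self.alpha = alpha  # weight given to the most recent outcome
            self.scores = {}    # peer agent id -> running success score

        def record(self, peer, success):
            """Update a peer's score from the outcome of one collaboration."""
            old = self.scores.get(peer, 0.5)
            self.scores[peer] = (1 - self.alpha) * old + self.alpha * float(success)

        def acquaintances(self):
            """The k peers currently believed to be most useful."""
            ranked = sorted(self.scores, key=self.scores.get, reverse=True)
            return ranked[:self.k]

    al = AcquaintanceList(k=2)
    for peer, ok in [("A", True), ("B", False), ("C", True), ("A", True)]:
        al.record(peer, ok)
    print(al.acquaintances())  # -> ['A', 'C']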
  7. Sebastiani, F.: ¬A tutorial on automated text categorisation (1999) 0.01
    
    Abstract
    The automated categorisation (or classification) of texts into topical categories has a long history, dating back at least to 1960. Until the late '80s, the dominant approach to the problem involved knowledge-engineering automatic categorisers, i.e. manually building a set of rules encoding expert knowledge on how to classify documents. In the '90s, with the booming production and availability of on-line documents, automated text categorisation has witnessed an increased and renewed interest. A newer paradigm based on machine learning has superseded the previous approach. Within this paradigm, a general inductive process automatically builds a classifier by "learning", from a set of previously classified documents, the characteristics of one or more categories; the advantages are very good effectiveness, considerable savings in terms of expert manpower, and domain independence. In this tutorial we look at the main approaches that have been taken towards automatic text categorisation within the general machine learning paradigm. Issues of document indexing, classifier construction, and classifier evaluation will be touched upon.
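    The inductive process described above in a minimal form: a classifier is learned from a small set of previously classified documents instead of hand-written rules. scikit-learn is used here purely as an illustrative toolkit (an assumption, not something the tutorial prescribes), and the tiny training set is invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Previously classified documents (the "training set"; invented).
    train_docs = [
        "stock market shares trading",
        "quarterly earnings and revenue up",
        "team wins championship final",
        "player scores twice in derby",
    ]
    train_labels = ["finance", "finance", "sports", "sports"]

    # The general inductive process: learn category characteristics from examples.
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(train_docs, train_labels)

    print(model.predict(["revenue from share trading rose"]))  # -> ['finance']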
  8. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    
    Date
    22. 8.2009 12:54:24
  9. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    
    Date
    1. 2.2016 18:25:22
  10. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.00
    
    Pages
    pp. 1-22
  11. Dubin, D.: Dimensions and discriminability (1998) 0.00
    
    Date
    22. 9.1997 19:16:05
  12. Automatic classification research at OCLC (2002) 0.00
    
    Date
    5. 5.2003 9:22:09
  13. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.00
    
    Date
    1. 8.1996 22:08:06
  14. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.00
    
    Date
    22. 7.2006 16:24:52
  15. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.00
    
    Date
    22. 9.2008 18:31:54
  16. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.00
    
    Date
    22. 3.2009 19:11:54
  17. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.00
    
    Date
    22. 8.2009 19:51:28
  18. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.00
    
    Date
    23. 3.2013 13:22:36
  19. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.00
    
    Date
    4. 8.2015 19:22:04
  20. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.00
    
    Date
    22. 3.2009 19:14:43