Search (27 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.23
    0.22526461 = product of:
      0.3003528 = sum of:
        0.070573054 = product of:
          0.21171916 = sum of:
            0.21171916 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.21171916 = score(doc=562,freq=2.0), product of:
                0.37671238 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044434052 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.21171916 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.21171916 = score(doc=562,freq=2.0), product of:
            0.37671238 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.044434052 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.0180606 = product of:
          0.0361212 = sum of:
            0.0361212 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.0361212 = score(doc=562,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
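    The breakdown above is standard Lucene ClassicSimilarity (TF-IDF) explain output. As a minimal sketch, the per-term and final numbers can be reproduced as follows; `query_norm` and `field_norm` are copied verbatim from the output, since they depend on the full query and on field length:

    ```python
    import math

    # Constants taken from the explain output above.
    max_docs, doc_freq = 44218, 24
    query_norm = 0.044434052
    field_norm = 0.046875
    freq = 2.0

    tf = math.sqrt(freq)                           # 1.4142135
    idf = 1 + math.log(max_docs / (doc_freq + 1))  # 8.478011
    query_weight = idf * query_norm                # 0.37671238
    field_weight = tf * idf * field_norm           # 0.56201804
    term_score = query_weight * field_weight       # 0.21171916

    # Final score: the first clause matched 1 of 3 subclauses (coord 1/3),
    # the "2f" clause contributes fully, the "22" clause matched 1 of 2
    # (coord 1/2), and 3 of 4 top-level clauses matched (coord 3/4).
    score_22 = 0.0361212
    total = 0.75 * (term_score / 3 + term_score + score_22 / 2)  # ~0.2253
    ```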
    
    Content
    Vgl.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  2. Lindholm, J.; Schönthal, T.; Jansson, K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.04
    0.040079594 = product of:
      0.16031837 = sum of:
        0.16031837 = weight(_text_:engineering in 4088) [ClassicSimilarity], result of:
          0.16031837 = score(doc=4088,freq=4.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.671566 = fieldWeight in 4088, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.0625 = fieldNorm(doc=4088)
      0.25 = coord(1/4)
    
    Abstract
    The authors describe the background and the work involved in setting up Engine-e, a Web index that uses automatic classification as a means for selecting resources in engineering. Considerations in offering a robot-generated Web index as a successor to a manually indexed, quality-controlled subject gateway are also discussed.
  3. Golub, K.; Hamon, T.; Ardö, A.: Automated classification of textual documents based on a controlled vocabulary in engineering (2007) 0.04
    0.03681546 = product of:
      0.14726184 = sum of:
        0.14726184 = weight(_text_:engineering in 1461) [ClassicSimilarity], result of:
          0.14726184 = score(doc=1461,freq=6.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.6168728 = fieldWeight in 1461, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.046875 = fieldNorm(doc=1461)
      0.25 = coord(1/4)
    
    Abstract
    Automated subject classification has been a challenging research issue for many years now, receiving particular attention in the past decade due to the rapid increase of digital documents. The most frequent approach to automated classification is machine learning. It, however, requires training documents and performs well on new documents only if these are similar enough to the former. We explore a string-matching algorithm based on a controlled vocabulary, which does not require training documents; instead, it reuses the intellectual work put into creating the controlled vocabulary. Terms from the Engineering Information thesaurus and classification scheme were matched against title and abstract of engineering papers from the Compendex database. Simple string-matching was enhanced by several methods such as term weighting schemes and cut-offs, exclusion of certain terms, and enrichment of the controlled vocabulary with automatically extracted terms. The best results are 76% recall when the controlled vocabulary is enriched with new terms, and 79% precision when certain terms are excluded. Precision of individual classes is up to 98%. These results are comparable to state-of-the-art machine-learning algorithms.
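    The training-free approach this abstract describes can be sketched in a few lines. This is purely illustrative: the terms and Ei class codes below are made up, and the actual system additionally applies term weighting, cut-offs, and term exclusion:

    ```python
    # Hypothetical vocabulary mapping thesaurus terms to (made-up) Ei classes.
    vocabulary = {
        "heat transfer": "641.2",
        "finite element": "921.6",
        "corrosion": "539.1",
    }

    def classify(text, vocab):
        """Score candidate classes by counting matched vocabulary terms
        in the (lowercased) title + abstract text."""
        text = text.lower()
        scores = {}
        for term, cls in vocab.items():
            hits = text.count(term)
            if hits:
                scores[cls] = scores.get(cls, 0.0) + hits
        return sorted(scores.items(), key=lambda kv: -kv[1])
    ```

    For example, `classify("A finite element model of pipeline corrosion; corrosion rates under heat transfer", vocabulary)` ranks the corrosion class first because its term matched twice.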
  4. Koch, T.; Vizine-Goetz, D.: Automatic classification and content navigation support for Web services : DESIRE II cooperates with OCLC (1998) 0.04
    0.035069644 = product of:
      0.14027858 = sum of:
        0.14027858 = weight(_text_:engineering in 1568) [ClassicSimilarity], result of:
          0.14027858 = score(doc=1568,freq=4.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.58762026 = fieldWeight in 1568, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1568)
      0.25 = coord(1/4)
    
    Abstract
    Emerging standards in knowledge representation and organization are preparing the way for distributed vocabulary support in Internet search services. NetLab researchers are exploring several innovative solutions for searching and browsing in the subject-based Internet gateway, Electronic Engineering Library, Sweden (EELS). The implementation of the EELS service is described, specifically, the generation of the robot-gathered database 'All' engineering and the automated application of the Ei thesaurus and classification scheme. NetLab and OCLC researchers are collaborating to investigate advanced solutions to automated classification in the DESIRE II context. A plan for furthering the development of distributed vocabulary support in Internet search services is offered.
  5. Koch, T.; Ardö, A.; Noodén, L.: ¬The construction of a robot-generated subject index : DESIRE II D3.6a, Working Paper 1 (1999) 0.03
    0.030059695 = product of:
      0.12023878 = sum of:
        0.12023878 = weight(_text_:engineering in 1668) [ClassicSimilarity], result of:
          0.12023878 = score(doc=1668,freq=4.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.5036745 = fieldWeight in 1668, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.046875 = fieldNorm(doc=1668)
      0.25 = coord(1/4)
    
    Abstract
    This working paper describes the creation of a test database on which to carry out the automatic classification tasks of the DESIRE II work package D3.6a. It is an improved version of NetLab's existing "All" Engineering database, created after a comparative study of the outcome of two different approaches to collecting the documents. These two methods were selected from seven general methodologies for building robot-generated subject indices, presented in this paper. We found a surprisingly low overlap between the Engineering link collections we used as seed pages for the robot, and subsequently an even more surprisingly low overlap between the resources collected by the two different approaches, despite starting the harvesting process from basically the same services. An intellectual evaluation of the contents of both databases showed almost exactly the same percentage of relevant documents (77%), indicating that the main difference between the approaches was the coverage of the resulting database.
  6. Golub, K.: Automated subject classification of textual Web pages, based on a controlled vocabulary : challenges and recommendations (2006) 0.03
    0.030059695 = product of:
      0.12023878 = sum of:
        0.12023878 = weight(_text_:engineering in 5897) [ClassicSimilarity], result of:
          0.12023878 = score(doc=5897,freq=4.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.5036745 = fieldWeight in 5897, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.046875 = fieldNorm(doc=5897)
      0.25 = coord(1/4)
    
    Abstract
    The primary objective of this study was to identify and address problems of applying a controlled vocabulary in automated subject classification of textual Web pages, in the area of engineering. Web pages have special characteristics such as structural information, but are at the same time rather heterogeneous. The classification approach used comprises string-to-string matching between words in a term list extracted from the Ei (Engineering Information) thesaurus and classification scheme, and words in the text to be classified. Based on a sample of 70 Web pages, a number of problems with the term list are identified. Reasons for those problems are discussed and improvements proposed. Methods for implementing the improvements are also specified, suggesting further research.
  7. Golub, K.; Lykke, M.: Automated classification of web pages in hierarchical browsing (2009) 0.03
    0.025049746 = product of:
      0.100198984 = sum of:
        0.100198984 = weight(_text_:engineering in 3614) [ClassicSimilarity], result of:
          0.100198984 = score(doc=3614,freq=4.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.41972876 = fieldWeight in 3614, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3614)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - The purpose of this study is twofold: to investigate whether it is meaningful to use the Engineering Index (Ei) classification scheme for browsing, and then, if proven useful, to investigate the performance of an automated classification algorithm based on the Ei classification scheme. Design/methodology/approach - A user study was conducted in which users solved four controlled search tasks. The users browsed the Ei classification scheme in order to examine the suitability of the classification scheme for browsing. The classification algorithm was evaluated by the users, who judged the correctness of the automatically assigned classes. Findings - The study showed that the Ei classification scheme is suited for browsing. Automatically assigned classes were on average partly correct, with some classes working better than others. Success of browsing was shown to be correlated with and dependent on classification correctness. Research limitations/implications - Further research should address the problem of disparate evaluations of one and the same web page. Additional reasons behind browsing failures in the Ei classification scheme also need further investigation. Practical implications - Improvements for browsing were identified: describing class captions and/or listing their subclasses from the start; allowing synonym search for words from class captions (easily provided for Ei since the classes are mapped to thesaurus terms); and, when searching for class captions, returning the hierarchical tree expanded around the class in whose caption the search term is found. The need for improvements of classification schemes was also indicated. Originality/value - A user-based evaluation of automated subject classification in the context of browsing has not been conducted before; hence the study also presents new findings concerning methodology.
    Object
    Engineering Index Classification
  8. Prabowo, R.; Jackson, M.; Burden, P.; Knoell, H.-D.: Ontology-based automatic classification for the Web pages : design, implementation and evaluation (2002) 0.02
    0.021255413 = product of:
      0.08502165 = sum of:
        0.08502165 = weight(_text_:engineering in 3383) [ClassicSimilarity], result of:
          0.08502165 = score(doc=3383,freq=2.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.35615164 = fieldWeight in 3383, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.046875 = fieldNorm(doc=3383)
      0.25 = coord(1/4)
    
    Content
    Paper presented at: The Third International Conference on Web Information Systems Engineering (WISE'02), Dec. 12-14, 2002, Singapore, p.182.
  9. Sebastiani, F.: Machine learning in automated text categorization (2002) 0.02
    0.021255413 = product of:
      0.08502165 = sum of:
        0.08502165 = weight(_text_:engineering in 3389) [ClassicSimilarity], result of:
          0.08502165 = score(doc=3389,freq=2.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.35615164 = fieldWeight in 3389, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.046875 = fieldNorm(doc=3389)
      0.25 = coord(1/4)
    
    Abstract
    The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual definition of a classifier by domain experts) are a very good effectiveness, considerable savings in terms of expert labor power, and straightforward portability to different domains. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely, document representation, classifier construction, and classifier evaluation.
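    The inductive process this survey describes - building a classifier automatically from preclassified documents - can be sketched, under simplifying assumptions, as a tiny multinomial naive Bayes learner (one of many learners covered; the whitespace tokenization and Laplace smoothing here are illustrative choices, not the survey's prescription):

    ```python
    import math
    from collections import Counter, defaultdict

    def train(docs):
        """docs: list of (text, label) pairs (the preclassified documents).
        Returns class priors, per-class word counts, and the vocabulary."""
        priors, counts, vocab = Counter(), defaultdict(Counter), set()
        for text, label in docs:
            priors[label] += 1
            words = text.lower().split()
            counts[label].update(words)
            vocab.update(words)
        return priors, counts, vocab

    def classify(text, priors, counts, vocab):
        """Assign the label with the highest log-posterior,
        using Laplace (add-one) smoothing for unseen words."""
        total = sum(priors.values())
        best, best_lp = None, float("-inf")
        for label in priors:
            lp = math.log(priors[label] / total)
            n = sum(counts[label].values())
            for w in text.lower().split():
                lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best
    ```

    Trained on a handful of labeled snippets, the classifier then assigns new text to the category whose word distribution it most resembles.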
  10. Sebastiani, F.: ¬A tutorial on automated text categorisation (1999) 0.02
    0.021255413 = product of:
      0.08502165 = sum of:
        0.08502165 = weight(_text_:engineering in 3390) [ClassicSimilarity], result of:
          0.08502165 = score(doc=3390,freq=2.0), product of:
            0.23872319 = queryWeight, product of:
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.044434052 = queryNorm
            0.35615164 = fieldWeight in 3390, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.372528 = idf(docFreq=557, maxDocs=44218)
              0.046875 = fieldNorm(doc=3390)
      0.25 = coord(1/4)
    
    Abstract
    The automated categorisation (or classification) of texts into topical categories has a long history, dating back at least to 1960. Until the late '80s, the dominant approach to the problem involved knowledge-engineering automatic categorisers, i.e. manually building a set of rules encoding expert knowledge on how to classify documents. In the '90s, with the booming production and availability of on-line documents, automated text categorisation has witnessed an increased and renewed interest. A newer paradigm based on machine learning has superseded the previous approach. Within this paradigm, a general inductive process automatically builds a classifier by "learning", from a set of previously classified documents, the characteristics of one or more categories; the advantages are a very good effectiveness, considerable savings in terms of expert manpower, and domain independence. In this tutorial we look at the main approaches that have been taken towards automatic text categorisation within the general machine learning paradigm. Issues of document indexing, classifier construction, and classifier evaluation will be touched upon.
  11. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    0.0090303 = product of:
      0.0361212 = sum of:
        0.0361212 = product of:
          0.0722424 = sum of:
            0.0722424 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.0722424 = score(doc=1046,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    5. 5.2003 14:17:22
  12. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    0.007525251 = product of:
      0.030101003 = sum of:
        0.030101003 = product of:
          0.060202006 = sum of:
            0.060202006 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.060202006 = score(doc=611,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 8.2009 12:54:24
  13. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    0.007525251 = product of:
      0.030101003 = sum of:
        0.030101003 = product of:
          0.060202006 = sum of:
            0.060202006 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.060202006 = score(doc=2748,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 2.2016 18:25:22
  14. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.0052676755 = product of:
      0.021070702 = sum of:
        0.021070702 = product of:
          0.042141404 = sum of:
            0.042141404 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.042141404 = score(doc=141,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Pages
    S.1-22
  15. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.0052676755 = product of:
      0.021070702 = sum of:
        0.021070702 = product of:
          0.042141404 = sum of:
            0.042141404 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.042141404 = score(doc=2338,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 9.1997 19:16:05
  16. Automatic classification research at OCLC (2002) 0.01
    0.0052676755 = product of:
      0.021070702 = sum of:
        0.021070702 = product of:
          0.042141404 = sum of:
            0.042141404 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.042141404 = score(doc=1563,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    5. 5.2003 9:22:09
  17. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.0052676755 = product of:
      0.021070702 = sum of:
        0.021070702 = product of:
          0.042141404 = sum of:
            0.042141404 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.042141404 = score(doc=1673,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 8.1996 22:08:06
  18. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.0052676755 = product of:
      0.021070702 = sum of:
        0.021070702 = product of:
          0.042141404 = sum of:
            0.042141404 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.042141404 = score(doc=5273,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 16:24:52
  19. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.0052676755 = product of:
      0.021070702 = sum of:
        0.021070702 = product of:
          0.042141404 = sum of:
            0.042141404 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.042141404 = score(doc=2560,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 9.2008 18:31:54
  20. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.00
    0.00451515 = product of:
      0.0180606 = sum of:
        0.0180606 = product of:
          0.0361212 = sum of:
            0.0361212 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.0361212 = score(doc=2760,freq=2.0), product of:
                0.15560047 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044434052 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2009 19:11:54