Search (24 results, page 1 of 2)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.17
    0.17465216 = coord(3/5) × [0.06839587 (term "3a", coord 1/3) + 0.20518759 (term "2f") + 0.01750343 (term "22", coord 1/2)], per Lucene ClassicSimilarity; see the recomputation in the Note after this entry
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8.1.2013 10:22:32
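    Note
    The score breakdown above can be recomputed from Lucene's classic TF-IDF formulas: queryWeight = idf × queryNorm, fieldWeight = sqrt(tf) × idf × fieldNorm, and term score = queryWeight × fieldWeight, scaled by the coord factors. A minimal sketch in plain Python, with the idf, tf, and fieldNorm values copied from the explain output above; the helper name term_score is ours:

      from math import sqrt, isclose

      QUERY_NORM = 0.043063257

      def term_score(idf: float, tf: float, field_norm: float) -> float:
          # ClassicSimilarity: queryWeight * fieldWeight for one matched term.
          query_weight = idf * QUERY_NORM
          field_weight = sqrt(tf) * idf * field_norm
          return query_weight * field_weight

      # Terms "3a" and "2f" (idf 8.478011) and "22" (idf 3.5018296),
      # each with tf = 2.0 and fieldNorm = 0.046875 in document 562.
      w_3a = term_score(8.478011, 2.0, 0.046875) * (1 / 3)   # inner coord(1/3)
      w_2f = term_score(8.478011, 2.0, 0.046875)
      w_22 = term_score(3.5018296, 2.0, 0.046875) * (1 / 2)  # inner coord(1/2)

      score = (w_3a + w_2f + w_22) * (3 / 5)  # coord(3/5): 3 of 5 query clauses matched
      assert isclose(score, 0.17465216, rel_tol=1e-5)
      print(score)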
  2. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.03
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
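    Note
    The extract-then-classify pattern described in the abstract can be sketched briefly: split a document into overlapping passages, let an SVM score each passage, and label the document by its most confidently classified passage. This is a generic illustration using our own toy data and window heuristic, not the authors' PETC algorithm:

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Toy training data; the paper trains on thousands of medical texts.
      train_texts = [
          "the disease is diagnosed by blood tests and imaging",
          "treatment options include antibiotics and surgery",
          "prevention relies on vaccination and hygiene",
      ]
      train_labels = ["diagnosis", "treatment", "prevention"]
      clf = make_pipeline(TfidfVectorizer(), LinearSVC())
      clf.fit(train_texts, train_labels)

      def classify_via_passages(doc: str, window: int = 2) -> str:
          # Split the document into overlapping sentence windows ("passages").
          sents = [s.strip() for s in doc.split(".") if s.strip()]
          passages = [". ".join(sents[i:i + window]) for i in range(len(sents))]
          # Classify the passage the SVM separates most confidently, so text
          # about other disease aspects does not drown out the signal.
          margins = clf.decision_function(passages)  # (n_passages, n_classes)
          best = int(np.unravel_index(np.argmax(margins), margins.shape)[0])
          return clf.predict([passages[best]])[0]

      print(classify_via_passages(
          "The patient history is long. Doctors confirmed it by blood tests."))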
  3. Cui, H.; Heidorn, P.B.; Zhang, H.: An approach to automatic classification of text for information retrieval (2002) 0.02
    
    Abstract
    In this paper, we explore an approach to making better use of semi-structured documents in information retrieval in the domain of biology. Using machine learning techniques, we make the inherent structures explicit through XML markup. This markup has great potential to improve task performance in specimen identification and the usability of online floras and faunas.
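    Note
    A minimal sketch of the markup step the abstract describes: a trained model assigns a structural label to each segment of a taxonomic description, and the segments are then wrapped in corresponding XML elements. The rule-based predict_label below is a stand-in for the paper's learned classifier:

      from xml.sax.saxutils import escape

      def predict_label(line: str) -> str:
          # Trivial rule standing in for a trained per-segment classifier.
          return "habitat" if "grows" in line.lower() else "morphology"

      def to_xml(lines: list[str]) -> str:
          # Make the implicit structure explicit as XML markup.
          body = "\n".join(f"  <{predict_label(l)}>{escape(l)}</{predict_label(l)}>"
                           for l in lines)
          return f"<description>\n{body}\n</description>"

      print(to_xml(["Leaves ovate, 3-5 cm long.", "Grows in wet meadows."]))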
  4. Xu, Y.; Bernard, A.: Knowledge organization through statistical computation : a new approach (2009) 0.02
    
    Abstract
    Knowledge organization (KO) is an interdisciplinary field that includes problems in knowledge classification, such as how to classify newly emerging knowledge. Given the great complexity and ambiguity of knowledge, classifying it by logical reasoning alone is sometimes inefficient. This paper proposes a statistical approach to knowledge organization in order to resolve the problems of classifying complex knowledge at scale. By integrating the classification process into a mathematical model, a knowledge classifier based on maximum entropy theory is constructed, and the experimental results show that the classifications it produces are reliable. The approach proposed in this paper is quite formal and not dependent on specific contexts, so it can easily be adapted to knowledge classification in other domains within KO.
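    Note
    In text classification settings, the maximum-entropy classifier the abstract relies on is equivalent to multinomial logistic regression over document features. A minimal sketch, with toy data of our own:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      docs = [
          "bayesian inference and entropy estimation",
          "gear design for machine tools",
          "classification schemes for library catalogs",
      ]
      classes = ["statistics", "engineering", "knowledge organization"]

      # LogisticRegression fits the same exponential-family model that a
      # maximum-entropy classifier uses.
      maxent = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
      maxent.fit(docs, classes)
      print(maxent.predict(["entropy of a posterior distribution"]))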
  5. Wang, J.: An extensive study on automated Dewey Decimal Classification (2009) 0.02
    
    Abstract
    In this paper, we present a theoretical analysis and extensive experiments on the automated assignment of Dewey Decimal Classification (DDC) classes to bibliographic data with a supervised machine-learning approach. Library classification systems such as the DDC pose great obstacles to state-of-the-art text categorization (TC) technologies, including deep hierarchy, data sparseness, and skewed distribution. We first statistically analyze the document and category distributions over the DDC and discuss the obstacles that bibliographic corpora and library classification schemes impose on TC technology. To overcome these obstacles, we propose an innovative algorithm that reshapes the DDC structure into a balanced virtual tree by balancing the category distribution and flattening the hierarchy. To improve classification effectiveness to a level acceptable for real-world applications, we propose an interactive classification model that can predict a class of any depth within a limited number of user interactions. The experiments are conducted on a large bibliographic collection created by the Library of Congress within the science and technology domains over 10 years. With no more than three interactions, a classification accuracy of nearly 90% is achieved, providing a practical solution to the automatic bibliographic classification problem.
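    Note
    The "balanced virtual tree" idea can be sketched simply: merge DDC classes with too little training data into their parents, so that category sizes even out and the hierarchy flattens. The document counts and the min_docs threshold below are our illustration, not the paper's exact reshaping algorithm:

      from collections import defaultdict

      # Toy counts: DDC class -> number of training documents.
      doc_counts = {"5": 900, "51": 400, "511": 5, "512": 8, "53": 300, "532": 2}

      def virtual_class(ddc: str, min_docs: int = 10) -> str:
          # Walk up the notation until the class has enough training data;
          # the parent of a DDC class is obtained by dropping the last digit.
          while len(ddc) > 1 and doc_counts.get(ddc, 0) < min_docs:
              ddc = ddc[:-1]
          return ddc

      merged = defaultdict(int)
      for label in ["511", "512", "532", "51"]:
          merged[virtual_class(label)] += 1
      print(dict(merged))  # sparse leaves folded into "51" and "53"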
  6. Barthel, S.; Tönnies, S.; Balke, W.-T.: Large-scale experiments for mathematical document classification (2013) 0.02
    
    Abstract
    The ever-increasing amount of digitally available information is a curse and a blessing at the same time. On the one hand, users have increasingly large amounts of information at their fingertips. On the other hand, the assessment and refinement of web search results becomes more and more tiresome and difficult for non-experts in a domain. Established digital libraries therefore offer specialized collections with a certain degree of quality. This quality can largely be attributed to the great effort invested in the semantic enrichment of the provided documents, e.g. by annotating them with respect to a domain-specific taxonomy. In many domains this process is still done manually, e.g. with CAS in chemistry, MeSH in medicine, or MSC in mathematics. Due to the growing amount of data, however, this manual task becomes ever more time-consuming and expensive. The only solution to this problem seems to be to employ automated classification algorithms, but evaluations in previous research make it difficult to draw conclusions about real-world scenarios. We therefore conducted a large-scale feasibility study on a real-world data set from one of the biggest mathematical digital libraries, Zentralblatt MATH, with special focus on its practical applicability.
  7. Yilmaz, T.; Ozcan, R.; Altingovde, I.S.; Ulusoy, Ö.: Improving educational web search for question-like queries through subject classification (2019) 0.01
    
    Abstract
    Students use general web search engines as their primary source of research while trying to find answers to school-related questions. Although search engines are highly relevant for the general population, they may return results that are out of educational context. Social community question-answering websites, another rising trend, are the second choice for students who try to get answers from other peers online. We attempt to discover possible improvements in educational search by leveraging both of these information sources. For this purpose, we first implement a classifier for educational questions. This classifier is built with an ensemble method that employs several regular learning algorithms and retrieval-based approaches that utilize external resources. We also build a query expander to facilitate classification. We further improve the classification using search engine results and obtain 83.5% accuracy. Although our work is entirely based on the Turkish language, the features could easily be mapped to other languages as well. In order to find out whether search engine ranking can be improved in the education domain using the classification model, we collect and label a set of query results retrieved from a general web search engine. We propose five ad hoc methods to improve search ranking based on the idea that the query-document category relation is an indicator of relevance. We evaluate these methods for overall performance, for varying query lengths, and on factoid and non-factoid queries. We show that some of the methods significantly improve the rankings in the education domain.
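    Note
    The core reranking idea (the query-document category relation as a relevance signal) can be sketched as a score boost for results whose predicted subject matches the query's predicted subject. The boost weight and the predict_subject stand-in below are our assumptions; the paper proposes five such methods:

      def rerank(query, results, predict_subject, boost=0.5):
          # results: list of (document, base_score) pairs from the search engine.
          q_subject = predict_subject(query)
          rescored = [
              (doc, base + (boost if predict_subject(doc) == q_subject else 0.0))
              for doc, base in results
          ]
          return sorted(rescored, key=lambda pair: pair[1], reverse=True)

      # Usage, with a trivial keyword rule standing in for the real classifier:
      subject = lambda text: "biology" if "cell" in text else "other"
      hits = [("cells divide by mitosis", 0.4), ("stock market basics", 0.6)]
      print(rerank("how does a cell divide", hits, subject))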
  8. Hoffmann, R.: Entwicklung einer benutzerunterstützten automatisierten Klassifikation von Web-Dokumenten : Untersuchung gegenwärtiger Methoden zur automatisierten Dokumentklassifikation und Implementierung eines Prototyps zum verbesserten Information Retrieval für das xFIND System (2002) 0.01
    
    Content
    Also available at: http://www2.iicm.edu/cguetl/education/thesis/rhoff
  9. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    
    Date
    5.5.2003 14:17:22
  10. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    
    Date
    22.8.2009 12:54:24
  11. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    
    Date
    1.2.2016 18:25:22
  12. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.00
    
    Pages
    pp. 1-22
  13. Dubin, D.: Dimensions and discriminability (1998) 0.00
    
    Date
    22.9.1997 19:16:05
  14. Automatic classification research at OCLC (2002) 0.00
    
    Date
    5.5.2003 9:22:09
  15. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.00
    
    Date
    1.8.1996 22:08:06
  16. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.00
    
    Date
    22.7.2006 16:24:52
  17. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.00
    
    Date
    22.9.2008 18:31:54
  18. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.00
    
    Date
    22.3.2009 19:11:54
  19. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.00
    
    Date
    22.8.2009 19:51:28
  20. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.00
    
    Date
    23. 3.2013 13:22:36