Search (28 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.102998026 = sum of:
      0.08201044 = product of:
        0.24603131 = sum of:
          0.24603131 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24603131 = score(doc=562,freq=2.0), product of:
              0.43776408 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05163523 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.020987583 = product of:
        0.041975167 = sum of:
          0.041975167 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041975167 = score(doc=562,freq=2.0), product of:
              0.18081778 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05163523 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
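    Note: the indented breakdown above (and those that follow) is a Lucene "explain" trace for ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight times fieldWeight, and the per-term sum is scaled by a coordination factor. As a worked check against the figures for the term "3a" in doc 562, assuming standard Lucene semantics:

    \begin{align*}
    \mathrm{weight}(t,d) &= \underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\mathrm{queryWeight}}\cdot\underbrace{\sqrt{\mathrm{tf}(t,d)}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}}\\
    \mathrm{queryWeight} &= 8.478011 \times 0.05163523 = 0.43776408\\
    \mathrm{fieldWeight} &= \sqrt{2} \times 8.478011 \times 0.046875 = 0.56201804\\
    \mathrm{weight} &= 0.43776408 \times 0.56201804 = 0.24603131\\
    \mathrm{contribution} &= 0.24603131 \times \mathrm{coord}(1/3) = 0.08201044
    \end{align*}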
  2. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.03
    0.034173757 = product of:
      0.06834751 = sum of:
        0.06834751 = product of:
          0.13669503 = sum of:
            0.13669503 = weight(_text_:theory in 2822) [ClassicSimilarity], result of:
              0.13669503 = score(doc=2822,freq=6.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.63662124 = fieldWeight in 2822, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2822)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The research report "Rough Set Theory Based Automatic Text Categorization and the Handling of Semantic Heterogeneity" by Xueying Zhang has been published as a book in English. In her work, Zhang developed a method based on rough set theory that establishes relationships between subject headings from different vocabularies. She was a staff member of the IZ from 2003 to 2005 and has been an Associate Professor at the Nanjing University of Science and Technology since October 2005.
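    As a minimal sketch of the rough-set machinery the report builds on (not Zhang's actual procedure; the document sets and subject headings below are invented), the lower and upper approximations of a target set of documents under term-based indiscernibility look like this in Python:

      # Rough-set approximations: documents indiscernible under the chosen
      # attributes form blocks; a target concept is bounded from below and above.
      def partition(objects, attrs):
          """Group objects that share the same description w.r.t. attrs."""
          blocks = {}
          for obj, desc in objects.items():
              key = tuple(sorted(desc & attrs))
              blocks.setdefault(key, set()).add(obj)
          return list(blocks.values())

      def approximations(objects, attrs, target):
          blocks = partition(objects, attrs)
          lower = set().union(*(b for b in blocks if b <= target))  # certainly in
          upper = set().union(*(b for b in blocks if b & target))   # possibly in
          return lower, upper

      # Hypothetical records: document -> subject headings from two vocabularies.
      docs = {
          "d1": {"Klassifikation", "classification"},
          "d2": {"Klassifikation"},
          "d3": {"Klassifikation"},
      }
      lower, upper = approximations(docs, {"Klassifikation", "classification"},
                                    target={"d1", "d2"})
      print(lower, upper)  # the boundary upper - lower marks uncertain mappings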
  3. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    0.020987583 = product of:
      0.041975167 = sum of:
        0.041975167 = product of:
          0.08395033 = sum of:
            0.08395033 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.08395033 = score(doc=1046,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  4. Ingwersen, P.; Wormell, I.: Ranganathan in the perspective of advanced information retrieval (1992) 0.02
    0.019730229 = product of:
      0.039460458 = sum of:
        0.039460458 = product of:
          0.078920916 = sum of:
            0.078920916 = weight(_text_:theory in 7695) [ClassicSimilarity], result of:
              0.078920916 = score(doc=7695,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.36755344 = fieldWeight in 7695, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7695)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Examines Ranganathan's approach to knowledge organisation and its relevance to intellectual accessibility in libraries. Discusses the current and future developments of his methodology and theories in knowledge-based systems. Topics covered include: semi-automatic classification and structure of thesauri; user-intermediary interactions in information retrieval (IR); semantic value-theory and uncertainty principles in IR; and case grammar
  5. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.017489653 = product of:
      0.034979306 = sum of:
        0.034979306 = product of:
          0.06995861 = sum of:
            0.06995861 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.06995861 = score(doc=611,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  6. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.017489653 = product of:
      0.034979306 = sum of:
        0.034979306 = product of:
          0.06995861 = sum of:
            0.06995861 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.06995861 = score(doc=2748,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  7. Losee, R.M.: Text windows and phrases differing by discipline, location in document, and syntactic structure (1996) 0.02
    0.01726395 = product of:
      0.0345279 = sum of:
        0.0345279 = product of:
          0.0690558 = sum of:
            0.0690558 = weight(_text_:theory in 6962) [ClassicSimilarity], result of:
              0.0690558 = score(doc=6962,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.32160926 = fieldWeight in 6962, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6962)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge of window style, content, location, and grammatical structure may be used to classify documents as originating within a particular discipline or may be used to place a document on a theory vs. practice spectrum. Examines characteristics of phrases and text windows, including their number, location in documents, and grammatical construction, in addition to studying variations in these window characteristics across disciplines. Examines some of the linguistic regularities for individual disciplines, and suggests families of regularities that may prove helpful for the automatic classification of documents, as well as for information retrieval and filtering applications
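    As a loose illustration of the window features the study examines (the window size, location labels, and sample sentence are arbitrary choices, not Losee's parameters), a Python sketch:

      # Cut a document into fixed-size word windows and tag each window with
      # its coarse location in the document (begin / middle / end).
      def text_windows(text, size=10):
          words = text.split()
          for start in range(0, max(len(words) - size + 1, 1), size):
              pos = start / max(len(words) - size, 1)
              loc = "begin" if pos < 0.33 else ("end" if pos > 0.66 else "middle")
              yield loc, " ".join(words[start:start + size])

      sample = ("knowledge of window style content and location may help classify "
                "documents by discipline or place them on a theory versus practice spectrum")
      for loc, win in text_windows(sample):
          print(loc, "|", win)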
  8. Huang, Y.-L.: A theoretic and empirical research of cluster indexing for Mandarine Chinese full text document (1998) 0.02
    0.01726395 = product of:
      0.0345279 = sum of:
        0.0345279 = product of:
          0.0690558 = sum of:
            0.0690558 = weight(_text_:theory in 513) [ClassicSimilarity], result of:
              0.0690558 = score(doc=513,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.32160926 = fieldWeight in 513, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=513)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Since most popular commercialized systems for full text retrieval are designed with full text scanning and Boolean logic query mode, these systems use an oversimplified relationship between the indexing form and the content of a document. Reports the use of Singular Value Decomposition (SVD) to develop a Cluster Indexing Model (CIM) based on a Vector Space Model (VSM) in order to explore the index theory of cluster indexing for Chinese full text documents. From a series of experiments, it was found that the indexing performance of CIM is better than that of the traditional VSM, and almost equivalent in effectiveness to the authority control of index terms
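    The abstract does not spell out the CIM itself; as a generic sketch of the SVD-based latent indexing it rests on (an LSA-style projection over a toy term-document matrix, not Huang's exact model), in Python with NumPy:

      # Project a term-document matrix into a low-rank latent space via SVD
      # and compare documents there instead of on raw index terms.
      import numpy as np

      A = np.array([              # toy counts: rows = terms, columns = documents
          [2, 0, 1, 0],
          [1, 1, 0, 0],
          [0, 2, 0, 1],
          [0, 0, 1, 2],
      ], dtype=float)

      U, s, Vt = np.linalg.svd(A, full_matrices=False)
      k = 2                                   # rank of the latent space (a free choice)
      doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dimensional vector per document

      unit = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
      print(np.round(unit @ unit.T, 2))       # cosine similarities between documents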
  9. Xu, Y.; Bernard, A.: Knowledge organization through statistical computation : a new approach (2009) 0.01
    0.014797671 = product of:
      0.029595342 = sum of:
        0.029595342 = product of:
          0.059190683 = sum of:
            0.059190683 = weight(_text_:theory in 3252) [ClassicSimilarity], result of:
              0.059190683 = score(doc=3252,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.27566507 = fieldWeight in 3252, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3252)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Knowledge organization (KO) is an interdisciplinary issue which includes such problems in knowledge classification as how to classify newly emerged knowledge. Given the great complexity and ambiguity of knowledge, classifying it by logical reasoning alone is sometimes inefficient. This paper proposes a statistical approach to knowledge organization in order to resolve the problems in classifying complex and massive knowledge. By integrating the classification process into a mathematical model, a knowledge classifier based on the maximum entropy theory is constructed, and the experimental results show that the classification results acquired from the classifier are reliable. The approach proposed in this paper is quite formal and is not dependent on specific contexts, so it could easily be adapted to the use of knowledge classification in other domains within KO.
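    A minimal sketch of a maximum-entropy text classifier (multinomial logistic regression is the standard maxent model; the training snippets and labels here are invented, not the paper's data):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      train_texts = [
          "rough set theory for text categorization",
          "maximum entropy models for classification",
          "library cataloguing and subject headings",
          "thesaurus construction for indexing",
      ]
      train_labels = ["ML", "ML", "LIS", "LIS"]

      # Tf-idf features feeding a multinomial logistic (maxent) classifier.
      clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
      clf.fit(train_texts, train_labels)
      print(clf.predict(["entropy based knowledge classification"]))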
  10. Teich, E.; Degaetano-Ortlieb, S.; Fankhauser, P.; Kermes, H.; Lapshinova-Koltunski, E.: The linguistic construal of disciplinarity : a data-mining approach using register features (2016) 0.01
    0.014797671 = product of:
      0.029595342 = sum of:
        0.029595342 = product of:
          0.059190683 = sum of:
            0.059190683 = weight(_text_:theory in 3015) [ClassicSimilarity], result of:
              0.059190683 = score(doc=3015,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.27566507 = fieldWeight in 3015, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3015)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We analyze the linguistic evolution of selected scientific disciplines over a 30-year time span (1970s to 2000s). Our focus is on four highly specialized disciplines at the boundaries of computer science that emerged during that time: computational linguistics, bioinformatics, digital construction, and microelectronics. Our analysis is driven by the question whether these disciplines develop a distinctive language use-both individually and collectively-over the given time period. The data set is the English Scientific Text Corpus (scitex), which includes texts from the 1970s/1980s and early 2000s. Our theoretical basis is register theory. In terms of methods, we combine corpus-based methods of feature extraction (various aggregated features [part-of-speech based], n-grams, lexico-grammatical patterns) and automatic text classification. The results of our research are directly relevant to the study of linguistic variation and languages for specific purposes (LSP) and have implications for various natural language processing (NLP) tasks, for example, authorship attribution, text mining, or training NLP tools.
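    One question the study asks is whether a discipline's feature distribution drifts between time slices; a toy Python sketch of that comparison (unigram profiles compared by cosine similarity; the two sample sentences are invented, not scitex data):

      from collections import Counter
      import math

      def profile(texts):
          """Relative unigram frequencies over a set of texts."""
          counts = Counter(w for t in texts for w in t.lower().split())
          total = sum(counts.values())
          return {w: c / total for w, c in counts.items()}

      def cosine(p, q):
          dot = sum(p[w] * q.get(w, 0.0) for w in p)
          pn = math.sqrt(sum(v * v for v in p.values()))
          qn = math.sqrt(sum(v * v for v in q.values()))
          return dot / (pn * qn)

      slice_1970s = ["the grammar of the sentence is parsed by rule"]
      slice_2000s = ["the model is trained on the corpus with features"]
      print(round(cosine(profile(slice_1970s), profile(slice_2000s)), 3))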
  11. Li, T.; Zhu, S.; Ogihara, M.: Text categorization via generalized discriminant analysis (2008) 0.01
    0.012331394 = product of:
      0.024662787 = sum of:
        0.024662787 = product of:
          0.049325574 = sum of:
            0.049325574 = weight(_text_:theory in 2119) [ClassicSimilarity], result of:
              0.049325574 = score(doc=2119,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2297209 = fieldWeight in 2119, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2119)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Text categorization is an important research area and has been receiving much attention due to the growth of on-line information and of the Internet. Automated text categorization is generally cast as a multi-class classification problem. Much of the previous work focused on binary document classification problems. Support vector machines (SVMs) excel in binary classification, but the elegant theory behind the large-margin hyperplane cannot be easily extended to multi-class text classification. In addition, training time and scaling are also important concerns. On the other hand, other techniques naturally extensible to handle multi-class classification are generally not as accurate as SVM. This paper presents a simple and efficient solution to multi-class text categorization. Classification problems are first formulated as optimization via discriminant analysis. Text categorization is then cast as the problem of finding coordinate transformations that reflect the inherent similarity from the data. While most of the previous approaches decompose a multi-class classification problem into multiple independent binary classification tasks, the proposed approach enables direct multi-class classification. By using generalized singular value decomposition (GSVD), a coordinate transformation that reflects the inherent class structure indicated by the generalized singular values is identified. Extensive experiments demonstrate the efficiency and effectiveness of the proposed approach.
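    As a stand-in sketch only: classical linear discriminant analysis with an SVD solver on tf-idf vectors (scikit-learn's LDA, not the paper's GSVD formulation; the texts and labels are invented), showing direct multi-class prediction without binary decomposition:

      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.feature_extraction.text import TfidfVectorizer

      texts = [
          "support vector machines for binary classification",
          "margin based learning with kernels",
          "subject indexing in library catalogues",
          "classification schemes for libraries",
          "gene expression clustering methods",
          "microarray data analysis pipelines",
      ]
      labels = ["ml", "ml", "lis", "lis", "bio", "bio"]

      X = TfidfVectorizer().fit_transform(texts).toarray()  # LDA needs dense input
      lda = LinearDiscriminantAnalysis(solver="svd").fit(X, labels)
      print(lda.predict(X[:1]))  # one model handles all classes at once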
  12. Alberts, I.; Forest, D.: Email pragmatics and automatic classification : a study in the organizational context (2012) 0.01
    0.012331394 = product of:
      0.024662787 = sum of:
        0.024662787 = product of:
          0.049325574 = sum of:
            0.049325574 = weight(_text_:theory in 238) [ClassicSimilarity], result of:
              0.049325574 = score(doc=238,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2297209 = fieldWeight in 238, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=238)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a two-phased research project aiming to improve email triage for public administration managers. The first phase developed a typology of email classification patterns through a qualitative study involving 34 participants. Inspired by the fields of pragmatics and speech act theory, this typology, comprising four top-level categories and 13 subcategories, represents the typical email triage behaviors of managers in an organizational context. The second study phase was conducted on a corpus of 1,703 messages using email samples from two managers. Using the k-NN (k-nearest neighbor) algorithm, statistical treatments automatically classified the email according to lexical and nonlexical features representative of managers' triage patterns. The automatic classification of email according to the lexicon of the messages was found to be substantially more efficient when k = 2 and n = 2,000. For four categories, the average recall rate was 94.32%, the average precision rate was 94.50%, and the accuracy rate was 94.54%. For 13 categories, the average recall rate was 91.09%, the average precision rate was 84.18%, and the accuracy rate was 88.70%. It appears that a message's nonlexical features are also deeply influenced by email pragmatics. Features related to the recipient and the sender were the most relevant for characterizing email.
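    A generic k-NN triage sketch using the settings reported above (k = 2 and a 2,000-term lexicon); the example messages and categories are invented, not the managers' corpus:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      emails = [
          "please approve the attached budget by friday",
          "can you sign off on the travel request",
          "fyi the server maintenance window is tonight",
          "heads up the office closes early today",
      ]
      actions = ["to-do", "to-do", "for-information", "for-information"]

      # Lexical features capped at 2,000 terms, nearest-neighbour vote with k = 2.
      triage = make_pipeline(CountVectorizer(max_features=2000),
                             KNeighborsClassifier(n_neighbors=2))
      triage.fit(emails, actions)
      print(triage.predict(["please sign the attached form"]))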
  13. Wartena, C.; Sommer, M.: Automatic classification of scientific records using the German Subject Heading Authority File (SWD) (2012) 0.01
    0.012331394 = product of:
      0.024662787 = sum of:
        0.024662787 = product of:
          0.049325574 = sum of:
            0.049325574 = weight(_text_:theory in 472) [ClassicSimilarity], result of:
              0.049325574 = score(doc=472,freq=2.0), product of:
                0.21471956 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2297209 = fieldWeight in 472, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=472)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus [http://ceur-ws.org/Vol-912/proceedings.pdf]. Eds.: A. Mitschik et al.
  14. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.012242757 = product of:
      0.024485514 = sum of:
        0.024485514 = product of:
          0.048971027 = sum of:
            0.048971027 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.048971027 = score(doc=141,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    S.1-22
  15. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.012242757 = product of:
      0.024485514 = sum of:
        0.024485514 = product of:
          0.048971027 = sum of:
            0.048971027 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.048971027 = score(doc=2338,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  16. Automatic classification research at OCLC (2002) 0.01
    0.012242757 = product of:
      0.024485514 = sum of:
        0.024485514 = product of:
          0.048971027 = sum of:
            0.048971027 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.048971027 = score(doc=1563,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 9:22:09
  17. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.012242757 = product of:
      0.024485514 = sum of:
        0.024485514 = product of:
          0.048971027 = sum of:
            0.048971027 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.048971027 = score(doc=1673,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  18. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.012242757 = product of:
      0.024485514 = sum of:
        0.024485514 = product of:
          0.048971027 = sum of:
            0.048971027 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.048971027 = score(doc=5273,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
  19. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.012242757 = product of:
      0.024485514 = sum of:
        0.024485514 = product of:
          0.048971027 = sum of:
            0.048971027 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.048971027 = score(doc=2560,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 18:31:54
  20. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.01
    0.010493792 = product of:
      0.020987583 = sum of:
        0.020987583 = product of:
          0.041975167 = sum of:
            0.041975167 = weight(_text_:22 in 2760) [ClassicSimilarity], result of:
              0.041975167 = score(doc=2760,freq=2.0), product of:
                0.18081778 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05163523 = queryNorm
                0.23214069 = fieldWeight in 2760, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2760)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2009 19:11:54