Search (31 results, page 1 of 2)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Guerrero-Bote, V.P.; Moya Anegón, F. de; Herrero Solana, V.: Document organization using Kohonen's algorithm (2002) 0.07
    
    Abstract
    The classification of documents from a bibliographic database is a task linked to processes of information retrieval based on partial matching. A method is described for vectorizing reference documents from LISA, which permits their topological organization using Kohonen's algorithm. As an example, a map of 202 documents from LISA is generated, and the possibilities of this type of neural network for the development of information retrieval systems based on graphical browsing are analysed.
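
    As an illustrative aside, the Kohonen step described above can be sketched in a few lines. This is a minimal self-organizing map, not the authors' implementation: the grid size, learning rate, and neighbourhood schedule are all assumed values.

```python
import numpy as np

def train_som(docs, grid=(8, 8), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map. docs is an (n_docs, n_terms) array of
    document vectors (e.g. TF-IDF). Returns the trained weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, docs.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5   # shrinking neighbourhood
        for x in rng.permutation(docs):
            # best-matching unit: grid cell whose weight vector is closest to x
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # pull the BMU and its grid neighbours towards the document vector
            dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            influence = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
    return weights
```

    Mapping each of the 202 document vectors to its best-matching unit then yields the kind of 2-D layout the abstract proposes for graphical browsing.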
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.07
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  3. Fong, A.C.M.: Mining a Web citation database for document clustering (2002) 0.02
    
  4. Denoyer, L.; Gallinari, P.: Bayesian network model for semi-structured document classification (2004) 0.02
    
    Abstract
    Recently, a new community has started to emerge around the development of new information retrieval methods for searching and analyzing semi-structured and XML-like documents. The goal is to handle both content and structural information, and to deal with different types of information content (text, image, etc.). We consider here the task of structured document classification. We propose a generative model able to handle both structure and content, based on Bayesian networks. We then show how to transform this generative model into a discriminant classifier using the Fisher kernel method. The model is then extended to deal with different types of content information (here, text and images). The model was tested on three databases: the classical WebKB corpus composed of HTML pages, the new INEX corpus, which has become a reference in the field of ad hoc retrieval for XML documents, and a multimedia corpus of Web pages.
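
    The paper's model (a Bayesian network over the document tree, turned into a discriminant classifier via the Fisher kernel) is more elaborate than can be shown here, but the core idea of conditioning content on structure can be illustrated with a much simpler stand-in: a naive Bayes classifier over (element-tag, word) pairs. The feature choice and smoothing constant below are assumptions for illustration, not the paper's method.

```python
import math
import xml.etree.ElementTree as ET
from collections import Counter, defaultdict

def features(xml_string):
    """Flatten a document into (element tag, word) pairs -- a crude way to
    let the same word count differently in different structural roles."""
    pairs = []
    for elem in ET.fromstring(xml_string).iter():
        for word in (elem.text or "").lower().split():
            pairs.append((elem.tag, word))
    return pairs

class StructuredNB:
    """Naive Bayes over structural (tag, word) features; a simplification,
    not the Bayesian-network/Fisher-kernel model of the paper."""
    def fit(self, docs, labels, alpha=1.0):
        self.alpha = alpha
        self.counts = defaultdict(Counter)   # class -> (tag, word) counts
        self.totals = Counter()              # class -> total feature count
        self.priors = Counter(labels)
        self.vocab = set()
        for doc, y in zip(docs, labels):
            fs = features(doc)
            self.counts[y].update(fs)
            self.totals[y] += len(fs)
            self.vocab.update(fs)
        return self

    def predict(self, doc):
        fs = features(doc)
        n = sum(self.priors.values())
        def score(y):
            denom = self.totals[y] + self.alpha * len(self.vocab)
            return math.log(self.priors[y] / n) + sum(
                math.log((self.counts[y][f] + self.alpha) / denom) for f in fs)
        return max(self.priors, key=score)
```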
  4. Godby, C.J.; Stuler, J.: The Library of Congress Classification as a knowledge base for automatic subject categorization (2001) 0.02
    
    Abstract
    This paper describes a set of experiments in adapting a subset of the Library of Congress Classification for use as a database for automatic classification. A high degree of concept integrity was obtained when subject headings were mapped from OCLC's WorldCat database and filtered using the log-likelihood statistic.
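
    The log-likelihood statistic used for filtering is presumably Dunning's G² over a 2×2 contingency table (how often a subject heading co-occurs with a class number versus elsewhere); the abstract does not spell out the table, so the layout below is an assumption.

```python
import math

def log_likelihood(k11, k12, k21, k22):
    """Dunning's G^2 for a 2x2 contingency table:
    k11 = heading with class, k12 = heading without class,
    k21 = other headings with class, k22 = neither.
    Large values suggest the heading/class association is not chance."""
    total = k11 + k12 + k21 + k22
    def term(obs, row, col):
        expected = row * col / total
        return obs * math.log(obs / expected) if obs > 0 else 0.0
    return 2 * (term(k11, k11 + k12, k11 + k21)
                + term(k12, k11 + k12, k12 + k22)
                + term(k21, k21 + k22, k11 + k21)
                + term(k22, k21 + k22, k12 + k22))
```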
  6. Koch, T.; Ardö, A.; Noodén, L.: ¬The construction of a robot-generated subject index : DESIRE II D3.6a, Working Paper 1 (1999) 0.02
    
    Abstract
    This working paper describes the creation of a test database on which to carry out the automatic classification tasks of DESIRE II work package D3.6a. It is an improved version of NetLab's existing "All" Engineering database, created after a comparative study of the outcome of two different approaches to collecting the documents. These two methods were selected from seven general methodologies for building robot-generated subject indices, presented in this paper. We found a surprisingly low overlap between the Engineering link collections we used as seed pages for the robot, and an even lower overlap between the resources collected by the two different approaches, despite starting the harvesting process from essentially the same services. An intellectual evaluation of the contents of both databases showed almost exactly the same percentage of relevant documents (77%), indicating that the main difference between the approaches was the coverage of the resulting database.
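
    The overlap figures reported above amount to a simple set computation over the harvested URLs; a sketch (URL normalization, which matters in practice, is glossed over here):

```python
def overlap(urls_a, urls_b):
    """Jaccard overlap between two harvested URL collections."""
    a, b = set(urls_a), set(urls_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```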
  7. Godby, C.J.; Stuler, J.: The Library of Congress Classification as a knowledge base for automatic subject categorization : subject access issues (2003) 0.02
    
    Abstract
    This paper describes a set of experiments in adapting a subset of the Library of Congress Classification for use as a database for automatic classification. A high degree of concept integrity was obtained when subject headings were mapped from OCLC's WorldCat database and filtered using the log-likelihood statistic.
  8. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    
    Date
    5. 5.2003 14:17:22
  9. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B. de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.01
    
    Abstract
    We propose a new hybrid clustering framework to incorporate text mining with bibliometrics in journal set analysis. The framework integrates two different approaches: clustering ensemble and kernel-fusion clustering. To improve the flexibility and the efficiency of processing large-scale data, we propose an information-based weighting scheme to leverage the effect of multiple data sources in hybrid clustering. Three different algorithms are extended by the proposed weighting scheme and employed on a large journal set retrieved from the Web of Science (WoS) database. The clustering performance of the proposed algorithms is systematically evaluated using multiple evaluation methods and cross-compared with alternative methods. Experimental results demonstrate that the proposed weighted hybrid clustering strategy is superior to other methods in clustering performance and efficiency. The proposed approach also provides a more refined structural mapping of journal sets, which is useful for monitoring and detecting new trends in different scientific fields.
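
    The kernel-fusion half of such a framework can be sketched as a weighted sum of per-source similarity matrices followed by an off-the-shelf clustering step. The weights below are fixed placeholders; the paper derives them from an information-based scheme not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def fuse_and_cluster(kernels, weights, n_clusters):
    """kernels: list of (n, n) non-negative similarity matrices from
    different sources (e.g. text-based and citation-based journal
    similarities). weights: one non-negative weight per source.
    Returns a cluster label per journal."""
    fused = sum(w * k for w, k in zip(weights, kernels))
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(fused)

# Toy usage with equal weights over two hypothetical similarity sources:
# labels = fuse_and_cluster([k_text, k_citation], [0.5, 0.5], n_clusters=10)
```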
  10. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.01
    
    Date
    22. 8.2009 12:54:24
  11. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.01
    
    Date
    1. 2.2016 18:25:22
  12. Koch, T.; Vizine-Goetz, D.: Automatic classification and content navigation support for Web services : DESIRE II cooperates with OCLC (1998) 0.01
    
    Abstract
    Emerging standards in knowledge representation and organization are preparing the way for distributed vocabulary support in Internet search services. NetLab researchers are exploring several innovative solutions for searching and browsing in the subject-based Internet gateway, Electronic Engineering Library, Sweden (EELS). The implementation of the EELS service is described, specifically, the generation of the robot-gathered database 'All' engineering and the automated application of the Ei thesaurus and classification scheme. NetLab and OCLC researchers are collaborating to investigate advanced solutions to automated classification in the DESIRE II context. A plan for furthering the development of distributed vocabulary support in Internet search services is offered.
  13. Larson, R.R.: Experiments in automatic Library of Congress Classification (1992) 0.01
    
    Abstract
    This article presents the results of research into the automatic selection of Library of Congress Classification numbers based on the titles and subject headings in MARC records. The method used in this study was based on partial-match retrieval techniques, using various elements of new records (i.e., those to be classified) as "queries" against a test database of classification clusters generated from previously classified MARC records. Sixty individual methods for automatic classification were tested on a set of 283 new records, using all combinations of four different partial-match methods, five query types, and three representations of search terms. The results indicate that if the best method for a particular case can be determined, then up to 86% of the new records may be correctly classified. The single method with the best accuracy was able to select the correct classification for about 46% of the new records.
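
    A minimal version of the partial-match idea: treat the new record's title and subject headings as a query, score it against term vectors aggregated from previously classified records, and assign the top-ranked class. The weighting and tokenization below are illustrative assumptions, not the study's exact methods.

```python
import math
from collections import Counter, defaultdict

def build_clusters(records):
    """records: iterable of (class_number, text) pairs from classified MARC
    records. Returns one aggregated term-frequency vector per class."""
    clusters = defaultdict(Counter)
    for cls, text in records:
        clusters[cls].update(text.lower().split())
    return clusters

def classify(query_text, clusters):
    """Rank classification clusters by cosine similarity to the query."""
    q = Counter(query_text.lower().split())
    qn = math.sqrt(sum(v * v for v in q.values()))
    def cosine(c):
        dot = sum(q[t] * c[t] for t in q)
        cn = math.sqrt(sum(v * v for v in c.values()))
        return dot / (qn * cn) if qn and cn else 0.0
    return max(clusters, key=lambda cls: cosine(clusters[cls]))
```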
  14. Golub, K.; Hamon, T.; Ardö, A.: Automated classification of textual documents based on a controlled vocabulary in engineering (2007) 0.01
    
    Abstract
    Automated subject classification has been a challenging research issue for many years now, receiving particular attention in the past decade due to the rapid increase of digital documents. The most frequent approach to automated classification is machine learning. It, however, requires training documents and performs well on new documents only if these are similar enough to the former. We explore a string-matching algorithm based on a controlled vocabulary, which does not require training documents - instead it reuses the intellectual work put into creating the controlled vocabulary. Terms from the Engineering Information thesaurus and classification scheme were matched against title and abstract of engineering papers from the Compendex database. Simple string-matching was enhanced by several methods such as term weighting schemes and cut-offs, exclusion of certain terms, and enrichment of the controlled vocabulary with automatically extracted terms. The best results are 76% recall when the controlled vocabulary is enriched with new terms, and 79% precision when certain terms are excluded. Precision of individual classes is up to 98%. These results are comparable to state-of-the-art machine-learning algorithms.
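
    The appeal of this approach is that it needs no training data: match controlled-vocabulary terms against title and abstract, weight the matches, and apply a cut-off. A toy version, with made-up field weights and threshold (the actual Ei thesaurus mapping and tuning are not reproduced here):

```python
def classify_by_vocabulary(title, abstract, vocabulary,
                           title_weight=3.0, cutoff=2.0):
    """vocabulary: mapping of lowercased controlled term -> class code
    (e.g. drawn from the Ei thesaurus). Scores each class by weighted
    occurrences of its terms; classes below the cut-off are dropped."""
    scores = {}
    for term, cls in vocabulary.items():
        hits = (title_weight * title.lower().count(term)
                + abstract.lower().count(term))
        if hits:
            scores[cls] = scores.get(cls, 0.0) + hits
    return {cls: s for cls, s in scores.items() if s >= cutoff}
```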
  15. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    
    Pages
    1-22
  16. Dubin, D.: Dimensions and discriminability (1998) 0.01
    
    Date
    22. 9.1997 19:16:05
  17. Automatic classification research at OCLC (2002) 0.01
    
    Date
    5. 5.2003 9:22:09
  18. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    
    Date
    1. 8.1996 22:08:06
  19. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    
    Date
    22. 7.2006 16:24:52
  20. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    
    Date
    22. 9.2008 18:31:54