Search (40 results, page 1 of 2)

  • Filter: theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    Abstract
     Document representations for text classification are typically based on the classical Bag-of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for the actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
    Content
     Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
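     Illustrative sketch of the approach summarized in the abstract above: a bag-of-words representation is extended with concept features drawn from background knowledge, and boosted weak learners are trained on the combined feature space. The toy corpus, the hand-made concept lexicon, and the choice of scikit-learn's AdaBoost with decision stumps are assumptions for illustration, not the authors' actual configuration.

# Sketch: bag-of-words term features extended with coarse "concept" features,
# then boosted weak learners (depth-1 decision stumps) for classification.
# Corpus, labels, and concept lexicon are invented for demonstration.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the striker scored a late goal in the match",
    "the defender was injured during the game",
    "the parliament passed a new tax bill",
    "the senate debated the budget proposal",
]
labels = [0, 0, 1, 1]  # 0 = sports, 1 = politics (toy labels)

# Hypothetical background knowledge: surface terms mapped to broader concepts.
concept_lexicon = {
    "striker": "SPORT", "goal": "SPORT", "match": "SPORT",
    "defender": "SPORT", "game": "SPORT",
    "parliament": "POLITICS", "tax": "POLITICS",
    "senate": "POLITICS", "budget": "POLITICS",
}
CONCEPTS = ("SPORT", "POLITICS")

def concept_counts(text):
    # Count how often each concept is triggered by the document's terms.
    hits = [concept_lexicon.get(tok) for tok in text.lower().split()]
    return [hits.count(c) for c in CONCEPTS]

vectorizer = CountVectorizer().fit(docs)

def featurize(texts):
    # Concatenate term counts and concept counts into one sparse matrix.
    x_terms = vectorizer.transform(texts)
    x_concepts = csr_matrix(np.array([concept_counts(t) for t in texts]))
    return hstack([x_terms, x_concepts]).tocsr()

# AdaBoost's default weak learner is a depth-1 decision tree (a stump).
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(featurize(docs), labels)
print(clf.predict(featurize(["a late goal decided the game"])))
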
  2. Dubin, D.: Dimensions and discriminability (1998) 0.01
    Abstract
     Visualization interfaces can improve subject access by highlighting the inclusion of document representation components in similarity and discrimination relationships. Within a set of retrieved documents, what kinds of groupings can index terms and subject headings make explicit? The role of controlled vocabulary in classifying search output is examined.
    Date
    22. 9.1997 19:16:05
  3. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.01
    Abstract
     We describe the latent semantic indexing subspace signature model (LSISSM) for semantic content representation of unstructured text. Grounded in singular value decomposition, the model represents terms and documents by the distribution signatures of their statistical contribution across the top-ranking latent concept dimensions. LSISSM matches term signatures with document signatures according to their mapping coherence between the latent semantic indexing (LSI) term subspace and the LSI document subspace. LSISSM performs feature reduction by finding a low-rank approximation of scalable, sparse term-document matrices. Experiments demonstrate that this approach significantly improves the performance of major clustering algorithms such as standard K-means and self-organizing maps compared with the vector space model and the traditional LSI model. The unique contribution-ranking mechanism in LSISSM also improves the initialization of standard K-means compared with the random seeding procedure, which sometimes leads to inefficient and ineffective clustering. A two-stage initialization strategy based on LSISSM significantly reduces the running time of standard K-means.
    Date
    23. 3.2013 13:22:36
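     Illustrative sketch of the generic pipeline behind the abstract above: a sparse term-document matrix is projected onto its top latent dimensions with truncated SVD, and the resulting document vectors are clustered with standard K-means. This shows only the plain LSI baseline that LSISSM is compared against; the subspace signature matching and two-stage initialization are not implemented, and the corpus and rank are placeholders.

# Sketch: truncated SVD (LSI-style) low-rank document representation,
# followed by standard K-means clustering. Plain LSI baseline only;
# this is not the LSISSM signature model from the cited paper.
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

docs = [
    "singular value decomposition factors the term document matrix",
    "latent semantic indexing projects documents into a concept space",
    "k means clustering assigns documents to the nearest centroid",
    "self organizing maps arrange documents on a low dimensional grid",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)  # sparse term-document matrix (documents x terms)

# Keep only the top-ranking latent dimensions (rank 2 chosen for the toy data).
lsi = make_pipeline(TruncatedSVD(n_components=2, random_state=0), Normalizer(copy=False))
X_lsi = lsi.fit_transform(X)   # dense low-rank document vectors

km = KMeans(n_clusters=2, n_init=10, random_state=0)
print(km.fit_predict(X_lsi))   # one cluster label per document
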
  4. Khoo, C.S.G.; Ng, K.; Ou, S.: An exploratory study of human clustering of Web pages (2003) 0.00
    Date
    12. 9.2004 9:56:22
    Source
     Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference, Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  5. Koch, T.; Vizine-Goetz, D.: Automatic classification and content navigation support for Web services : DESIRE II cooperates with OCLC (1998) 0.00
    Abstract
     Emerging standards in knowledge representation and organization are preparing the way for distributed vocabulary support in Internet search services. NetLab researchers are exploring several innovative solutions for searching and browsing in the subject-based Internet gateway, Electronic Engineering Library, Sweden (EELS). The implementation of the EELS service is described, specifically the generation of the robot-gathered database 'All Engineering' and the automated application of the Ei thesaurus and classification scheme. NetLab and OCLC researchers are collaborating to investigate advanced solutions to automated classification in the DESIRE II context. A plan for furthering the development of distributed vocabulary support in Internet search services is offered.
  6. Sebastiani, F.: Machine learning in automated text categorization (2002) 0.00
    Abstract
     The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual definition of a classifier by domain experts) are a very good effectiveness, considerable savings in terms of expert labor power, and straightforward portability to different domains. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely, document representation, classifier construction, and classifier evaluation.
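     Illustrative sketch of the three stages the survey discusses (document representation, classifier construction, classifier evaluation), using a TF-IDF representation, a linear SVM, and a held-out precision/recall report. The corpus, labels, and classifier choice are arbitrary examples, not anything prescribed by the survey.

# Sketch of the three stages: (1) document representation,
# (2) classifier construction, (3) classifier evaluation. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = [
    "interest rates rise as the central bank tightens policy",
    "the company reported record quarterly earnings",
    "the team won the championship after extra time",
    "the coach praised the goalkeeper's performance",
    "new trade tariffs hit exporters hard",
    "the midfielder signed a contract extension",
]
labels = ["business", "business", "sport", "sport", "business", "sport"]

X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.33, random_state=0, stratify=labels)

# Representation (TF-IDF bag of words) + classifier construction (linear SVM).
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(X_train, y_train)

# Evaluation: precision, recall, and F1 on the held-out documents.
print(classification_report(y_test, model.predict(X_test)))
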
  7. AlQenaei, Z.M.; Monarchi, D.E.: The use of learning techniques to analyze the results of a manual classification system (2016) 0.00
    Abstract
    Classification is the process of assigning objects to pre-defined classes based on observations or characteristics of those objects, and there are many approaches to performing this task. The overall objective of this study is to demonstrate the use of two learning techniques to analyze the results of a manual classification system. Our sample consisted of 1,026 documents, from the ACM Computing Classification System, classified by their authors as belonging to one of the groups of the classification system: "H.3 Information Storage and Retrieval." A singular value decomposition of the documents' weighted term-frequency matrix was used to represent each document in a 50-dimensional vector space. The analysis of the representation using both supervised (decision tree) and unsupervised (clustering) techniques suggests that two pairs of the ACM classes are closely related to each other in the vector space. Class 1 (Content Analysis and Indexing) is closely related to Class 3 (Information Search and Retrieval), and Class 4 (Systems and Software) is closely related to Class 5 (Online Information Services). Further analysis was performed to test the diffusion of the words in the two classes using both cosine and Euclidean distance.
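     Illustrative sketch of the kind of analysis outlined in the abstract above: documents are projected into a reduced vector space via SVD of a weighted term-frequency matrix, then examined with a supervised learner (a decision tree) and with cosine and Euclidean distances between classes. The miniature corpus, the hypothetical class labels, the 2-dimensional rank (standing in for the study's 50 dimensions), and the centroid-based distance comparison are assumptions made for the example.

# Sketch: SVD projection of a weighted term-frequency matrix, a decision tree
# fitted on the manual classes, and cosine/Euclidean distances between the
# class centroids in the reduced space. Invented miniature data set.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity, euclidean_distances
from sklearn.tree import DecisionTreeClassifier

docs = [
    "indexing and content analysis of documents",
    "automatic indexing with controlled vocabularies",
    "query formulation and information search strategies",
    "ranking retrieved documents for a search request",
]
manual_classes = np.array(["H.3.1", "H.3.1", "H.3.3", "H.3.3"])  # hypothetical labels

X = TfidfVectorizer().fit_transform(docs)  # weighted term-frequency matrix
X_red = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)  # cf. 50 dims in the study

# Supervised view: a decision tree over the reduced document vectors.
tree = DecisionTreeClassifier(random_state=0).fit(X_red, manual_classes)
print(tree.predict(X_red))

# Distance view: how close are the two classes in the reduced space?
centroids = np.vstack([X_red[manual_classes == c].mean(axis=0) for c in ("H.3.1", "H.3.3")])
print(cosine_similarity(centroids))     # pairwise cosine similarity of class centroids
print(euclidean_distances(centroids))   # pairwise Euclidean distance of class centroids
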
  8. Panyr, J.: STEINADLER: ein Verfahren zur automatischen Deskribierung und zur automatischen thematischen Klassifikation (1978) 0.00
    Source
    Nachrichten für Dokumentation. 29(1978), S.92-96
  9. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.00
    Date
    5. 5.2003 14:17:22
  10. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.00
    Date
    22. 8.2009 12:54:24
  11. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.00
    Date
    1. 2.2016 18:25:22
  12. Savic, D.: Designing an expert system for classifying office documents (1994) 0.00
    Source
    Records management quarterly. 28(1994) no.3, S.20-29
  13. Savic, D.: Automatic classification of office documents : review of available methods and techniques (1995) 0.00
    Source
    Records management quarterly. 29(1995) no.4, S.3-18
  14. Ruocco, A.S.; Frieder, O.: Clustering and classification of large document bases in a parallel environment (1997) 0.00
    Date
    29. 7.1998 17:45:02
  15. Ruiz, M.E.; Srinivasan, P.: Combining machine learning and hierarchical indexing structures for text categorization (2001) 0.00
    Date
    11. 5.2003 18:29:44
  16. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.00
    Pages
    S.1-22
  17. Automatic classification research at OCLC (2002) 0.00
    Date
    5. 5.2003 9:22:09
  18. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.00
    Date
    1. 8.1996 22:08:06
  19. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.00
    Date
    22. 7.2006 16:24:52
  20. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.00
    Date
    22. 9.2008 18:31:54