Search (27 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.57
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
    Imprint
    Washington, DC : IEEE Computer Society
  2. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  3. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.02
    
    Date
    1. 8.1996 22:08:06
    Source
    Computer networks and ISDN systems. 30(1998) nos.1/7, S.646-648
  4. Subramanian, S.; Shafer, K.E.: Clustering (1998) 0.01
    
    Abstract
    This article presents our exploration of computer science clustering algorithms as they relate to the Scorpion system. Scorpion is a research project at OCLC that explores the indexing and cataloging of electronic resources. For a more complete description of Scorpion, please visit the Scorpion Web site at <http://purl.oclc.org/scorpion>
  5. Savic, D.: Automatic classification of office documents : review of available methods and techniques (1995) 0.01
    
    Abstract
    Classification of office documents is one of the administrative functions carried out by almost every organization and institution which sends and receives correspondence. Processing of this increasing amount of information, in particular the classification of incoming and outgoing mail, is time consuming and expensive. More and more organizations are seeking a solution to this challenge by designing computer-based systems for automatic classification. Examines the present status of available knowledge and methodology which can be used for automatic classification of office documents. Besides a review of classic methods and techniques, the focus is also placed on the application of artificial intelligence
  6. Rose, J.R.; Gasteiger, J.: HORACE: an automatic system for the hierarchical classification of chemical reactions (1994) 0.01
    
    Source
    Journal of chemical information and computer sciences. 34(1994) no.1, S.74-90
  7. Teich, E.; Degaetano-Ortlieb, S.; Fankhauser, P.; Kermes, H.; Lapshinova-Koltunski, E.: ¬The linguistic construal of disciplinarity : a data-mining approach using register features (2016) 0.00
    
    Abstract
    We analyze the linguistic evolution of selected scientific disciplines over a 30-year time span (1970s to 2000s). Our focus is on four highly specialized disciplines at the boundaries of computer science that emerged during that time: computational linguistics, bioinformatics, digital construction, and microelectronics. Our analysis is driven by the question whether these disciplines develop a distinctive language use-both individually and collectively-over the given time period. The data set is the English Scientific Text Corpus (scitex), which includes texts from the 1970s/1980s and early 2000s. Our theoretical basis is register theory. In terms of methods, we combine corpus-based methods of feature extraction (various aggregated features [part-of-speech based], n-grams, lexico-grammatical patterns) and automatic text classification. The results of our research are directly relevant to the study of linguistic variation and languages for specific purposes (LSP) and have implications for various natural language processing (NLP) tasks, for example, authorship attribution, text mining, or training NLP tools.
  8. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.00
    
    Date
    5. 5.2003 14:17:22
  9. Adams, K.C.: Word wranglers : Automatic classification tools transform enterprise documents from "bags of words" into knowledge resources (2003) 0.00
    
    Abstract
    Taxonomies are an important part of any knowledge management (KM) system, and automatic classification software is emerging as a "killer app" for consumer and enterprise portals. A number of companies such as Inxight Software, Mohomine, Metacode, and others claim to interpret the semantic content of any textual document and automatically classify text on the fly. The promise that software could automatically produce a Yahoo-style directory is a siren call not many IT managers are able to resist. KM needs have grown more complex due to the increasing amount of digital information, the declining effectiveness of keyword searching, and heterogeneous document formats in corporate databases. This environment requires innovative KM tools, and automatic classification technology is an example of this new kind of software. These products can be divided into three categories according to their underlying technology - rules-based, catalog-by-example, and statistical clustering. Evolving trends in this market include framing classification as a cyborg (computer- and human-based) activity and the increasing use of extensible markup language (XML) and support vector machine (SVM) technology. In this article, we'll survey the rapidly changing automatic classification software market and examine the features and capabilities of leading classification products.
  10. Golub, K.: Automated subject classification of textual web documents (2006) 0.00
    
    Abstract
    Purpose - To provide an integrated perspective to similarities and differences between approaches to automated classification in different research communities (machine learning, information retrieval and library science), and point to problems with the approaches and automated classification as such. Design/methodology/approach - A range of works dealing with automated classification of full-text web documents are discussed. Explorations of individual approaches are given in the following sections: special features (description, differences, evaluation), application and characteristics of web pages. Findings - Provides major similarities and differences between the three approaches: document pre-processing and utilization of web-specific document characteristics is common to all the approaches; major differences are in applied algorithms, employment or not of the vector space model and of controlled vocabularies. Problems of automated classification are recognized. Research limitations/implications - The paper does not attempt to provide an exhaustive bibliography of related resources. Practical implications - As an integrated overview of approaches from different research communities with application examples, it is very useful for students in library and information science and computer science, as well as for practitioners. Researchers from one community have the information on how similar tasks are conducted in different communities. Originality/value - To the author's knowledge, no review paper on automated text classification attempted to discuss more than one community's approach from an integrated perspective.
  11. Golub, K.; Soergel, D.; Buchanan, G.; Tudhope, D.; Lykke, M.; Hiom, D.: ¬A framework for evaluating automatic indexing or classification in the context of retrieval (2016) 0.00
    
    Abstract
    Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources and enhancing consistency. Although some software vendors and experimental researchers claim the tools can replace manual subject indexing, hard scientific evidence of their performance in operating information environments is scarce. A major reason for this is that research is usually conducted in laboratory conditions, excluding the complexities of real-life systems and situations. The article reviews and discusses issues with existing evaluation approaches such as problems of aboutness and relevance assessments, implying the need to use more than a single "gold standard" method when evaluating indexing and retrieval, and proposes a comprehensive evaluation framework. The framework is informed by a systematic review of the literature on evaluation approaches: evaluating indexing quality directly through assessment by an evaluator or through comparison with a gold standard, evaluating the quality of computer-assisted indexing directly in the context of an indexing workflow, and evaluating indexing quality indirectly through analyzing retrieval performance.
  12. Smiraglia, R.P.; Cai, X.: Tracking the evolution of clustering, machine learning, automatic indexing and automatic classification in knowledge organization (2017) 0.00
    
    Abstract
    A very important extension of the traditional domain of knowledge organization (KO) arises from attempts to incorporate techniques devised in the computer science domain for automatic concept extraction and for grouping, categorizing, clustering and otherwise organizing knowledge using mechanical means. Four specific terms have emerged to identify the most prevalent techniques: machine learning, clustering, automatic indexing, and automatic classification. Our study presents three domain analytical case analyses in search of answers. The first case relies on citations located using the ISKO-supported "Knowledge Organization Bibliography." The second case relies on works in both Web of Science and SCOPUS. Case three applies co-word analysis and citation analysis to the contents of the papers in the present special issue. We observe scholars involved in "clustering" and "automatic classification" who share common thematic emphases. But we have found no coherence, no common activity and no social semantics. We have not found a research front, or a common teleology within the KO domain. We also have found a lively group of authors who have succeeded in submitting papers to this special issue, and their work quite interestingly aligns with the case studies we report. There is an emphasis on KO for information retrieval; there is much work on clustering (which involves conceptual points within texts) and automatic classification (which involves semantic groupings at the meta-document level).
  13. Borko, H.: Research in computer based classification systems (1985) 0.00
    
    Abstract
    The selection in this reader by R. M. Needham and K. Sparck Jones reports an early approach to automatic classification that was taken in England. The following selection reviews various approaches that were being pursued in the United States at about the same time. It then discusses a particular approach initiated in the early 1960s by Harold Borko, at that time Head of the Language Processing and Retrieval Research Staff at the System Development Corporation, Santa Monica, California and, since 1966, a member of the faculty at the Graduate School of Library and Information Science, University of California, Los Angeles. As was described earlier, there are two steps in automatic classification, the first being to identify pairs of terms that are similar by virtue of co-occurring as index terms in the same documents, and the second being to form equivalence classes of intersubstitutable terms. To compute similarities, Borko and his associates used a standard correlation formula; to derive classification categories, where Needham and Sparck Jones used clumping, the Borko team used the statistical technique of factor analysis. The fact that documents can be classified automatically, and in any number of ways, is worthy of passing notice. Worthy of serious attention would be a demonstration that a computer-based classification system was effective in the organization and retrieval of documents. One reason for the inclusion of the following selection in the reader is that it addresses the question of evaluation. To evaluate the effectiveness of their automatically derived classification, Borko and his team asked three questions. The first was Is the classification reliable? in other words, could the categories derived from one sample of texts be used to classify other texts? Reliability was assessed by a case-study comparison of the classes derived from three different samples of abstracts. The not-so-surprising conclusion reached was that automatically derived classes were reliable only to the extent that the sample from which they were derived was representative of the total document collection. The second evaluation question asked whether the classification was reasonable, in the sense of adequately describing the content of the document collection. The answer was sought by comparing the automatically derived categories with categories in a related classification system that was manually constructed. Here the conclusion was that the automatic method yielded categories that fairly accurately reflected the major area of interest in the sample collection of texts; however, since there were only eleven such categories and they were quite broad, they could not be regarded as suitable for use in a university or any large general library. The third evaluation question asked whether automatic classification was accurate, in the sense of producing results similar to those obtainable by human classifiers. When using human classification as a criterion, automatic classification was found to be 50 percent accurate.
  14. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.00
    
    Date
    22. 8.2009 12:54:24
  15. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.00
    
    Pages
    S.1-22
  16. Dubin, D.: Dimensions and discriminability (1998) 0.00
    
    Date
    22. 9.1997 19:16:05
  17. Automatic classification research at OCLC (2002) 0.00
    
    Date
    5. 5.2003 9:22:09
  18. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.00
    
    Date
    22. 7.2006 16:24:52
  19. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.00
    
    Date
    22. 9.2008 18:31:54
  20. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.00
    
    Date
    22. 3.2009 19:11:54
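
The relevance figure after each heading (0.57 for the first hit, 0.02 for the second, and so on) is a Lucene ClassicSimilarity (TF-IDF) score: each matching term contributes a fieldWeight of tf x idf x fieldNorm, the contributions are summed and normalized, and the sum is multiplied by a coordination factor for the fraction of query terms matched. A minimal sketch of that arithmetic, assuming Lucene's standard ClassicSimilarity formulas (the helper names are mine, not part of any catalog API):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    """Inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def field_weight(freq: float, doc_freq: int, max_docs: int,
                 field_norm: float) -> float:
    """fieldWeight = tf * idf * fieldNorm, with tf = sqrt(termFreq)."""
    return math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm

def coord(overlap: int, max_overlap: int) -> float:
    """Coordination factor: fraction of query clauses that matched."""
    return overlap / max_overlap

# The term "computer" (docFreq=3109 in a 44218-document index) occurring
# twice in the first result (fieldNorm=0.046875) yields a fieldWeight of
# roughly 0.2423, and that result's normalized term sum of 0.6621359
# times coord(6/7) gives its displayed score of roughly 0.5675.
w = field_weight(2.0, 3109, 44218, 0.046875)
score = 0.6621359 * coord(6, 7)
```

These three factors reproduce the displayed scores to the reported precision, which is why rarer terms (higher idf) and shorter fields (higher fieldNorm) dominate the ranking above.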